paper_id: stringlengths 19-21
paper_title: stringlengths 8-170
paper_abstract: stringlengths 8-5.01k
paper_acceptance: stringclasses (18 values)
meta_review: stringlengths 29-10k
label: stringclasses (3 values)
review_ids: sequence
review_writers: sequence
review_contents: sequence
review_ratings: sequence
review_confidences: sequence
review_reply_tos: sequence
iclr_2018_rk6qdGgCZ
Fixing Weight Decay Regularization in Adam
We note that common implementations of adaptive gradient algorithms, such as Adam, limit the potential benefit of weight decay regularization, because the weights do not decay multiplicatively (as would be expected for standard weight decay) but by an additive constant factor. We propose a simple way to resolve this issue by decoupling weight decay and the optimization steps taken w.r.t. the loss function. We provide empirical evidence that our proposed modification (i) decouples the optimal choice of weight decay factor from the setting of the learning rate for both standard SGD and Adam, and (ii) substantially improves Adam's generalization performance, allowing it to compete with SGD with momentum on image classification datasets (on which it was previously typically outperformed by the latter). We also demonstrate that longer optimization runs require smaller weight decay values for optimal results and introduce a normalized variant of weight decay to reduce this dependence. Finally, we propose a version of Adam with warm restarts (AdamWR) that has strong anytime performance while achieving state-of-the-art results on CIFAR-10 and ImageNet32x32. Our source code will become available after the review process.
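As an illustration of the decoupling described in this abstract, here is a minimal sketch (not the authors' released code; the variable names, the plain-NumPy form, and the omission of the paper's schedule multiplier are assumptions) contrasting L2 regularization with decoupled weight decay for SGD and Adam:

```python
import numpy as np

def sgd_l2_step(w, grad, lr, lam):
    # L2 regularization: the penalty gradient lam * w is folded into the loss
    # gradient, so the shrinkage is tied to the learning rate and, for adaptive
    # methods, to the preconditioner.
    return w - lr * (grad + lam * w)

def sgdw_step(w, grad, lr, lam):
    # Decoupled weight decay (SGDW): weights shrink multiplicatively,
    # independently of the gradient-based step.
    return w - lr * grad - lam * w

def adamw_step(w, grad, m, v, t, lr, lam, b1=0.9, b2=0.999, eps=1e-8):
    # AdamW-style update: the Adam step uses only the loss gradient, while the
    # decay term lam * w is applied outside the preconditioner, so every weight
    # is shrunk by the same factor regardless of its gradient variance.
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps) - lam * w
    return w, m, v
```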
rejected-papers
This paper generated quite a bit of controversy among reviewers. The main claim of the paper is that Adam and related optimizers are broken because their "weight decay" regularization is not actually weight decay. It proposes to modify Adam to decay all weights the same regardless of the gradient variances. Calling Adam's weight decay mechanism a mistake seems very far-fetched to me. Neural net optimization researchers are well aware of the connection between weight decay and L2 regularization and the fact that they don't correspond in preconditioned methods. L2 regularization is basically the only justification I have heard for weight decay, and despite rejecting this interpretation, the paper does not provide an alternative justification. Decoupling the optimization from the cost function is a well-established principle. This abstraction barrier is not completely clean (e.g. gradient noise has well-known regularization effects), and the experiments of this paper perhaps provide evidence that the choices may be coupled in this case. This is an interesting finding, and probably worth following up on. However, the paper seems to sweep the "decoupling optimization and cost" issue under the carpet and take for granted that the decay rate is what should be held fixed. All three reviewers found the presentation to be misleading, and I would agree with them. While there may be an interesting contribution here, I cannot endorse the paper as-is.
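The point that L2 regularization and weight decay coincide only for non-preconditioned SGD can be made explicit with a two-line derivation (standard material added here for context; Adam's momentum is ignored for brevity):

```latex
\begin{align*}
\text{SGD on } L(w) + \tfrac{\lambda}{2}\lVert w\rVert^2:\quad
  w_{t+1} &= w_t - \alpha\big(\nabla L(w_t) + \lambda w_t\big)
           = (1-\alpha\lambda)\,w_t - \alpha\,\nabla L(w_t),\\
\text{preconditioned, } M_t = \operatorname{diag}\!\big(1/(\sqrt{\hat v_t}+\epsilon)\big):\quad
  w_{t+1} &= w_t - \alpha M_t\big(\nabla L(w_t) + \lambda w_t\big)
           = (I-\alpha\lambda M_t)\,w_t - \alpha M_t\,\nabla L(w_t).
\end{align*}
```

In the first case the L2 penalty acts as exactly multiplicative weight decay; in the second the shrinkage factor differs per coordinate, which is the mismatch the paper and this meta-review are debating.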
train
[ "SJs7uYYeM", "rJvLpvdez", "HJX7HvOez", "HyXvNUTQG", "rJ3e48aXM", "BJ5T7IpXM", "BJJu7LaXM", "ryLEQIp7f", "S1cWEZj7M", "H1zlFEcxM", "BJNHyGYlG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "public", "author", "public" ]
[ "At the heart of the paper, there is a single idea: to decouple the weight decay from the number of steps taken by the optimization process (the paragraph at the end of page 2 is the key to the paper). This is an important and largely overlooked area of implementation and most off-the-shelf optimization algorithms, unfortunately, miss this point, too. I think that the proposed implementation should be taken seriously, especially in conjunction with the discussion that has been carried out with the work of Wilson et al., 2017 (https://arxiv.org/abs/1705.08292).\n\nThe introduction does a decent job explaining why it is necessary to pay attention to the norm of the weights as the training progresses within its scope. However, I would like to add a couple more points to the discussion: \n- \"Optimal weight decay is a function (among other things) of the total number of epochs / batch passes.\" in principle, it is a function of weight updates. Clearly, it depends on the way the decay process is scheduled. However, there is a bad habit in DL where time is scaled by the number of epochs rather than the number of weight updates which sometimes lead to misleading plots (for instance, when comparing two algorithms with different batch sizes).\n- Another ICLR 2018 submission has an interesting take on the norm of the weights and the algorithm (https://openreview.net/forum?id=HkmaTz-0W&noteId=HkmaTz-0W). Figure 3 shows the histograms of SGD/ADAM with and without WD (the *un-fixed* version), and it clearly shows how the landscape appear misleadingly different when one doesn't pay attention to the weight distribution in visualizations. \n- In figure 2, it appears that the training process has three phases, an initial decay, a steady progress, and a final decay that is more pronounced in AdamW. This final decay also correlates with the better test error of the proposed method. This third part also seems to correspond to the difference between Adam and AdamW through the way they branch out after following similar curves. One wonders what causes this branching and whether the key the desired effects are observed at the bottom of the landscape.\n- The paper concludes with \"Advani & Saxe (2017) analytically showed that in the limited data regime of deep networks the presence of eigenvalues that are zero forms a frozen subspace in which no learning occurs and thus smaller (e.g., zero) initial weight norms should be used to achieve best generalization results.\" Related to this there is another ICLR 2018 submission (https://openreview.net/forum?id=rJrTwxbCb), figure 1 shows that the eigenvalues of the Hessian of the loss have zero forms at the bottom of the landscape, not at the beginning. Back to the previous point, maybe that discussion should focus on the second and third phases of the training, not the beginning. \n- Finally, it would also be interesting to discuss the relation of the behavior of the weights at the last parts of the training and its connection to pruning. \n\nI'm aware that one can easily go beyond the scope of the paper by adding more material. Therefore, it is not completely reasonable to expect all such possible discussions to take place at once. The paper as it stands is reasonably self-contained and to the point. Just a minor last point that is irrelevant to the content of the work: The slash punctuation mark that is used to indicate 'or' should be used without spaces as in 'epochs/batch'.\n\nEdit: Thanks very much for the updates and refinements. 
I stand by my original score and would like to indicate my support for this style of empirical work in scientific conferences.", "The paper presents an alternative way to implement weight decay in Adam. Empirical results are shown to support this idea.\n\nThe idea presented in the paper is interesting, but I have some concerns about it.\n\nFirst, the authors argue that the weight decay should be implemented in a way different from the minimization of an L2 regularization. This seems a very weird statement to me. In fact, it is easy to see that what the authors propose is to minimize two different objective functions in SGDW and AdamW! I am not even sure how I should interpret what they propose. The fact is that SGD and Adam are optimization algorithms, so we cannot just change the update rule in the same way in both algorithms and expect them to behave in the same way just because the added terms have the same shape!\n\nSecond, the equation (5) that re-normalizes the weight decay parameter has been obtained on one dataset, as the authors admit, and tested only on another one. I am not sure this is enough to be considered as a scientific proof.\n\nAlso, the empirical experiments seem to use the cosine annealing of the learning rate. This means that the only thing the authors proved is that their proposed change yields better results when used with a particular setting of the cosine annealing. What happens in the other cases?\n\nTo summarize, I think the idea is interesting but the paper might not be ready to be presented in a scientific conference.", "This paper investigates weight decay issues that lie in the SGD variants, especially Adam. Current implementations of adaptive gradient algorithms implicitly contain a crucial flaw, by which weight decay in these methods does not correspond to L2 regularization. To fix this issue, this paper proposes the decoupling method between weight decay and the gradient-based update.\n\nOverall, this paper is well-written and contains sufficient references to give an overview of recent adaptive gradient-based methods for DNNs. In addition, this paper investigates the crucial issue in the recent adaptive gradient methods and finds the problem in weight decay. This is an interesting finding. And the proposed method to fix this issue is simple and reasonable. Their experimental results to validate the effectiveness of their proposed method are well-organized. In particular, the investigation on hyperparameter spaces shows the strong advantage of the proposed methods.", "We thank all reviewers for their positive evaluation and their valuable comments. We've uploaded a revision to address the issues raised and replied to reviewers and anonymous comments individually in the OpenReview forum. \nWe are glad that the reviewers agree that our work is novel, simple and might provide useful insights. We agree that some of our experimental findings need to be explored on a wider range of datasets and tasks. Nevertheless, we hope that our paper provides a useful bit of information to better understand regularization of deep neural networks. \n\nThank you again for your reviews!\n", "“For practical purposes, I would like to know whether it's worth attempting to use SGDW or SGDWR rather than standard SGD.” \n\nSGDW is worth using if you consider that the search space of hyperparameters of SGDW shown in Figure 1 is easier to search than the one of SGD shown in the same figure. We consider this to be the case due to the more separable nature of that space as described in the paper. 
Another reason to prefer SGDW to SGD is the proposed normalized weight decay that allows you to simplify the search for the weight decay factor suitable to different computational budgets. Please compare the first two rows of SuppFigure 3: the normalized weight decay factor of 1/20 is suitable for 25 and 400 epochs, in contrast to the raw weight decay factor whose optimal value changes by a factor of about 4.\nAs you can see in Figure 1, despite the fact that it is easier to tune SGDW than SGD, the best validation errors that can be obtained by both algorithms are comparable. Therefore, we only claim that SGDW “simplifies the problem of hyperparameter tuning in SGD” and did not run SGD for Figure 3 which would match the results of SGDW (similarly to Figure 1), i.e., reproduce 2.86% of Gastaldi. However, due to a request made to us earlier, we have included an additional line for the ImageNet32x32 experiment (see Figure 3 right): results for original Adam (with cosine annealing). Similarly to the results on CIFAR-10 (see Figure 3 left), the best results of Adam (out of a set of weight decay factors) were substantially worse than the ones of AdamW. \n\n“I also note that Figure 3 suggests that Adam variants seems always inferior to comparison vanilla SGD methods, which also leads to the question of why bother \"fixing\" Adam if SGD variants are better and simpler choices?”\n\nPlease note that Figure 3 shows that the proposed “fixed” Adam drastically reduces the gap between SGD on CIFAR-10 and performs equally well (no longer inferior) on ImageNet32x32. As mentioned at the end of our introduction, our motivation was to contribute to the goal that “practitioners do not need to switch between Adam and SGD anymore, which in turn should help to reduce the common issue of selecting dataset/task-specific training algorithms and their hyperparameters”. “Fixing” Adam for the considered image classification datasets where its gap with SGD is significant might be a good indication of progress towards achieving the above-mentioned goal. \n\n“was w_t constant between restarts and set according to equation (5), and if so what w_norm was used? In this case, what value of \\alpha_t was used?” \nw_norm is the normalized weight decay hyperparameter, set to the value indicated in the plots (e.g., as 0.025). It is used to derive w_t according to equation (5). Since all inputs of equation (5) are constant between restarts in our setup, w_t is constant as well. Please note that if batch size would change during the run (e.g., increase), then w_t would change as well. \n\nalpha_t is constant and corresponds to the initial learning rate, then it is multiplied by the schedule multiplier eta_t which includes cosine annealing and restarts\n", "We agree that \"number of epochs/batch passes\" should be changed to \"number of batch passes/weight updates\" and fixed this (see Section 1). We also included the following text in Section 3:\n\n\"We note a recent relevant observation of \\cite{li2017visualizing} who demonstrated that a smaller batch size (for the same total number of epochs) leads to the shrinking effect of weight decay being more pronounced. Here, we propose to address that effect with normalized weight decay.\"\n\nFollowing the insight that you provided, we included the following text in our discussion section.\n\n\"The results shown in Figure 2 suggest that Adam and AdamW follow very similar curves most of the time until the third phase of the run where AdamW starts to branch out to outperform Adam. 
As pointed out by an anonymous reviewer, it would be interesting to investigate what causes this branching and whether the desired effects are observed at the bottom of the landscape. One could investigate this using the approach of \\cite{im2016empirical} to switch from Adam to AdamW at a given epoch index. Since it is quite possible that the effect of regularization is not that pronounced in the early stages of training, one could think of designing a version of Adam which exploits this by being fast in the early stages and well-regularized in the late stages of training. The latter might be achieved with a custom schedule of the weight decay factor.\"\n", "Following your suggestion, we extended Figure 1 to show the results for much larger weight decay factors. The results confirmed our expectations that the original figures included the basin of optimal hyperparameter settings of the considered experimental setup. You rightly pointed out that a sentence describing Figure 1 was confusing; we have fixed the sentence to provide a more illustrative example. ", "“the equation (5) that re-normalize the weight decay parameter as been obtained on one dataset, as the author admit, and tested only on another one.”\nWhile we don’t have evidence that the sqrt scaling we propose is optimal, we believe that *some* scaling should be considered when the total number of batch passes changes (due to the change of the total number of epochs or/and batch size). It is not (computationally) straightforward to investigate the optimal scaling because it is coupled with other hyperparameters. We note, however, that our focused study on CIFAR-10 and ImageNet32x32 represents the first attempt in this direction, and that it at least demonstrates that in these two cases, sqrt scaling is much better than the previous default (no scaling). \n\n“Also, the empirical experiments seem to use the cosine annealing of the learning rate. This means that the only thing the authors proved is that their proposed change yields better results when used with a particular setting of the cosine annealing. What happens in the other cases?”\n\nWe note that we experimented with and presented a set of results/figures with different settings of cosine annealing (varying its initial learning rate). As discussed in Section 2, the separability effect provided by the proposed decoupling does not rely on cosine annealing. In response to the reviewer’s comment, we have now also included SuppFigure 5 (for the moment, unfortunately, it is not of the greatest possible resolution due to its high computational cost) which shows the results without cosine annealing. We included the following text in section 5.3.\n\n\"We investigated whether the use of much longer runs (1800 epochs) of the original Adam with L2 regularization makes the use of cosine annealing unnecessary. The results of Adam without cosine annealing (i.e., with fixed learning rate) for a 4 by 4 logarithmic grid of hyperparameter settings are given in SuppFigure 5 in the supplementary material. Even after taking into account the low resolution of the grid, the results appear to be at best comparable to the ones obtained with AdamW with 18 times less epochs and a smaller network (see SuppFigure 2). 
These results are not very surprising given Figure 1 (which demonstrates the effectiveness of AdamW) and SuppFigure 2 (which demonstrates the necessity to use some learning rate schedule such as cosine annealing).\"\n\n\nWe agree that the impact of weight decay on the objective function should be mentioned. We included the following text in our discussion section.\n\n\"In this paper, we argue that the popular interpretation that weight decay = L2 regularization is not precise. Instead, the difference between the two leads to the following important consequences. Two algorithms as different as SGD and Adam will exhibit different effective rates of weight decay even if the same regularization coefficient is used to include L2 regularization in the objective function. Moreover, two algorithms as different as SGDW and AdamW will optimize two effectively different objective functions even if the same weight decay factor is used. Our findings suggest that the original Adam algorithm with L2 regularization affects effective rates of weight decay in a way that precludes effective regularization, and that effective regularization is achievable by decoupling the weight decay.\"\n", "This is a very interesting paper and I think optimising weight decay is an important under-explored area.\n\nHowever, I am left in doubt as to the value of the contribution, possibly only because some additional clarity is needed.\n\nFor practical purposes, I would like to know whether it's worth attempting to use SGDW or SGDWR rather than standard SGD. It's not clear from Figure 3 if it is worth the effort, because there is no direct comparison between your method and standard SGD methods, yet there is a comparison with standard ADAM. Why the omission?\n\nI also note that Figure 3 suggests that Adam variants seem always inferior to the vanilla SGD methods used for comparison, which also leads to the question of why bother \"fixing\" Adam if SGD variants are better and simpler choices? \n\nIt would also be good to know if the W methods work well for more common network variants like standard residual networks.\n\nFinally, due to the efforts to make the algorithms as general as possible, I was left in some confusion about the precise choices of parameters used in the experiments. For example, for the SGDWR results, was w_t constant between restarts and set according to equation (5), and if so what w_norm was used? In this case, what value of \alpha_t was used?", "Thanks for the note! \n\nEven if the shape of the hyperparameter space were to change drastically outside of the current range, the claim would be correct for SGD because the already presented results alone make it impossible to first fix the learning rate LR to any value from the range and then expect that the best weight decay found for that LR value would be nearly-optimal for all other possible values of LR. However, we agree that the example given in the sentence is unfortunate because it asks the reader to extrapolate instead of dealing with the data that is shown. It is confusing and we will correct that with a better example whose results are shown in Figure 1: when LR=0.5, the optimal weight decay factor is 1/8 * 0.001 but it is not optimal for all other settings of LR. 
\n\nRegarding the values outside of the current range, it seems very unlikely that better results for LR>0.2 exist given the isolines shown in Figure 1 (note the elliptic shape and that the top results for LR=0.2 are worse than for LR=0.1) and that none of the papers with ResNets on CIFAR-10 (with standard settings of batch size, etc.) we are aware of use LR>0.2. In fact, since momentum-SGD is a standard baseline, its hyperparameters for ResNets on CIFAR-10 have been heavily tuned by researchers so that LR often lies in [0.05, 0.1] that matches the best region of momentum-SGD shown in Figure 1.\n\nThank you for helping to avoid possible confusions: we will correct the sentence and extend Figure 1 of momentum-SGD by an additional column with LR=0.4 and even larger LR if necessary.", "In Figure 1 top, the blue region of SGDW is fully visible in the plot. But for SGD, the blue region gets chopped off the edge of the plot. This seems to make a fair comparison difficult. In particular, the following statement seems questionable, since it is not clear what happens for SGD outside of the visible region in the plot.\n\n\"even if the learning rate is not well tuned yet (e.g., consider the value of 1/1024 in Figure 1, top right), leaving it fixed and only optimizing the weight decay factor would yield a good value (of 1/4*0.001). This is not the case for the original SGD shown in Figure 1 (top left).\"" ]
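The author responses in this thread repeatedly refer to the "normalized weight decay" of equation (5), a sqrt scaling with the total number of batch passes. The sketch below is a hypothetical reconstruction based only on the discussion above; the exact form and argument names may differ from the paper.

```python
import math

def raw_weight_decay(w_norm, batch_size, num_train_points, num_epochs):
    # The raw decay factor w_t is derived from a normalized factor w_norm and
    # the total number of weight updates, so the same w_norm can be reused
    # across different run lengths and batch sizes (cf. equation (5)).
    num_updates = num_train_points / batch_size * num_epochs
    return w_norm * math.sqrt(1.0 / num_updates)
```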
[ 7, 4, 8, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, 3, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_rk6qdGgCZ", "iclr_2018_rk6qdGgCZ", "iclr_2018_rk6qdGgCZ", "iclr_2018_rk6qdGgCZ", "S1cWEZj7M", "SJs7uYYeM", "H1zlFEcxM", "rJvLpvdez", "iclr_2018_rk6qdGgCZ", "BJNHyGYlG", "iclr_2018_rk6qdGgCZ" ]
iclr_2018_BJaU__eCZ
Hallucinating brains with artificial brains
Human brain function, as measured by functional magnetic resonance imaging (fMRI), exhibits a rich diversity. In response, understanding the individual variability of brain function and its association with behavior has become one of the major concerns in modern cognitive neuroscience. Our work is motivated by the view that generative models provide a useful tool for understanding this variability. To this end, this manuscript presents two novel generative models trained on real neuroimaging data which synthesize task-dependent functional brain images. Brain images are high dimensional tensors which exhibit structured spatial correlations. Thus, both models are 3D conditional Generative Adversarial Networks (GANs) which apply Convolutional Neural Networks (CNNs) to learn an abstraction of brain image representations. Our results show that the generated brain images are diverse, yet task dependent. In addition to qualitative evaluation, we utilize the generated synthetic brain volumes as additional training data to improve downstream fMRI classifiers (also known as decoding, or brain reading). Our approach achieves significant improvements for a variety of datasets, classification tasks and evaluation scores. Our classification results provide a quantitative evaluation of the quality of the generated images, and also serve as an additional contribution of this manuscript.
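For readers unfamiliar with label-conditioned 3D GANs, a minimal generator sketch is shown below. It is only illustrative: the layer sizes, the conditioning mechanism, the number of classes, and the 32x32x32 output shape are assumptions, not the architecture used in the paper (whose volumes are roughly 26x31x22).

```python
import torch
import torch.nn as nn

class Conditional3DGenerator(nn.Module):
    """Minimal sketch of a label-conditioned 3D deconvolutional generator."""
    def __init__(self, z_dim=128, n_classes=19, base=64):
        super().__init__()
        self.embed = nn.Embedding(n_classes, z_dim)
        self.net = nn.Sequential(
            # project the conditioned latent code to a small 3D feature volume
            nn.ConvTranspose3d(z_dim, base * 4, kernel_size=4, stride=1),
            nn.BatchNorm3d(base * 4), nn.ReLU(True),
            nn.ConvTranspose3d(base * 4, base * 2, 4, stride=2, padding=1),
            nn.BatchNorm3d(base * 2), nn.ReLU(True),
            nn.ConvTranspose3d(base * 2, base, 4, stride=2, padding=1),
            nn.BatchNorm3d(base), nn.ReLU(True),
            nn.ConvTranspose3d(base, 1, 4, stride=2, padding=1),
            nn.Tanh(),  # one-channel synthetic brain volume
        )

    def forward(self, z, labels):
        h = z * self.embed(labels)              # simple multiplicative conditioning
        h = h.view(h.size(0), -1, 1, 1, 1)
        return self.net(h)

# volumes = Conditional3DGenerator()(torch.randn(8, 128), torch.randint(0, 19, (8,)))
# -> shape (8, 1, 32, 32, 32); matching the paper's 26x31x22 resolution would
#    require cropping or different strides.
```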
rejected-papers
The submission proposes to use GANs to learn a generative model of fMRI scans that can then be used for downstream classification tasks. Although there was some appreciation from the reviewers of the approach, there were several important remaining concerns: 1) From Reviewer 1: "Generating high resolution images with GANs even on faces for which there is almost infinite data is still a challenge. Here a few thousand data points are used. So it raises two concerns: first, is it enough?" and 2) R1 and R2 both raised concerns about the significance of the improvements. Looking through the tables, there are many reported differences that are reasonably small, and no error bars or significance are given. This should be a requirement for an empirical paper about fMRI.
train
[ "rJirqFBgz", "B1LfYs_gf", "rJw8PuhxM", "SJx6w-67M", "S1cT_-6XG", "S1FBv-6QG", "By-5yr6mf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "Quality\n\nThis is a very clear contribution which elegantly demonstrates the use of extensions of GAN variants in the context of neuroimaging.\n\nClarity\n\nThe paper is well-written. Methods and results are clearly described. The authors state significant improvements in classification using generated data. These claims should be substantiated with significance tests comparing classification on standard versus augmented datasets.\n\nOriginality\n\nThis is one of the first uses of GANs in the context of neuroimaging. \n\nSignificance \n\nThe approach outlined in this paper may spawn a new research direction.\n\nPros\n\nWell-written and original contribution demonstrating the use of GANs in the context of neuroimaging.\n\nCons\n\nThe focus on neuroimaging might be less relevant to the broader AI community.", "This paper proposes to use 3D conditional GAN models to generate\nfMRI scans. Using the generated images, paper reports improvement\nin classification accuracy on various tasks.\n\nOne claim of the paper is that a generative model of fMRI\ndata can help to caracterize and understand variability of scans\nacross subjects.\n\nArticle is based on recent works such as Wasserstein GANs and AC-GANs\nby (Odena et al., 2016).\n\nDespite the rich literature of this recent topic the related work\nsection is rather convincing.\n\nModel presented extends IW-GAN by using 3D convolution and also\nby supervising the generator using sample labels.\n\nMajor:\n\n- The size of the generated images is up to 26x31x22 which is limited\n(about half the size of the actual resolution of fMRI data). As a\nconsequence results on decoding learning task using low resolution\nimages can end up worse than with the actual data (as pointed out).\nWhat it means is that the actual impact of the work is probably limited.\n\n- Generating high resolution images with GANs even on faces for which\nthere is almost infinite data is still a challenge. Here a few thousand\ndata points are used. So it raises too concerns: First is it enough?\nUsing so-called learning curves is a good way to answer this. Second\nis what are the contributions to the state-of-the-art of the 2\nmethods introduced? Said differently, as there\nis no classification results using images produced by an another\nGAN architecture it is hard to say that the extra complexity\nproposed here (which is a bit contribution of the work) is actually\nnecessary.\n\nMinor:\n\n- Fonts in figure 4 are too small.\n", "The work is motivated by a real challenge of neuroimaging analysis: how to increase the amount of data to support the learning of brain decoding.\nThe contribution seems to mix two objectives: on one hand to prove that it is possible to do data augmentation for fMRI brain decoding, on the other hand to design (or better to extend) a new model (to be more precise two models).\nConcerning the first objective the empirical results do not provide meaningful support that the generative model is really effective. The improvement is really tiny and a statistical test (not included in the analysis) probably wouldn't pass a significant threshold. This analysis is missing a straw man. It is not clear whether the difference in the evaluation measures is related to the greater number of examples or by the specific generative model.\nConcerning the contribution of the model, one novelty is the conditional formulation of the discriminator. The design of the empirical evaluation doesn't address the analysis of the impact of this new formulation. 
It is not clear whether the supposed improvement is related to the conditional formulation. \nFigure 3 and Figure 5 illustrate the brain maps generated for Collection 1952 with ICW-GAN and for collection 503 with ACD-GAN. It is not clear how the authors operated the choices of these figures. From the perspective of neuroscience a reader, would expect to look at the brain maps for the same collection with different methods. The pairwise brain maps would support the interpretation of the generated data. It is worthwhile to remember that the location of brain activations is crucial to detect whether the brain decoding (classification) relies on artifacts or confounds.\n\nMinor comments\n- typos: \"a first application or this\" => \"a first application of this\" (p.2)\n- \"qualitative quality\" (p.2)", "Thank you for the clear review.\n\n- w.r.t. size of generated brain maps: Decreasing the resolution of the imaging data is common practice in the neuroimaging analysis, e.g., it is built into the Nilearn python package. Interestingly and in contrast to the reviewer comment, we observe benefits by including synthetic data with higher resolution.\n\n- w.r.t. effectiveness of generative model(s): To highlight the effectiveness of the proposed models, we have added additional results for two generative models to the revised manuscript, the AC-GAN (Tab. 4) and Gaussian Mixture Model (Tab. 6). Our results show that both AC-GAN and GMM achieve much worse results. To further evaluate the generative model, we experimented with using only generated data to train the classifiers (Fig. 8). Our results in the revised manuscript suggest that using several hundred artificial images per class is comparable to using real images.\n\n- w.r.t. stability of GAN: To demonstrate the stability we added training loss curves to the revised manuscript (Fig. 7). We did not observe any issues which we attribute to the stability of Wasserstein variants. \n", "Thank you for your strong review. \n\n- w.r.t. relevance to the AI community: We think neuroscience is an integral part of the larger AI community, benefiting both sides when seeking inspiration. Further, we expect that many of the techniques we propose are directly applicable to more common computer vision tasks.", "Thank you for your valuable comments on our paper. \n\n- w.r.t. goals of the manuscript: From a neuroscience perspective, the paper develops a mechanism addressing two concerns: (i) how to generate synthetic samples which help address the shortage of data that is common in neuroimaging, and can be used to analyze inter-individual variability, among other applications; (ii) how to evaluate artificially generated neuroimaging data. \n\n- w.r.t. effectiveness of generative models: To highlight the effectiveness of the proposed models, we have added additional results for two generative models to the revised manuscript, the AC-GAN (Tab. 4) and Gaussian Mixture Model (Tab. 6). Our results show that both AC-GAN and GMM achieve much worse results. To further evaluate the generative model, we experimented with using only generated data to train the classifiers (Fig.8). Our results in the revised manuscript suggest that using several hundred fake images per class is comparable to using real images.\n\n- w.r.t. improvements to classification performance: We point the reviewer to Tab. 4 for a comparison of different GAN architectures. Also, we mention that for the results reported in Tab. 
1-3, both SVM and deep net classifiers are compared with and without artificially generated data. We think this clearly demonstrates the benefits of adding data obtained from GANs. The high performance of the baselines, resulting from careful tuning, are easily on par with typically reported numbers in the literature. Moreover, we point out that the reported improvements are consistent across a variety of metrics. This experimental evaluation suggests that the reported improvements aren’t small and are hard to achieve. In Tab. 5 of the revised manuscript, we provide the variance of the cross-validated performance. The small variances suggest the significance of the performance differences. \n\n- w.r.t. illustrated brain maps and brain decoding: We refer the reviewer to Fig. 5 in the manuscript and Fig. 11-13 in the supplementary material for additional results. We have clarified details as requested, for instance, Fig. 5 and Fig. 13 show the synthetic images with label ‘7’ in collection 503 by the ACD-GAN and ICW-GAN respectively.\n\n", "We thank all reviewers for their constructive feedback and address their comments in the following. We will release all code soon. \n" ]
[ 8, 6, 5, -1, -1, -1, -1 ]
[ 5, 4, 3, -1, -1, -1, -1 ]
[ "iclr_2018_BJaU__eCZ", "iclr_2018_BJaU__eCZ", "iclr_2018_BJaU__eCZ", "B1LfYs_gf", "rJirqFBgz", "rJw8PuhxM", "iclr_2018_BJaU__eCZ" ]
iclr_2018_Sy1f0e-R-
An empirical study on evaluation metrics of generative adversarial networks
Despite the widespread interest in generative adversarial networks (GANs), few works have studied the metrics that quantitatively evaluate GANs' performance. In this paper, we revisit several representative sample-based evaluation metrics for GANs, and address the important problem of \emph{how to evaluate the evaluation metrics}. We start with a few necessary conditions for metrics to produce meaningful scores, such as distinguishing real from generated samples, identifying mode dropping and mode collapsing, and detecting overfitting. Then with a series of carefully designed experiments, we are able to comprehensively investigate existing sample-based metrics and identify their strengths and limitations in practical settings. Based on these results, we observe that kernel Maximum Mean Discrepancy (MMD) and the 1-Nearest-Neighbour (1-NN) two-sample test seem to satisfy most of the desirable properties, provided that the distances between samples are computed in a suitable feature space. Our experiments also unveil interesting properties about the behavior of several popular GAN models, such as whether they are memorizing training samples, and how far these state-of-the-art GANs are from perfect.
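A compact sketch of the two metrics this abstract recommends, computed on feature vectors (e.g. activations of a pretrained ResNet) rather than pixels. The biased V-statistic MMD estimator and the pooled-sample bandwidth heuristic used here are simplifications; see the author responses below for the choices actually made.

```python
import numpy as np

def kernel_mmd(feat_x, feat_y, sigma=None):
    """Biased (V-statistic) estimate of squared kernel MMD with a Gaussian
    kernel, computed on feature vectors of shape (n, d)."""
    z = np.vstack([feat_x, feat_y])
    d2 = ((z[:, None, :] - z[None, :, :]) ** 2).sum(-1)
    if sigma is None:
        # bandwidth heuristic: mean pairwise distance of the pooled samples
        # (the authors set it from the training data; this is a simplification)
        sigma = np.sqrt(d2[np.triu_indices_from(d2, k=1)]).mean()
    k = np.exp(-d2 / (2 * sigma ** 2))
    n = len(feat_x)
    kxx, kyy, kxy = k[:n, :n], k[n:, n:], k[:n, n:]
    return kxx.mean() + kyy.mean() - 2 * kxy.mean()

def one_nn_accuracy(feat_real, feat_gen):
    """Leave-one-out 1-NN two-sample test: accuracy near 0.5 means the real
    and generated feature distributions are indistinguishable to this test."""
    z = np.vstack([feat_real, feat_gen])
    labels = np.array([1] * len(feat_real) + [0] * len(feat_gen))
    d2 = ((z[:, None, :] - z[None, :, :]) ** 2).sum(-1).astype(float)
    np.fill_diagonal(d2, np.inf)   # exclude each sample from its own neighbors
    pred = labels[d2.argmin(axis=1)]
    return (pred == labels).mean()
```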
rejected-papers
The problem addressed here is an important one: What is a good evaluation metric for generative models? A good selection of popular metrics are analyzed for their appropriateness for model selection of GANs. Two popular approaches are recommended: the kernel Maximum Mean Discrepancy (MMD) and the 1-Nearest-Neighbour (1-NN) two-sample test. This seems reasonable, but the present work was not recommended for acceptance by 2 reviewers who raised valid concerns. From a readability perspective, it would be nice to simply list the answer to question (1) directly in the introduction. One must read more than a few pages to get to the answer of why the metrics that are advocated were picked. It need not read like a mystery. R4: "The evaluations rely on using a pre-trained imagenet model as a representation. The authors point out that different architectures yield similar results for their analysis, however it is not clear how the biases of the learned representations affect the results. The use of learned representations needs more rigorous justification" R2: "First, it only considers a single task for which GANs are very popular. Second, it could benefit from a deeper (maybe theoretical analysis) of some of the questions." - the first point of which is also related to a concern of R4. Given the overall high selectivity of ICLR, the present submission falls short.
train
[ "HJ_M-jdBG", "rkyzL1NHz", "S1zLl6mlz", "BJquSu9eM", "HyK44e1bM", "BJc3SHHZG", "BJtIwUofM", "S18QPUjGf", "r1LDL8oMM", "BkEGIIjMG", "BkoCeoOef", "HkLsEjDlf", "BJhrEjDgf", "H1bj91Ixz", "HJ27tABef", "r1E16Vmxz", "Sy2r0j_yG" ]
[ "public", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "public", "author", "author", "public", "public", "public", "public" ]
[ "Can we apply these metrics to evaluate conditional versions of GANs or it needs some specific adaptations?\nWhat are the most relevant metrics for evaluating conditional GANs?", "We have updated our paper with FID results included. The FID score does appear to be an appealing metric according to our criterions.", "In the paper, the authors discuss several GAN evaluation metrics.\nSpecifically, the authors pointed out some desirable properties that GANS evaluation metrics should satisfy.\nFor those properties raised, the authors experimentally evaluated whether existing metrics satisfy those properties or not.\nSection 4 summarizes the results, which concluded that the Kernel MMD and 1-NN classifier in the feature space are so far recommended metrics to be used.\n\nI think this paper tackles an interesting and important problem, what metrics are preferred for evaluating GANs.\nIn particular, the authors showed that Inception Score, which is one of the most popular metric, is actually not preferred for several reasons.\nThe result, comparing data distributions and the distribution of the generator would be the preferred choice (that can be attained by Kernel MMD and 1-NN classifier), seems to be reasonable.\nThis would not be a surprising result as the ultimate goal of GAN is mimicking the data distribution.\nHowever, the result is supported by exhaustive experiments making the result highly convincing.\n\nOverall, I think this paper is worthy for acceptance as several GAN methods are proposed and good evaluation metrics are needed for further improvements of the research field.\n", "Thanks for an interesting paper. \n\nThe paper evaluates popular GAN evaluation metrics to better understand their properties. The \"novelty\" of this paper is a bit hard to assess. However, I found their empirical evaluation and experimental observations to be very interesting. If the authors release their code as promised, the off-the-shelf tool would be a very valuable contribution to the GAN community. \n\nIn addition to existing metrics, it would be useful to add Frechet Inception Distance (FID) and Multi-scale structural similarity (MS-SSIM). \n\nHave you considered approximations to Wasserstein distance? E.g. Danihelka et al proposed using an independent Wasserstein critic to evaluate GANs: \nComparison of Maximum Likelihood and GAN-based training of Real NVPs\nhttps://arxiv.org/pdf/1705.05263.pdf\n\nHow sensitive are the results to hyperparameters? It would be interesting to see some sensitivity analysis as well as understand the correlations between different metrics for different hyperparameters (cf. Appendix G in https://arxiv.org/pdf/1706.04987.pdf)\n\nDo you think it would be useful to compare other generative models (e.g. VAEs) using these evaluation metrics? Some of the metrics don't capture perceptual similarity, but I'm curious to hear what you think. \n", "The paper describes an empirical evaluation of some of the most common metrics to evaluate GANs (inception score, mode score, kernel MMD, Wasserstein distance and LOO accuracy). \n\nThe paper is well written, clear, organized and easy to follow.\n\nGiven that the underlying application is image generation, the authors move from a pixel representation of images to using the feature representation given by a pre-trained ResNet, which is key in their results and further comparisons. They analyzed discriminability, mode collapsing and dropping, robustness to transformations, efficiency and overfitting. 
\n\nAlthough this work and its results are very useful for practitioners, it lacks in two aspects. First, it only considers a single task for which GANs are very popular. Second, it could benefit from a deeper (maybe theoretical analysis) of some of the questions. Some of the conclusions could be further clarified with additional experiments (e.g., Sec 3.6 ‘while the reason that RMS also fails to detect overfitting may again be its lack of generalization to datasets with classes not contained in the ImageNet dataset’).\n", "This paper introduces a comparison between several approaches for evaluating GANs. The authors consider the setting of a pre-trained image models as generic representations of generated and real images to be compared. They compare the evaluation methods based on five criteria termed disciminability, mode collapsing and mode dropping, sample efficiency,computation efficiency, and robustness to transformation. This paper has some interesting insights and a few ideas of how to validate an evaluation method. The topic is an important one and a very difficult one. However, the work has some problems in rigor and justification and the conclusions are overstated in my view.\n\nPros\n-Several interesting ideas for evaluating evaluation metrics are proposed\n-The authors tackle a very challenging subject\n\nCons\n-It is not clear why GANs are the only generative model considered\n-Unprecedented visual quality as compared to other generative models has brought the GAN to prominence and yet this is not really a big factor in this paper.\n-The evaluations rely on using a pre-trained imagenet model as a representation. The authors point out that different architectures yield similar results for their analysis, however it is not clear how the biases of the learned representations affect the results. The use of learned representations needs more rigorous justification\n-The evaluation for discriminative metric, increased score when mix of real and unreal increases, is interesting but it is not convincing as the sole evaluation for “discriminativeness” and seems like something that can be gamed. \n- The authors implicitly contradict the argument of Theis et al against monolithic evaluation metrics for generative models, but this is not strongly supported.\n\nSeveral references I suggest:\nhttps://arxiv.org/abs/1706.08500 (FID score)\nhttps://arxiv.org/abs/1511.04581 (MMD as evaluation)\n", "Thanks a lot for the nice summary of the paper, and the positive comments!", "Thanks for your positive comments! We will definitely release our code as promised.\n\n\n#FID and MS-SSM#\nThanks for pointing us to these interesting works. We have updated our paper and included FID. MS-SSIM does not fit as well into the evaluation framework we are considering as it is an approach that can be combined with the metrics we studied, instead of being a new metric to be compared with. \n\n\n#Approximations to Wasserstein distance#\nThanks for the suggestion. We have indeed considered it, but have found that even the exact Wasserstein distance is not an appealing metric. This makes us believe that it is probably not worth to consider an approximated version. In addition, the approximated Wasserstein distance involves extra hyperparameters, such as network configurations and optimization settings, which further complicates the matter. \n\n\n#Sensitivity to hyperparameters#\nA good evaluation metric should be robust in terms of hyperparameter setting, or ideally, have no hyperparameters. 
This is indeed satisfied by all the metrics we investigated in this paper, except the MMD. MMD has a single hyperparameter, the Gaussian kernel width, which we empirically set to the averaged pairwise distance of the training data. It appears that MMD is quite insensitive to this hyperparameter.\n\n#Evaluate other generative models and perceptual similarity#\nIn principle it should be possible to use these metrics to evaluate other generative models, but we have not really investigated that further. Our metrics should work if the goal of other generative models is also approximating the data distribution.\n", "Thank you for your comments! \n\n\n#only considers a single task#\nWe focus on the image generation task because, as you pointed out, GANs are most popular in this context. The systematic approach proposed to investigate GAN evaluation metrics is however quite general. For example, we introduce how to evaluate a metric by checking the discriminability between generated images and real images, the sample efficiency, the sensitivity to mode dropping and mode collapsing, etc. \n\n# it could benefit from deeper (maybe theoretical) analysis of some of the questions#\nWe totally agree that a deep understanding of our obtained results is beneficial, and we provided possible explanations whenever possible. However, our focus in this paper is primarily to identify the properties of an ideal GAN evaluation metric, and how to explicitly and empirically investigate the strengths and limitations of a given metric. \n\nWe are diving into more depth for some of the issues that surfaced in this process, for example why a specific metric works well or fails in terms of discriminability. Hopefully, these could inspire future work on further analyzing existing metrics or designing better ones.\n\nPreviously, evaluation metrics for GANs were proposed and applied - often without detailed investigation. We claim that evaluation metrics themselves need to be carefully evaluated first. \n", "Thanks for your comments. \n\n#Why only GANs#\nWe only consider GANs because adding another generative model type would necessarily add another dimension to the comparison and complicate it further. GANs are very popular and widely used, so we hope there is a sufficient amount of interest even if we restrict ourselves to this domain. It should be interesting to extend our research to other generative models in the future.\n\n\n#Visual quality#\nOur paper focuses on evaluating the metrics for GANs, instead of evaluating the generated images of GANs. Indeed, the visual quality of GAN-generated images is unprecedented, but the ultimate goal of a GAN is to learn the hidden generative distribution. From this perspective, GANs can still be improved in various aspects. For example, mode collapse and mode drop cannot be easily discovered by visual inspection only, but can be evaluated using our proposed methods. \n\n\n#Using pretrained model as feature extractor#\nThe pretrained models on ImageNet are very general and robust. They have been widely used for transfer learning (e.g., object detection, semantic segmentation), image style transfer, image super-resolution, etc. The widely used Inception Score also relies on the Inception model pretrained on ImageNet, and it works quite well in practice as a GAN evaluation metric. In addition, our experiments (Figure 9 in appendix) show that all the observations we have on ResNet also hold on VGG and Inception. 
\n\n\n#Sole evaluation discriminability#\nIt is important to emphasize that discriminating real and unreal images is not the sole evaluation for “discriminativeness” in our paper. We also considered the prevalent mode dropping and mode collapsing problems. Moreover, even for unreal images, we experimented with three different settings: 1) images generated by a GAN; 2) random noise; and 3) images from an entirely different distribution. \n\nDiscriminativeness is not a sufficient condition for a good metric, but seems to be a necessary condition. If a metric does not even pass the discriminativeness test, or other tests in our paper, it might not be a good metric. \n\n#Contradict the argument of Theis et al#\nWe would like to argue that our observations are in line with what has been observed by Theis et al. Specifically, if we directly compute the distances in the pixel space as in Theis et al, most of the metrics would fail in terms of discriminability, which has also been observed in their paper. We are able to draw more optimistic observations because of the introduction of a proper feature space.\n\n\n#References#\nThanks for pointing us to these papers. We have cited both papers, and updated our results with the FID score included.", "If this were true, then why not mention \"all\" of the potential metrics that are missing from the paper, instead of only citing one paper that is missing? \n\nOther works not covered or mentioned as a potential metric:\n\nRevisiting Classifier Two-Sample Tests\nhttps://arxiv.org/abs/1610.06545\n\nGENERATIVE ADVERSARIAL METRIC\nhttps://openreview.net/pdf?id=wVqzLo88YsG0qV7mtLq7\n\nMode Regularized Generative Adversarial Networks\nhttps://arxiv.org/abs/1612.02136\n(They introduce the mode score as a modification to the inception score)\n\nAgain, the original \"review\" only mentioned one particular paper for perhaps obvious reasons.", "1) This is an interesting question. For CycleGAN, the mode dropping/collapsing problem might be less severe due to the reconstruction loss. Therefore, we may prefer those metrics, e.g., MMD, that have better discriminability (between generated distribution and target distribution).\n\n2) The critical part might be how to define the distance between two sentences/documents. Once we have a well-defined distance, all the metrics investigated in this paper can be applied. The word mover’s distance (WMD) [1] appears to be an ideal candidate.\n[1] Kusner et al., From Word Embeddings To Document Distances, ICML, 2015\n\n3) We plan to include FID in the updated version. Please refer to our reply to the other comments for some preliminary results.\n", "Thanks for pointing this paper to us. The FID is essentially a distance metric for probability distributions, thus it fits into the evaluation framework we investigated. We will include it in our updated version. \n\nFollowing [1], we make the Gaussian distribution assumption on the real/generated data, and test the discriminability of FID. Our preliminary results show that it behaves similarly to the Wasserstein distance under this test. For other properties, we speculate that FID will have a better time/sample complexity than the Wasserstein distance due to the simplified Gaussian distribution assumption. But it might be less sensitive to mode collapse.\n\nOur paper aims to provide a framework to investigate different properties of GAN evaluation metrics. Thus researchers can use it to analyze any sample-based evaluation metric that fits into the framework. 
As there exist many metrics for probability distributions, we can only focus on several most typical ones (especially those already been used by the GAN community) in our paper.\n", "This review, in particular it's a public comment, makes a valid point. A paper titled \"An empirical study on evaluation metrics of generative adversarial networks\" should consider all evaluation metrics out there or at least give reasons as to why some were not considered.", "1) This may not be a very relevant question. But from your current knowledge of evaluation metrics of GANs, what kind of evaluation metrics are relevant for using for other GAN tasks such as unpaired translation (e.g. CycleGAN)? \n\n2) Can you say anything with evaluation metrics for text generation?\n\n3) I'm also curious as to why FID was omitted, and I'd like to know which one would be better, FID or 1-NN, and in addition the sample efficiency of FID in particular.\n", "This \"review\" appears to be self-promotion.", "[1] proposed the Fréchet Inception Distance (FID) for evaluating GANs which is not mentioned here. To gain a broader insight into evaluation metrics of GANs the authors should also discuss this quality measure.\n\n[1] https://arxiv.org/abs/1706.08500" ]
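Since FID comes up repeatedly in this thread and was added to the paper's revision, here is a reference sketch of its computation following the published definition of Heusel et al. (2017), not the authors' implementation:

```python
import numpy as np
from scipy import linalg

def fid(feat_real, feat_gen):
    """Frechet Inception Distance between two sets of feature vectors:
    fit a Gaussian to each set and return
    ||mu_r - mu_g||^2 + Tr(C_r + C_g - 2 (C_r C_g)^(1/2))."""
    mu_r, mu_g = feat_real.mean(axis=0), feat_gen.mean(axis=0)
    c_r = np.cov(feat_real, rowvar=False)
    c_g = np.cov(feat_gen, rowvar=False)
    covmean = linalg.sqrtm(c_r @ c_g)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # discard tiny imaginary parts from numerical noise
    diff = mu_r - mu_g
    return float(diff @ diff + np.trace(c_r + c_g - 2.0 * covmean))
```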
[ -1, -1, 8, 7, 5, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, 3, 4, 5, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_Sy1f0e-R-", "Sy2r0j_yG", "iclr_2018_Sy1f0e-R-", "iclr_2018_Sy1f0e-R-", "iclr_2018_Sy1f0e-R-", "iclr_2018_Sy1f0e-R-", "S1zLl6mlz", "BJquSu9eM", "HyK44e1bM", "BJc3SHHZG", "H1bj91Ixz", "HJ27tABef", "Sy2r0j_yG", "r1E16Vmxz", "iclr_2018_Sy1f0e-R-", "Sy2r0j_yG", "iclr_2018_Sy1f0e-R-" ]
iclr_2018_Syjha0gAZ
Loss Functions for Multiset Prediction
We study the problem of multiset prediction. The goal of multiset prediction is to train a predictor that maps an input to a multiset consisting of multiple items. Unlike existing problems in supervised learning, such as classification, ranking and sequence generation, there is no known order among items in a target multiset, and each item in the multiset may appear more than once, making this problem extremely challenging. In this paper, we propose a novel multiset loss function by viewing this problem from the perspective of sequential decision making. The proposed multiset loss function is empirically evaluated on two families of datasets, one synthetic and the other real, with varying levels of difficulty, against various baseline loss functions including reinforcement learning, sequence, and aggregated distribution matching loss functions. The experiments reveal the effectiveness of the proposed loss function over the others.
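Based on the description of the loss in this abstract and in the reviews further down, one step of the multiset loss can be sketched as a cross-entropy against an oracle distribution that is uniform over the labels still to be predicted. This is a hypothetical reconstruction; the authors' exact formulation (e.g. how incorrect predictions are handled) may differ.

```python
import torch
import torch.nn.functional as F

def multiset_step_loss(logits, remaining):
    """One term of the multiset loss: the KL divergence between the oracle
    policy (uniform over the labels still to be predicted, duplicates getting
    proportionally more mass) and the model's predictive distribution, up to
    the oracle's constant entropy.

    logits: 1-D tensor of size n_classes; remaining: list of target class ids
    not yet predicted (a multiset)."""
    oracle = torch.zeros_like(logits)
    for y in remaining:
        oracle[y] += 1.0
    oracle = oracle / oracle.sum()
    log_p = F.log_softmax(logits, dim=-1)
    return -(oracle * log_p).sum()

# A full trajectory would apply this T times, removing the predicted label
# from `remaining` after each step (when it is present), and sum the T terms.
```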
rejected-papers
The submission addresses the problem of multiset prediction, which combines predicting which labels are present, and counting the number of each object. Experiments are shown on a somewhat artificial MNIST setting, and a more realistic problem of the COCO dataset. There were several concerns raised by the reviewers, both in terms of the clarity of presentation (Reviewer 1), and that the proposed solution is somewhat heuristic (Reviewer 3). On the balance, two of three reviewers did not recommend acceptance.
train
[ "rktAPxrlG", "SJVGWfceM", "SJYge3sxG", "r1k356QXG", "rJnxOp7mf", "SJcPDpX7G" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "This paper proposes a type of loss functions for the problem of multiset prediction. A detailed discussion on the intuition is provided and extensive experiments are conducted to show that this loss function indeed provides some performance gain in terms of Exact Matching and F1-score.\n\nThe idea of this paper is as follows: instead of viewing the multiset prediction task as a classification problem, this paper models it as a sequential decision problem (this idea is not new, see Welleck et al., 2017, as pointed out by the authors). Define pi* to be the optimal policy that outputs the labels of input x in a certain way. We learn a parameterized policy pi(theta) which takes x and all previous predictions as input, and outputs a label as a new prediction. At each time step t, pi* and pi(theta) can be viewed as distributions over all remaining labels; the KL divergence is then used to calculate the difference between those two distributions. Finally, the loss function sums those KL divergences over all t. Computing this loss function directly can be intractable, so the author suggested that one can compute the entire trajectory of predictions and then do aggregation.\n\nAdmitted that such construction is quite intuitive, and can possibly be useful, the technical part of this paper seems to be rather straightforward. In particular, I think the solution to the issue of unknown T is very heuristic, making the proposed loss function less principled. \n\n\nOther Detailed Comments/Issues:\n\n-- It seems to me that models that can utilize this loss function must support varying-length inputs, e.g. LSTM. Any idea how to apply it to models with fixed-length inputs?\n\n-- The proposed loss function assumes that T (i.e. the size of the output set) is known, which is unrealistic. To handle this, the authors simply trained an extra binary classifier that takes x and all previous predictions as the input at each time step, and decides whether or not to terminate. I think this solution is rather hacky and I’d like to see a more elegant solution, e.g. incorporate the loss on T into the proposed loss function.\n\n-- Could you formally define “Exact Match”?\n\n-- In Section 4.4, maybe it is better to run the stochastic sampling multiple times to reduce the variance? I would expect that the result can be quite different in different runs.\n\n-- Any discussion on how will the classifier for T affect the experimental results? \n", "This is an interesting paper, in the sense of looking at a problem such as multiset prediction in the context of sequential decision making (using a policy). \n\nIn more detail, the authors construct an oracle policy, shown to be optimal (in terms of precision and recall). A parametrized policy instead of the oracle policy is utilized in the proposed multiset loss function, while furthermore, a termination policy facilitates the application on variable-sized multiset targets. The authors also study other loss functions, ordered sequence prediction as well as reinforcement learning.\n\nResults show that the proposed order-invariant loss outperforms other losses, along with a set if experiments evaluating choice of rank function for sequence prediction and selection strategies. The experiments seem rather comprehensive, as well as the theoretical analysis. 
The paper describes an interesting approach to the problem.\n\nWhile the paper is comprehensive it could be improved in terms of clarity & flow (e.g., by better preparing the reader on what is to follow)", "Summary: \nThe paper considers the prediction problem where labels are given as multisets. The authors give a definition of a loss function for multisets and show experimental results. The results show that the proposed methods optimizing the loss function perform better than other alternatives.\n\nComments: \nThe problem of predicting multisets looks challenging and interesting. The experimental results look nice. On the other hand, I have several concerns about writing and technical discussions. \n\nFirst of all, the goal of the problem is not exactly stated. After I read the experimental section, I realized that the goal is to optimize the exact match score (EM) or F1 measure w.r.t. the ground truth multisets. This goal should be explicitly stated in the paper. Now then, the approach of the paper is to design surrogate loss functions to optimize these criteria. \n\nThe technical discussions for defining the proposed loss function seems not reliable for the reasons below. Therefore, I do not understand the rationale of the definition of the proposed loss function.: \n- An exact definition of the term multiset is not given. If I understand it correctly, a multiset is a “set” of instances allowing duplicated ones. \n- There is no definition of Prec or Rec (which look like Precision and Recall) in Remark 1. The definitions appear in Appendix, which might not be well-defined. For example, let y, Y be mutisets , y=[a, a, a] and Y = [a, b]. Then, by definition, Prec(y,Y)=3/3 =1. Is this what you meant? (Maybe, the ill-definedness comes from the lack of definition of inclusion in a mutiset.) \n- I cannot follow the proof of Remark 1 since it does not seem to take account of the randomness by the distribution \\pi^*. \n- I do not understand the definition of the oracle policy exactly. It seems to me that, the oracle policy knows the correct label (multi-set) \\calY for each instance x and use it to construct \\calY_t. But, this implicit assumption is not explicitly mentioned. \n- In (1), (2) and Definition 3, what defines \\calY_t? If \\calY_t is determined by some “optimal” oracle, you cannot define the loss function in Def. 3 since it is not known a priori. Or, if the learner determines \\calY_t, I don’t understand why the oracle policy is optimal since it depends on the learner’s choices. \n\nAlso, I expect an investigation of theoretical properties of the proposed loss function, e.g., relationship to EM or F1 or other loss functions. Without understanding the theoretical properties and the rationale, I cannot judge the goodness of the experimental results (look good though). In other words, I cannot judge the paper in a qualitative perspective, not in a quantitative view. \n\nAs a summary, I think the technical contribution of the paper is marginal because of the lack of reliable mathematical discussion or investigation.\n", "Thanks for your insightful review. Please find our comments below:\n\nRe: Welleck et al. [2017]\n- As noted, Welleck et al. [2017] previously proposed the sequential view of multiset prediction. We believe that the paper here contains valuable additions to the view of multiset prediction as sequential prediction. 
Namely, it contains a presentation of multiset prediction which is separated from a particular model architecture, comparison with existing supervised learning problems, and an extensive set of baselines that will be valuable for future work on the multiset prediction problem. \n\nRe: Issue of unknown T\n- An alternative method for predicting variable-sized multisets (i.e. the issue of unknown T) is to include an additional “END” class, similar to the token used in NLP sequence models which allows variable sentence lengths. This approach was used in Welleck et al. [2017]. In our proposed loss, this would correspond to setting the free labels set at time T+1 to be {END}. \n\nWe chose to use the auxiliary stop prediction here since it was trivially applicable to all of the baseline losses, thus ensuring a fair comparison between losses. In particular, it was unclear how to extend the END class approach to the distribution matching baseline. For this reason, our experiments use the auxiliary stop prediction. We will however add a discussion of the END class approach to the manuscript (Section 2.3).\n\nRe: Models with fixed-length inputs\n- Here, the key use of the recurrent hidden state is to retain the previously predicted labels, i.e., to remember the full conditioning set \\hat{y}_1,...\\hat{y}_{t-1} in p(y_t|\\hat{y}_1,...,\\hat{y}_{t-1}). Therefore, the proposed loss can be used in a feedforward model by encoding \\hat{y}_1,...,\\hat{y}_{t-1}in the input x_t, and using the feedforward model for T steps. Since this involves feature engineering (e.g., to encode the variable-length sequence into a fixed-dimensional vector) and since RNNs have become a standard for sequential tasks, we use a recurrent model here. However, we appreciate your observation and have added a paragraph to Appendix C discussing this feedforward alternative.\n\nRe: extra binary classifier for T\n- Please see our comments above. In short, we will add a mention of the “END” class approach, which is a natural extension of existing approaches used in NLP sequence models. Here, we used the binary classifier to ensure the applicability to all the baselines.\n\nRe: formal definition of exact match\n- We have added a definition of exact match to Appendix A of the revised version. \n\nAn intuitive way of understanding the definition is to view each multiset as a vector which counts the occurrences of each element from the class set. For example, if the possible classes are x,y,z, then the multisets A= {x,x,y}, B={y,x,x} can be represented as A=[2,1,0], B=[2,1,0]. Exact match then consists of checking whether A[i] == B[i] for all i.\n\nRe: variance across multiple runs for stochastic sampling\n- We did 9 runs of the MNIST Multi 10 experiments for each selection strategy, with a different random seed per run. The standard deviations across runs for the EM metric were 0.009, 0.01 and 0.005 for greedy, stochastic and oracle, respectively. We further tested these strategies using paired t-tests and found no significant difference between between any pair of strategies (in terms of EM.) We have just started running the same set of experiments with the COCO dataset, which we have found to be much more challenging, to ensure our observation is not limited to MNIST. 
As the experiments are taking much longer, we will update the result in the revision and in the response section as soon as they are completed.\n\nWe however would like to note that we prefer the greedy strategy even in the case of no significant difference among these strategies due to the computational reason. The computational advantage comes from the fact that the greedy strategy does not require sampling, unlike the other ones.\n\nRe: classifier for T affecting experimental results\n- We use the same binary classification approach for each baseline. The multiset loss achieves high evaluation metrics on the variable-sized task (e.g. MNIST 1-4), which shows that the binary classifier is capable of successfully predicting T. (Otherwise, none of the baselines would have high scores on MNIST 1-4). As a result, we believe that the experimental results show the performance difference from varying the loss, given an effective binary classifier for T.\n", "Thank you for the comments and review. To help prepare the reader, in the updated revision we have added a description at the end of the introduction which outlines the paper structure.\n", "Thank you for the insightful comments. We have addressed them below and in the updated revision of the paper:\n\nRe: Problem Goal:\n- Yes, the exact match score and F1 score are used here to compare the predicted and ground truth multisets - we have added a comment when introducing multiset prediction in section 2. Moreover, minimizing the multiset loss function maximizes F1 score and exact match (due to Remarks 2 and 3, respectively); we have added a comment following Definition 3. \n\nRe: Points about technical discussion:\n- Yes, the multiset is a generalization of a set allowing multiple instances of items (here, the items are from the class set). While the bullet points at the beginning of section 2 implicitly define the notion of multiset, we will update the manuscript with an explicit definition in Appendix A of the updated revision.\n\n- Indeed, Prec and Rec are the abbreviations of Precision and Recall. Precision and recall for multisets are defined here by viewing each element of the predicted and target multisets as distinct elements (i.e. even if they are the same class), and using the precision and recall definitions in the Appendix A of the updated revision. That is, in the given example, to understand the definition of Precision / Recall it may be helpful to view y and Y as y=[a1,a2,a3], Y=[a1,b1]. Then for Precision, we have Precision(y, Y) = ½ using the definition in Appendix A.\n\n- The proof of Remark 1 does account for the randomness, as any sample from the oracle policy is guaranteed to be in the free labels set, i.e., \\hat{y}_t \\sim \\pi_*(\\hat{y}_t | \\hat{y}_{<t},x)\\in \\mathcal{Y}_t with probability 1, which can be seen from Definition 2. With this in mind, could you clarify which lines of the proof are difficult to follow? We appreciate the feedback and will add comments to the proof as necessary to make the proof as clear as possible.\n\n- Indeed, the oracle policy is constructed using the target free labels multiset \\mathcal{Y}_t, which relies on knowledge of \\mathcal{Y}. In the updated manuscript, we have added \\mathcal{Y}_t as an explicit argument of the oracle to clarify this point. \n\n- In Definition 3, \\mathcal{Y}_t is defined with respect to the parametrized policy. That is, \\mathcal{Y}_{t+1}=\\mathcal{Y}_{t} \\backslash \\hat{y}_{t}, where \\hat{y}_{t} \\sim \\pi_{\\theta}. 
The oracle is constructed using a \\mathcal{Y}_t defined with respect to its own predictions. \n\nThe oracle is optimal w.r.t. to any arbitrary prefix (Remark 1). If the entire prefix was generated from the oracle, it only generates a correct multiset according to Remark 3. In other words, the oracle has the optimal behaviour given any free label set \\mathcal{Y}_t.\n\nRe: Investigation of theoretical properties:\n- We have shown that the oracle policy is optimal in terms of precision and recall (and in turn F1 and exact match), so by minimizing divergence with the oracle, we can understand the loss as finding a parametrized policy whose samples are optimal. However, in this paper we have focused on empirical analysis of the proposed loss function, with positive findings. We agree with you that the convergence properties and consistency of the proposed loss function need to be theoretically investigated further in the future.\n\nRe: Goodness of experimental results:\n- The proposed loss function reduces the problem of multiset prediction into a series of supervised learning problems. Assuming that the oracle policy is included in the hypothesis space of the parametrized policy, minimizing the per-step KL divergence is a consistent estimator. However, we agree that more theoretical analysis on the proposed loss function, such as its finite-sample behavior and convergence rate, should be investigated further in the future. \n\nOur experimental results provide evidence that the parametrized policy can achieve high exact match and F1 scores after minimizing the proposed loss. The per-step entropy analysis summarized in Figure 1 provides an indirect, empirical evidence that the behavior of the parametrized policy is consistent with that of the oracle.\n\nSummary:\nWe thank you for the comments, and hope that the additions of the Precision/Recall/multiset definitions, and the above clarifications about the technical analysis improve the manuscript and clarify our technical contributions. " ]
[ 5, 7, 4, -1, -1, -1 ]
[ 4, 3, 3, -1, -1, -1 ]
[ "iclr_2018_Syjha0gAZ", "iclr_2018_Syjha0gAZ", "iclr_2018_Syjha0gAZ", "rktAPxrlG", "SJVGWfceM", "SJYge3sxG" ]
iclr_2018_SJme6-ZR-
A Deep Learning Approach for Survival Clustering without End-of-life Signals
The goal of survival clustering is to map subjects (e.g., users in a social network, patients in a medical study) to K clusters ranging from low-risk to high-risk. Existing survival methods assume the presence of clear \textit{end-of-life} signals or introduce them artificially using a pre-defined timeout. In this paper, we forego this assumption and introduce a loss function that differentiates between the empirical lifetime distributions of the clusters using a modified Kuiper statistic. By optimizing this loss, we learn a deep neural network that performs a soft clustering of users into survival groups. We apply our method to a social network dataset with over 1M subjects, and show significant improvement in C-index compared to alternatives.
rejected-papers
The submission proposes a Kuiper-statistic-based loss function for survival clustering. This loss function is applied to train a deep network. Results are presented on a Friendster dataset. This submission received borderline/mixed reviews. The primary concerns were: justification of the Kuiper loss, lack of details about the experimental setup, and writing style. In the end, these concerns remain. Of particular importance are the justification and experimental validation of the Kuiper statistic. Although it seems a reasonable choice, the authors' response to R3 states: "We now also report results for Kolmogorov-Smirnov loss. Although the difference in performance between the two loss functions is not significant in the Friendster dataset, Kuiper loss has higher statistical power in distinguishing distribution tails [Tygert 2010]." If this theoretical result from [Tygert 2010] is relevant, it should be possible to demonstrate this experimentally. If such differences are irrelevant for the data of interest, the paper should perhaps be reframed with a better discussion of available statistics and literature (cf. Reviewer 2), and a more general presentation de-emphasizing modeling choices that may have limited practical relevance.
train
[ "ry3ETTtxG", "HyM5JWhgG", "S1QsSa1-M", "BJqJHI67z", "BJ4iN8p7M", "S1hGEIa7f" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "Pros:\nThe paper is a nice read, clearly written, and its originality is well stated by the authors, “addressing the lifetime clustering problem without end-of-life signals for the first time”. I do not feel experienced enough in the field to evaluate the significance of this work.\n\nThe approach proposed in the manuscript is mainly based on a newly-designed nonparametric loss function using the Kuiper statistic and uses a feed-forward neural network to optimize the loss function. This approach does challenge some traditional assumptions, such as the presence of end-of-life signals or the artificial defined timeouts. Instead of giving a clear end-of-life signal, the authors specify a probability of end-of-life that permits us to take into account the associated uncertainty. By analyzing a large-scale social network dataset, it is shown that the proposed method performs better on average than the other two traditional models.\n\nCons: \nI think that the main drawback of the paper is that the structure of the neural network and the deep learning techniques used for optimizing the loss function are not explained in sufficient detail. ", "This paper discusses an application of survival analysis in social networks.\n\nWhile the application area seems to be pertinent, the statistics as presented in this paper are suboptimal at best. There is no useful statistical setup described (what is random? etc etc), the interplay between censoring and end-of-life is left rather fuzzy, and mentioned clustering approaches are extensively studied in the statistical literature in so-called frailty analysis. The setting is also covered in statistics in the extensive literature on repeated measurements and even time-series analysis. It's up to the authors discuss similarities and differences of results of the present approach and those areas.\n\nThe numerical result is not assessing the different design decisions of the approach (why use a Kuyper loss?) in this empirical paper.\n\n", "Authors provide an interesting loss function approach for clustering using a deep neural network. They optimize Kuiper-based nonparametric loss and apply the approach on a large social network data-set. However, the details of the deep learning approach are not well described. Some specific comments are given below.\n\n1.Further details on use of 10-fold cross validation need to be discussed including over-fitting aspect.\n2. Details on deep learning, number of hidden layers, number of hidden units, activation functions, weight adjustment details on each learning methods should be included.\n\n3. Conclusion section is very brief and can be expanded by including a discussion on results comparison and over fitting aspects in cross validation. Use of Kuiper-based nonparametric loss should also be justified as there are other loss functions can be used under these settings.\n", "We are glad you enjoyed the paper. We have added the following paragraph describing the neural network architecture and the deep learning methods we use. We have also reported results for the different design choices in Appendix.\n\n“We experimented with different neural network architectures as shown in Table 2. In Table 1, we show the results for a simple neural network configuration with one fully-connected hidden layer with 128 hidden units and tanh activation function. We use a batch size of 8192 and a learning rate of 10^{-4}. 
We also use batch normalization to facilitate convergence, and regularize the weights of the neural network using an L2 penalty of 0.01. Appendix 8.2 shows a more detailed evaluation of different architecture choices.”\n", "Thanks for your comments. \n\n1.\tWe had described the underlying statistical setup in Appendix, essentially describing the activity times of a cluster of subjects using a Random Marked Point Process (RMPP). Following your feedback, we have moved it to a section in the paper called ‘Formal Framework’.\n\n2.\t(Frailty analysis) In our original draft, we followed prior work [Witten and Tibshirani, 2010; Gaynor and Bair, 2013] and refrained from comparing our approach with frailty models to avoid confusion w.r.t. the task at hand. But we now do see the benefit of clarifying the tasks, and thank the reviewer for asking us to do so. We added the following paragraph clarifying this difference. \n\n“Extensive research has been done on what is known as frailty analysis, for predicting survival outcomes in the presence of clustered observations. Although frailty models provide more flexibility in the presence of clustered observations, they do not provide a mechanism for obtaining the clusters themselves, which is our primary goal. In addition, our approach does not assume proportional hazards unlike most frailty models.” \n\n3.\tCensoring and ‘end-of-life’ are simply the two possibilities for each user. In the case where we have end-of-life signals, a subject could be “dead” or “censored” based on the signal. Similarly, when we do not have an end-of-life signal, there is a probability of the subject being “dead” or “censored” (in our case, we calculate this probability using S_u, the time till censoring).\n\n4.\tWe have reported the results for different choices for the loss function - Kuiper loss vs Kolmogorov-Smirnov loss. Although, the difference in performance between the two loss functions is not significant in the Friendster dataset, Kuiper loss is theoretically better due to its increased statistical power in distinguishing distribution tails. \n \n5.\tWe have also reported results for different neural network design choices (batch sizes, learning rates, number of hidden layers, and number of hidden units) in Appendix.\n", "Thanks for your comments.\n\n1.\tWe do not seem to be overfitting to the training data because: a) our loss function is not susceptible to outliers in the dataset (as it considers set distributions instead of the more standard approach of using a loss function defined over each individual data point), b) we monitor the validation loss while training the neural network, and c) we are able to generalize well in the test data.\n \n2.\tWe added the following paragraph describing the deep learning techniques we used. Moreover, we now report results for different neural network design choices (batch sizes, learning rates, number of hidden layers, and number of hidden units). \n\n“We experimented with different neural network architectures as shown in Table 2. In Table 1, we show the results for a simple neural network configuration with one fully-connected hidden layer with 128 hidden units and tanh activation function. We use a batch size of 8192 and a learning rate of 10^{-4}. We also use batch normalization to facilitate convergence, and regularize the weights of the neural network using an L2 penalty of 0.01. Appendix 8.2 shows a more detailed evaluation of different architecture choices.”\n\n3.\tWe now also report results for Kolmogorov-Smirnov loss. 
Although the difference in performance between the two loss functions is not significant in the Friendster dataset, Kuiper loss has higher statistical power in distinguishing distribution tails [Tygert 2010]. \n" ]
[ 6, 4, 6, -1, -1, -1 ]
[ 1, 4, 5, -1, -1, -1 ]
[ "iclr_2018_SJme6-ZR-", "iclr_2018_SJme6-ZR-", "iclr_2018_SJme6-ZR-", "ry3ETTtxG", "HyM5JWhgG", "S1QsSa1-M" ]
iclr_2018_Bys_NzbC-
Achieving Strong Regularization for Deep Neural Networks
L1 and L2 regularizers are critical tools in machine learning due to their ability to simplify solutions. However, imposing strong L1 or L2 regularization with the gradient descent method easily fails, and this limits the generalization ability of the underlying neural networks. To understand this phenomenon, we investigate how and why training fails for strong regularization. Specifically, we examine how gradients change over time for different regularization strengths and provide an analysis of why the gradients diminish so fast. We find that there exists a tolerance level of regularization strength beyond which learning completely fails. We propose a simple but novel method, Delayed Strong Regularization, in order to moderate the tolerance level. Experimental results show that our proposed approach indeed achieves strong regularization for both L1 and L2 regularizers and improves both accuracy and sparsity on public data sets. Our source code is published.
rejected-papers
The submission is motivated by an empirical observation of a phase transition when a sufficiently high L1 or L2 penalty on the weights is applied. The proposed solution is to optimize for several epochs without the penalty, followed by the introduction of the penalty. Although empirical results seem to moderately support this approach, there does not seem to be sufficient theoretical justification, and comparisons are missing. Furthermore, the author response to reviewer concerns contains unclear statements, e.g. "The reason is that, to reach the level of L1 norm that is low enough, the model needs to go through the strong regularization for the first few epochs, and the neurons already lose its learning ability during this period like the baseline method." It is not at all clear what "neurons already lose its learning ability" is supposed to mean.
train
[ "rk43lpulz", "BJk4jatgM", "H1U0Hpcef", "BJFUdLTQM", "SJp9GptXM", "HJNPfatQf", "B1ej93KXz", "rJLJ_3KXz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author" ]
[ "The authors studied the behavior that a strong regularization parameter may lead to poor performance in training of deep neural networks. Experimental results on CIFAR-10 and CIFAR-100 were reported using AlexNet and VGG-16. The results seem to show that a delayed application of the regularization parameter leads to improved classification performance.\n\nThe proposed scheme, which delays the application of regularization parameter, seems to be in contrast of the continuation approach used in sparse learning. In the latter case, a stronger parameter is applied, followed by reduced regularization parameter. One may argue that the continuation approach is applied in the convex optimization case, while the one proposed in this paper is for non-convex optimization. It would be interesting to see whether deep networks can benefit from the continuation approach, and the strong regularization parameter may not be an issue because the regularization parameter decreases as the optimization progress goes on.\n\nOne limitation of the work, as pointed by the authors, is that experimental results on big data sets such as ImageNet is not reported. \n", "The paper is well motivated and written. However, there are several issues.\n1. As the regularization constant increases, the performance first increases and then falls down -- this specific aspect is well known for constrained optimization problems. Further, the sudden drop in performance also follows from vanishing gradients problem in deep networks. The description for ReLUs in section 2.2 follows from these two arguments directly, hence not novel. Several of the key aspects here not addressed are: \n1a. Is the time-delayed regularization equivalent to reducing the value (and there by bringing it back to the 'good' regime before the cliff in the example plots)? \n1b. Why should we keep increasing the regularization constant beyond a limit? Is this for compressing the networks (for which there are alternate procedures), or anything else. In other words, for a non-convex problem (about whose landscape we know barely anything), if there are regimes of regularizers that work well (see point 2) -- why should we ask for more stronger regularizers? Is there any optimization-related motivation here (beyond the single argument that networks are overparameterized)? \n2. The proposed experiments are not very conclusive. Firstly, the authors need to test with modern state-of-the-art architectures including inception and residual networks. Secondly, more datasets including imagenet needs to be tested. Unless these two are done, we cannot assertively say that the proposal seems to do interesting things. Thirdly, it is not clear what Figure 5 means in terms of goodness of learning. And lastly, although confidence intervals are reported for Figures 3,4 and Table 2, statistical tests needs to be performed to report p-values (so as to check if one model significantly beats the other).", "The work was prompted by an interesting observation: a phase transition can be observed in deep learning with stochastic gradient descent and Tikhonov regularization. When the regularization parameter exceeds a (data-dependent) threshold, the parameters of the model are driven to zero, thereby preventing any learning. The authors then propose to moderate this problem by letting the regularization parameter to be zero for 5 to 10 epochs, and then applying the \"strong\" penalty parameter. In their experimental results, the phase transition is not observed anymore with their protocol. 
This leads to better performances, by using penalty parameters that would have prevent learning with the usual protocol.\n\nThe problem targeted is important, in the sense that it reveals that some of the difficulties related to non-convexity and the use of SGD that are often overlooked. The proposed protocol is reported to work well, but since it is really ad hoc, it fails to convince the reader that it provides the right solution to the problem. I would have found much more satisfactory to either address the initialization issue by a proper warm-start strategy, or to explore standard optimization tools such as constrained optimization (i.e. Ivanov regularization) , that could be for example implemented by stochastic projected gradient or barrier functions. I think that the problem would be better handled that way than with the proposed strategy, which seems to rely only on a rather limited amount of experiments, and which may prove to be inefficient when dealing with big databases.\n\nTo summarize, I believe that the paper addresses an important point, but that the tools advocated are really rudimentary compared with what has been already proposed elsewhere.\n\nDetails :\n- there is a typo in the definition of the proximal operator in Eq. (9) \n- there are many unsubstantiated speculations in the comments of the experimental section that do not add value to the paper \n- the figure showing the evolution of the magnitude of parameters arrives too late and could be completed by the evolution of the data-fitting term of the training criterion", "We made the following changes in the revised version:\n\n- We did additional proofreading.\n\n- We adjusted resolution of figures for better presentation.\n\n- We added results of VGG variations for additional data set (SVHN) in Section 5 and Appendix A.\n\n- We explained why we use simple method and what other methods we tried in Section 2.3 and Appendix B.\n\n- We fixed a typo in proximal operator function in Section 2.3.\n\n- We explained how our method is different from using slightly weaker regularization strength in Section 2.3.\n\n- We added explanation of Figure 5.\n\n- We added p-values for improvements.\n\n- We removed unsubstantiated speculations in Section 3.\n\n- We further explained why we did not use ImageNet data set in Section 5. \n\n- We explained that our method could not be applied to ResNet without normalization in Section 5.\n\n", "We thank the reviewer for asking important questions.\n\n1. Novelty\n\nWe agree with the referee that the phenomenon that the performance first increases and then falls down as the regularization parameter increases is well known for constrained optimization problems. However, we observe a different phenomenon that the performance first increases and then \"suddenly\" fails at some point as the regularization parameter increases in deep networks. The sudden failure (as opposed to gradual falling) in performance found by our analysis is novel.

\n\nAlso, in order to claim that the sudden failure in performance also follows from vanishing gradients problem in deep networks, we would at least need to know that the gradients would SUDDENLY vanish as the regularization parameter increases. To the best of our knowledge, there is no such an analysis, be it theoretical or empirical. In contrast, we conduct our analysis in an opposite direction — first of all, we empirically demonstrated that the early-stage weights would diminish suddenly, when the regularization parameter is increased right above a certain threshold. This together with the aid of the derivation in Section 2.2 lead us to conclude that the gradients would suddenly vanish as the regularization parameter increases.\n\nIn other words, knowing the empirically found diminishing weights, we have explained the sudden failure in performance by sudden vanishing gradients through the derivation in Section 2.2. Indeed, this intuition has also guided us to introduce the Delayed Strong Regularization in this paper. (Please see the difference between Delayed Strong Regularization and Reducing Regularization Parameter clarified below.)\n\n\n\n1a. Delayed Strong Regularization vs. Reducing Regularization Parameter\n\nThey are not equivalent. Reducing the regularization parameter means that we enforce weaker regularization in each training step. This is different from our approach (Delayed Strong Regularization) where we enforce the same strong regularization in each training step after five (\\gamma) epochs. In fact, by skipping regularization for the first five epochs out of 300 epochs, the total reduced amount by regularization throughout training is decreased. However, the decreased amount is negligible. Indeed, our approach does not fail in learning with regularization parameter that is two orders of magnitude greater than the highest regularization parameter the baseline can adopt without a fail.\n\nIn case the reviewer meant reducing the regularization strength by \"gradually\" reducing regularization strength throughout the training, we also performed a simple experiment with VGG-16 on CIFAR-100. We set the initial regularization parameter \\lambda=2*10^-3 and 6*10^-5 for L2 and L1 regularization, respectively, which are just above the \"tolerance level\". Then, we continuously reduced \\lambda to zero throughout the training session. The trained models didn't show any improvement over \"random guess\", which means that they were not able to learn. \n\n\n\n1b. Why should we keep increasing the regularization constant beyond a limit?\n\nOften, deep neural networks need strong regularization especially when they are too complex while training data is small. Although data is key in deep learning, it is often very expensive to obtain, so it is difficult to secure enough data set for the networks in practice. When the model is overfitted, one possible solution is to keep increasing the regularization strength, and strong regularization may boost the accuracy of the networks. \n\nAlthough many models are still over-fitted, stronger regularization cannot be achieved due to the vanishing gradient problem in the deep networks, as described in the paper (especially Section 2.2). In the beginning of the training, where gradient is small for stochastic gradient descent method, we find that learning fails if strong regularization is enforced. 
However, we find that we can overcome this by waiting for the model to reach an \"active learning\" phase, where the gradients' magnitudes are significant, and then enforcing strong regularization. Delayed Strong Regularization enables us to obtain the superior performance that is otherwise hidden by learning failure in deep networks.\n\nStrong regularization provides not only a better accuracy for over-fitted models but also more model compression. We show that we can achieve 2 to 4 times more compression compared to the baseline. The model compression can be done by other approaches such as pruning and quantization, but compression by regularization is also effective especially for removing neurons in groups with group sparsity. Certainly, there are recent efforts on this direction (Wen et al., 2016; Scardapane et al., 2017; Yoon & Hwang, 2017). Although our approach is not applied to group sparsity regularization in this paper, our approach has no limit on it.", "2. Clarification\n\n- State-of-the-art architectures\nAs explained in Section 2.2 and 3, we do not employ architectures that contain normalization techniques. Please see the Normalization paragraph of Section 2.2 for details. Unfortunately, most recent architectures contain normalization techniques. In order to apply our approach to recent architectures such as Residual Networks, we actually tried to intentionally excluded normalization part from them. However, we could not control the exploding gradients caused by the exclusion of normalization.\n\n- More data sets\nAs described in Section 5, we did not experiment on ImageNet only because it requires much time to train each model although we need to train many models. We need to fine-tune the models with different regularization parameters, and we also need multiple training sessions of each model to obtain confidence interval. For example, the experiment results in Figure 3 and 4 include 750 training sessions in total. This is something we cannot afford with ImageNet data set, which requires several weeks of training for EACH session (unless you have GPU clusters). However, we instead performed more experiments on another data set. Specifically, we will add results of different VGG architectures on the SVHN data set, in order to see the difference in the tolerance level that is caused by a different number of hidden layers. We will add these results in the revised version.\n\n- Explanation of Figure 5\nHere is the detailed explanation of Figure 5. Through the grace period where the regularization parameter is zero, we expect the model to reach an \"active learning\" phase with an elevated gradient amount (e.g., green and blue lines in Figure 2b reach there in a couple of epochs). We hypothesize that once the model reaches there, it does not suffer from vanishing gradients any more even when strong regularization is enforced. We empirically show that the hypothesis is valid in Figure 5a, where the gradient amount does not decrease when the strong regularization is enforced (at epoch=5).\n\n In Figure 5b, although the same strong regularization is enforced since epoch 5, the magnitude of weights in our model stops decreasing around epoch 20, while that in baseline keeps decreasing towards zero. This means that our model can cope with strong regularization, and it maintains its equilibrium between gradients from L and those from regularization. 
We will change the Figure 5b and its description to make it more clear.\n\n- p-value\nWe did not compute p-values since we only ran three training sessions for each model. However, as suggested by the reviewer, we computed the p-value and found that most improvements are statistically significant (p < 0.05). We will include the exact p-values in the revised version.\n\nThank you again for the comments, and we will make them clear in the next version.", "We thank the reviewer for the interesting suggestion. \n\nIn our paper, we proposed to adopt strong regularization for two main goals. One goal is to improve the model's accuracy, and the other goal is to compress the model while the accuracy is kept at the same level. Your suggestion meets especially the latter one. However, we think that it will be very difficult for your suggested approach to perform well with deep neural networks. Once the strong regularization is enforced in the beginning, the magnitudes of weights decrease quickly. This in turn drives the magnitudes of gradients to diminish exponentially in deep neural networks as explained in Section 2.2, and thus, the model loses its ability to learn after a short period of strong regularization. Even if we reduce the strength of the regularization after the strong regularization, it will be difficult for the model to recover its learning ability because the gradients are proportional to the product of the weights at later layers.\n\nIn order to actually check if your suggested method works, we performed a simple experiment with VGG-16 on CIFAR-100. We set the initial regularization parameter \\lambda=2*10^-3 and 6*10^-5 for L2 and L1 regularization, respectively, which are just above the \"tolerance level\". Then, we continuously reduced \\lambda_t to zero throughout the training session. The trained models didn't show any improvement over \"random guess\", which means that they were not able to learn. \n\nWe could not perform experiments on ImageNet for the following reason (as answered to other reviewers).\n\nAs described in Section 5, we did not experiment on ImageNet only because it requires much time to train each model although we need to train many models. We need to fine-tune the models with different regularization parameters, and we also need multiple training sessions of each model to obtain confidence interval. For example, the experiment results in Figure 3 and 4 include 750 training sessions in total. This is something we cannot afford with ImageNet data set, which requires several weeks of training for EACH session (unless we have GPU clusters). \n\nHowever, we instead performed additional experiments on another data set. Specifically, we will add results of different VGG architectures on the SVHN data set, in order to see the difference in the tolerance level that is caused by a different number of hidden layers. We will add these results in the revised version.\n\nWe will make these points clear in the revised draft.", "We thank the reviewer for the insightful comments.\n\nWe agree that our proposed approach is very simple. The reason we employed the simple method is that it is effective while it is simple to implement for readers. The only additional hyper-parameter, which is the number of initial epochs to skip regularization, is also not difficult to set. 
We think that the proposed method is very close to the traditional regularization method so that it inherits the traditional one's good performance for non-strong regularization while it also achieves strong regularization. \n\nWe actually tried a couple more approaches other than the proposed one in our preliminary experiments. We found that the proposed one shows the best accuracy among the approaches we tried while it is the simplest. For example, we tried an approach that can be regarded as a warm-start strategy. It starts with the regularization parameter \\lambda_t=0, and then it gradually increases \\lambda_t to \\lambda for \\gamma epochs, where \\gamma >= 0 and it is empirically set. We found that it can achieve strong regularization, but its best accuracy is slightly lower than that of our proposed approach. We think that this is because our model can explore the search space more freely without regularization while the warm-start model enforces some regularization during the warm-up stage. \n\nWe also tried a method that is similar to Ivanov regularization. In this method, the regularization term is applied only when the L1 norm of the weights is greater than a certain threshold. To enforce strong regularization, we set the \\lambda just above the tolerance level that is found by the baseline method. However, this method did not accomplish any learning. The reason is that, to reach the level of L1 norm that is low enough, the model needs to go through the strong regularization for the first few epochs, and the neurons already lose its learning ability during this period like the baseline method. If we set the lambda below the tolerance level, it cannot reach the desired L1 norm without strong regularization, and thus the performance is inferior to our proposed method. \n\nWe did not extend these preliminary experiments to full experiments because the required number of training sessions is overwhelming, and the preliminary results were not promising. As mentioned in the answers to the other reviewers, the number of training sessions needed for the results in Figure 3 and 4 was 750, which takes quite much time. We will add this discussion to the paper in the new version to make it clear. We will also add more experiment results on another data set to convince readers of our proposed method's superiority. Specifically, we will add results of different VGG architectures on the SVHN data set, in order to see the difference in the tolerance level that is caused by a different number of hidden layers.\n\nWe also thank the detailed comments at the end of the review. We agree with the reviewer, and we will revise the paper accordingly." ]
[ 6, 4, 5, -1, -1, -1, -1, -1 ]
[ 2, 5, 5, -1, -1, -1, -1, -1 ]
[ "iclr_2018_Bys_NzbC-", "iclr_2018_Bys_NzbC-", "iclr_2018_Bys_NzbC-", "iclr_2018_Bys_NzbC-", "BJk4jatgM", "BJk4jatgM", "rk43lpulz", "H1U0Hpcef" ]
iclr_2018_HkOhuyA6-
Graph Classification with 2D Convolutional Neural Networks
Graph classification is currently dominated by graph kernels, which, while powerful, suffer some significant limitations. Convolutional Neural Networks (CNNs) offer a very appealing alternative. However, processing graphs with CNNs is not trivial. To address this challenge, many sophisticated extensions of CNNs have recently been proposed. In this paper, we reverse the problem: rather than proposing yet another graph CNN model, we introduce a novel way to represent graphs as multi-channel image-like structures that allows them to be handled by vanilla 2D CNNs. Despite its simplicity, our method proves very competitive to state-of-the-art graph kernels and graph CNNs, and outperforms them by a wide margin on some datasets. It is also preferable to graph kernels in terms of time complexity. Code and data are publicly available.
rejected-papers
The submission proposes a strategy for creating vector representations of graphs, to which a CNN can then be applied. Although this is a useful problem to solve, there are multiple works in the existing literature for doing so. Given that the choice between these is essentially empirical, a thorough comparison is necessary. This was pointed out in the reviews, and relevant missing comparisons were given. The authors did not provide a response to these concerns.
train
[ "BJL5gdKgG", "H1W1OsYxG", "HkyP9hteG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The authors propose to use 2D CNNs for graph classification by transforming graphs to an image-like representation from its node embedding. The approach uses node2vec to obtain a node embedding, which is then compacted using PCA and turned into a stack of discretized histograms. Essentially the authors propose an approach to use a node embedding to achieve graph classification.\n\nIn my opinion there are several weak points:\n\n1) The approach to obtain the image-like representation is not well motivated. Other approaches how to aggregate the set of node embeddings for graph classification are known, see, e.g., \"Representation Learning on Graphs: Methods and Applications\", William L. Hamilton, Rex Ying, Jure Leskovec, 2017. The authors should compare to such methods as a baseline.\n\n2) The experimental evaluation is not convincing:\n- the selection of competing methods is not sufficient. I would like to suggest to add an approach similar to Duvenaud et al., \"Convolutional networks on graphs for learning molecular fingerprints\", NIPS 2015.\n- the accuracy results are taken from other publications and it is not clear that this is an authoritative comparison; the accuracy results published for state-of-the-art graph kernels are superior to those obtained by the proposed method, cf., e.g., Kriege et al., \"On Valid Optimal Assignment Kernels and Applications to Graph Classification\", NIPS 2016.\n- it would be interesting to apply the approach to graphs with discrete and continuous labels.\n\n3) The authors argue that their method is preferable to graph kernels in terms of time complexity. This argument is questionable. Most graph kernels compute explicit feature maps and can therefore be used with efficient linear SVMs (unfortunately most publications use a kernelized SVM). Moreover, the running of computing the node embedding must be emphasized: On page 2 the authors claim a \"constant time complexity at the instance level\", which is not true when also considering the running time of node2vec. Moreover, I do not think that node2vec is more efficient than, e.g., Weisfeiler-Lehman refinement used by graph kernels.\n\nIn summary: Since the technical contribution is limited, the approach needs to be justified by an authoritative experimental comparison. This is not yet achieved with the results presented in the submitted paper. Therefore, it should not be accepted in its current form.", "The paper introduces a method for learning graph representations (i.e., vector representations for graphs). An existing node embedding method is used to learn vector representations for the nodes. The node embeddings are then projected into a 2-dimensional space by PCA. The 2-dimensional space is binned using an imposed grid structure. The value for a bin is the (normalized) number of nodes falling into the corresponding region. \n\nThe idea is simple and easily explained in a few minutes. That is an advantage. Also, the experimental results look quite promising. It seems that the methods outperforms existing methods for learning graph representations. \n\nThe problem with the approach is that it is very ad-hoc. There are several (existing) ideas of how to combine node representations into a representation for the entire graph. For instance, averaging the node embeddings is something that has shown promising results in previous work. 
Since the methods is so ad-hoc (node2vec -> PCA -> discretized density map -> CNN architecure) and since a theoretical understanding of why the approach works is missing, it is especially important to compare your method more thoroughly to simpler methods. Again, pooling operations (average, max, etc.) on the learned node2vec embeddings are examples of simpler alternatives. \n\nThe experimental results are also not explained thoroughly enough. For instance, since two runs of node2vec will give you highly varying embeddings (depending on the initialization), you will have to run node2vec several times to reduce the variance of your resulting discretized density maps. How many times did you run node2vec on each graph? \n\n", "The paper presents a novel representation of graphs as multi-channel image-like structures. These structures are extrapolated by \n1) mapping the graph nodes into an embedding using an algorithm like node2vec\n2) compressing the embedding space using pca\n3) and extracting 2D slices from the compressed space and computing 2D histograms per slice.\nhe resulting multi-channel image-like structures are then feed into vanilla 2D CNN.\n \nThe papers is well written and clear, and proposes an interesting idea of representing graphs as multi-channel image-like structures. Furthermore, the authors perform experiments with real graph datasets from the social science domain and a comparison with the SoA method both graph kernels and deep learning architectures. The proposed algorithm in 3 out of 5 datasets, two of theme with statistical significant." ]
[ 3, 4, 7 ]
[ 5, 3, 3 ]
[ "iclr_2018_HkOhuyA6-", "iclr_2018_HkOhuyA6-", "iclr_2018_HkOhuyA6-" ]
iclr_2018_SyW4Gjg0W
Kernel Graph Convolutional Neural Nets
Graph kernels have been successfully applied to many graph classification problems. Typically, a kernel is first designed, and then an SVM classifier is trained based on the features defined implicitly by this kernel. This two-stage approach decouples data representation from learning, which is suboptimal. On the other hand, Convolutional Neural Networks (CNNs) have the capability to learn their own features directly from the raw data during training. Unfortunately, they cannot handle irregular data such as graphs. We address this challenge by using graph kernels to embed meaningful local neighborhoods of the graphs in a continuous vector space. A set of filters is then convolved with these patches, pooled, and the output is then passed to a feedforward network. With limited parameter tuning, our approach outperforms strong baselines on 7 out of 10 benchmark datasets, and reaches comparable performance elsewhere. Code and data are publicly available.
rejected-papers
The reviewers were unanimous in their assessment that the paper was not ready for publication in ICLR. Their concerns included: lack of novelty over Niepert, Ahmed, Kutzkov, ICML 2016; the fact that the approach learns combinations of graph kernels and its expressive capacity is thus limited; and that the results are close to the state of the art and it is not clear whether any improvement is statistically significant. The authors have not provided a response to these concerns.
train
[ "rywsxcOxG", "BJS9j2YlM", "rJstP6Fef" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a graph classification method by integrating three techniques, community detection, graph kernels, and CNNs.\n\n* This paper is clearly written and easy to follow. Thus the clarity is high.\n\n* The originality is not high as the application of neural networks for graph classification has already been studied elsewhere and the proposed method is a direct combination of three existing methods, community detection, graph kernels, and CNNs.\n\n* The quality and the significance of this paper it not high due to the following reasons:\n- The motivation is misleading in two folds.\n First, the authors say that the graph kernel + SVM approach has a drawback due to two independent processes of graph representation and learning.\n However, the parameters included in respective graph kernel is usually optimized via the SVM classification, hence they are not independent with each other.\n Second, the authors say that the proposed method addresses the above issue of independence between graph representation and learning.\n However, it also uses the two-step procedure as it first obtain the kernel matrix K via graph kernels and then apply CNN for classification, which is fundamentally the same as the existing approach.\n Although community detection is used before graph kernels, such subgraph extraction process is already implicitly employed in various graph kernels.\n I recommend to revise and clarify this point.\n- In experimental evaluation, why several kernels including SP, RW, and WL are not used in the latter five datasets?\n This missing experiment significantly deteriorate the quality of empirical evaluation and I strongly recommend to add results for such kernels.\n- It is mentioned that the parameter h is fixed to 5 in the WL kernel. However, it is known that the performance of the WL kernel depends on the parameter and it should be tuned by cross-validation.\n In contrast, parameters (number of epochs and the learning rate) are tuned in the proposed method. Thus the current comparison is not fair.\n- In addition to the above point, how are parameters for GR and RW?\n- Runtime is shown in Table 4 but there is no comparison with other methods. Although it is mentioned in the main text that the proposed method is faster than Graph CNN and Depp Graph Kernels, there is no concrete values and this statement is questionable (Runtime will easily vary due to the hardware configuration).\n\n* Additional comment:\n- Why is the community detection step needed? What will happen if K is directly constructed from given N graphs and what is the advantage of using not the original graphs but extracted subgraphs?\n- In the first step of finding characteristic subgraphs, frequent subgraph mining can be used instead community detection.\n Frequent subgraph mining is extensively used in various methods for classification of graph-structured data, for example:\n * Tsuda, K., Entire regularization paths for graph data, ICML 2007.\n * Thoma, M. et al., Discriminative frequent subgraph mining with optimality guarantees, Statistical Analysis and Data Mining, 2010\n * Takigawa, I., Mamitsuka, H., Generalized Sparse Learning of Linear Models Over the Complete Subgraph Feature Set, IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017\n What is the advantage of using the community detection compared to frequent subgraph mining or other subgraph enumeration methods?\n", "The paper presents a method of using convolution neural networks for classifying arbitrary graphs. 
The authors proposed the following methodology\n1) Extract subgraph communities from the graphs, known as patches\n2) For each patch generate a graph kernel representation and subsampled them using nystrom method, producing the normalized patches\n3) Passes the set of normalized patches as input to the CNN \n\nThe paper is well written, proposes an interesting and original idea, provides experiments with real graph datasets from two domains, bioinformatics and social sciences, and a comparison with SoA algorithms both graph kernels and other deep learning architectures. Although the proposed algorithm seems to outperform on 7 out of 10 datasets, the performances are really close to the best SoA algorithm. Is there any statistical significance over the gain in the performances? It's not really clear from the reported numbers. Moreover, the method makes an strong assumption that the graph is strongly characterized by one of its patches, ie its subgraph communities, which might not be the case in arbitrary graph structures, thus limiting their method. I am not really convince about the preprocessing step of patch extraction. Have the authors tried to test what is the performance of graph kernel representation in the complete graph as input to the CNN, instead of a set of patches? Moreover, although the authors claim that typical graph kernel methods are two-stage approached decoupling representation from learning, their proposal also folds into that respect, as representation is achieved in the preprocessing step of patching extractions and normalization, while learning is achieved by the CNN. Finally, it is not also clear to me the what are the communities reported in Table 2 for the bioinformatics datasets. Where they come from and what do they represent? ", "The authors propose a method for graph classification by combining graph kernels and CNNs. In a first step patches are extracted via community detection algorithms. These are then transformed into vector representation using graph kernels and fed to a neural network. Multiple graph kernels may serve as different channels. The approach is evaluated on synthetic and real-world graphs.\n\nThe article is well-written and easily comprehensible, but suffers from several weak points:\n\n* Features are not learned directly from the graphs, but the approach merely weights graph kernel features.\n* The weights refer to the RKHS and filters are not easily interpretable.\n* The approach is similar in spirit to Niepert, Ahmed, Kutzkov, ICML 2016 and thus incremental.\n* The experiments are not convincing: The improvement over the existing work is small on real-world data sets. The synthetic classification task essentially is to distinguish a clique from star graph and not very meaningful. Moreover, a comparison to at least one of the recent approaches similar to \"Convolutional Networks on Graphs for Learning Molecular Fingerprints\" (Duvenaud et al., NIPS 2015) or \"Message Passing Neural Networks\" (Gilmer et al., 2017) would be desirable.\n\nTherefore, I cannot recommend the paper for acceptance." ]
[ 5, 5, 4 ]
[ 5, 4, 5 ]
[ "iclr_2018_SyW4Gjg0W", "iclr_2018_SyW4Gjg0W", "iclr_2018_SyW4Gjg0W" ]
iclr_2018_BkVsWbbAW
Deep Generative Dual Memory Network for Continual Learning
Despite advances in deep learning, artificial neural networks do not learn the same way as humans do. Today, neural networks can learn multiple tasks when trained on them jointly, but cannot maintain performance on learnt tasks when tasks are presented one at a time -- this phenomenon called catastrophic forgetting is a fundamental challenge to overcome before neural networks can learn continually from incoming data. In this work, we derive inspiration from human memory to develop an architecture capable of learning continuously from sequentially incoming tasks, while averting catastrophic forgetting. Specifically, our model consists of a dual memory architecture to emulate the complementary learning systems (hippocampus and the neocortex) in the human brain and maintains a consolidated long-term memory via generative replay of past experiences. We (i) substantiate our claim that replay should be generative, (ii) show the benefits of generative replay and dual memory via experiments, and (iii) demonstrate improved performance retention even for small models with low capacity. Our architecture displays many important characteristics of the human memory and provides insights on the connection between sleep and learning in humans.
rejected-papers
Thank you for submitting your paper to ICLR. The big-picture idea is fairly simple, although the implementation is certainly challenging, requiring a deep generative model to be trained as part of the final system. The experimental validation is not sufficient to warrant publication. A comparison to a larger number of competitors, e.g. [1, 2], on a greater range of tasks is required. [1] Continual Learning Through Synaptic Intelligence, Friedemann Zenke, Ben Poole, Surya Ganguli, ICML 2017 [2] Gradient Episodic Memory for Continual Learning, David Lopez-Paz and Marc’Aurelio Ranzato, NIPS 2017
train
[ "rkVGk_UEG", "SydEmAKgM", "rk8n0esxf", "S1t718Tef", "B1uEjlZXM", "BJf-jeW7z", "Hk_3rebmG", "SJTSBxZmf", "r1gLq2olz", "rkb_ZMilM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "public" ]
[ "Using inspirations from brain architecture and mechanism is certainly great direction and I applause authors for good referencing of some of the relevant literature . But I get a sense from the authors both in the paper and in their comment that they are over emphasizing the link of their work and how brain functions. We should be more careful and cautious in drawing conclusions and connecting theories specially about the a topic like memory and sleep which is still heavily under investigation in neuroscience and cognitive science communities and no widely accepted there exist. The theory of short-term long-term memory and memory consolidate is not the only model for memory, e.g. there is the multiple trace model and there are more recent studies that question the memory consolidation models. \n\nI like to thank you the authors for adding new experiments and revising the paper. It has improved compared to the original version. I very much like the directions of this kind of research. But unfortunately I think it’s a complicated training mechanism with weak experimental evidence. Even though whole structure is novel, the components, and the breakdown of training to sleep, wake, or generative memory are novel.", "This paper propose a variant of generative replay buffer/memory to overcome catastrophic forgetting. They use multiple copy of their model DGMN as short term memories and then consolidate their knowledge in a larger DGMN as a long term memory. \n\nThe main novelty of this work are 1-balancing mechanism for the replay memory. 2-Using multiple models for short and long term memory. The most interesting aspect of the paper is using a generate model as replay buffer which has been introduced before. As explained in more detail below, it is not clear if the novelties introduced in this paper are important for the task or if they are they are tackling the core problem of catastrophic forgetting. \n\nThe paper claims using the task ID (either from Oracle or from a HMM) is an advantage of the model. It is not clear to me as why is the case, if anything it should be the opposite. Humans and animal are not given task ID and it's always clear distinction between task in real world.\n\nDeep Generative Replay section and description of DGDMN are written poorly and is very incomprehensible. It would have been more comprehensive if it was explained in more shorter sentences accompanied with proper definition of terms and an algorithm or diagram for the replay mechanism. \n\nUsing the STTM during testing means essentially (number of STTM) + 1 models are used which is not same as preventing one network from catastrophic forgetting.\n\nBaselines: why is Shin et al. (2017) not included as one of the baselines? As it is the closet method to this paper it is essential to be compared against.\n\nI disagree with the argument in section 4.2. A good robust model against catastrophic forgetting would be a model that still can achieve close to SOTA. Overfitting to the latest task is the central problem in catastrophic forgetting which this paper avoids it by limiting the model capacity.\n\n12 pages is very long, 8 pages was the suggested page limit. It’s understandable if the page limit is extend by one page, but 4 pages is over stretching. ", "This paper reports on a system for sequential learning of several supervised classification tasks in a challenging online regime. Known task segmentation is assumed and task specific input generators are learned in parallel with label prediction. 
The method is tested on standard sequential MNIST variants as well as a class-incremental variant. Superior performance to recent baselines (e.g. EWC) is reported in several cases. Interesting parallels with human cortical and hippocampal learning and memory are discussed.\n\nUnfortunately, the paper does not go beyond the relatively simplistic setup of sequential MNIST, in contrast to some of the methods used as baselines. The proposed architecture implicitly reduces the continual learning problem to a classical multitask learning (MTL) setting for the LTM, where (in the best case scenario) i.i.d. data from all encountered tasks is available during training. This setting is not ideal, though. There are several examples of successful multitask learning, but it does not follow that a random grouping of several tasks immediately leads to successful MTL. Indeed, there is good reason to doubt this in both supervised and reinforcement learning domains. In the latter case it is well known that MTL with arbitrary sets of tasks does not guarantee superior, or even comparable, performance to plain single-task learning, due to ‘negative interference’ between tasks [1, 2]. I agree that problems can be constructed where these assumptions hold, but this core assumption is limiting. The requirement of task labels also rules out important use cases such as following a non-stationary objective function, which is important in several realistic domains, including deep RL.\n\n[1] Parisotto, Emilio; Lei Ba, Jimmy; Salakhutdinov, Ruslan: Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning. ICLR 2016.\n[2] Andrei A. Rusu, Sergio Gomez Colmenarejo, Çaglar Gülçehre, Guillaume Desjardins, James Kirkpatrick, Razvan Pascanu, Volodymyr Mnih, Koray Kavukcuoglu, Raia Hadsell: Policy Distillation. ICLR 2016.", "This paper introduces a neural network architecture for continual learning. The model is inspired by current knowledge about long term memory consolidation mechanisms in humans. As a consequence, it uses:\n-\tOne temporary memory storage (inspired by the hippocampus) and a long term memory\n-\tA notion of memory replay, implemented by generative models (VAE), in order to simultaneously train the network on different tasks and avoid catastrophic forgetting of previously learnt tasks.\nOverall, although the results are not very surprising, the approach is well justified and extensively tested. It provides some insights on the challenges and benefits of replay-based memory consolidation.\n\nComments:\n\n1-\tThe results are somewhat unsurprising: as we are able to learn generative models of each task, we can use them to train on all tasks at the same time, and beat algorithms that do not use this replay approach. \n2-\tIt is unclear whether the approach provides a benefit for a particular application: as the task information has to be available, training separate task-specific architectures or using classical multitask learning approaches would not suffer from catastrophic forgetting and would perform better (I assume). \n3-\tSo the main benefit of the approach seems to point towards the direction of what possibly happens in real brains.
It is interesting to see how authors address practical issues of training based on replay and it show two differences with real brains: 1/ what we know about episodic memory consolidation (the system modeled in this paper) is closer to unsupervised learning, as a consequence information such as task ID and dictionary for balancing samples would not be available, 2/ the cortex (long term memory) already learns during wakefulness, while in the proposed algorithm this procedure is restricted to replay-based learning during sleep.\n4-\tDue to these differences, I my view, this work avoids addressing directly the most critical and difficult issues of catastrophic forgetting, which relates more to finding optimal plasticity rules for the network in an unsupervised setting\n5-\tThe writing could have been more concise and the authors could make an effort to stay closer to the recommended number of pages.\n", "7- We had many insightful experiments in our paper and hence needed more than 8 pages. But respecting the reviewer’s advice, we have worked hard to shorten the paper and bring the main body closer to the recommended number of pages (at 9 pages). During the process we have also improved the figures, made some sections (especially section 3) more concise and understandable, and moved some parts to the appendices. We hope the reviewer will not object to our usage of an extra page considering the experiments and insights involved, and also since ICLR has no strict limit.\n\n8- Below is a brief summary of changes in the new version of our paper:-\n\t[a] Figures 1 and 2 redrawn for clarity.\n\t[b] Length of main paper shortened from 12 to 9 pages, some parts moved to appendices.\n\t[c] Section 3 made more comprehensible (per your suggestion).\n\t[d] Deep Generative Replay added as an algorithm (appendix A), with clearer explanation.\n\t[e] Section 5 made more concise, discussion on MTL, task interference and distillation added.\n\t[f] Add results on two more datasets (Shapes, Hindi) to appendix A.\n\t[g] Some minor spelling and grammar issues rectified.\n\nLastly, we thank the reviewer for taking the time to read our response and sincerely hope that in the light of the above clarifications, the reviewer would reconsider his/her rating.", "We are grateful for the valuable feedback. Below is our response for the questions and comments:\n\n1- From the reviewer’s feedback, we felt that some of our contributions might have gone unnoticed and we clarify them here:\nOur architecture was inspired from the mammalian brain's dual memory architecture and the evidence for replay in the human brain. Though neuroscientific theories of complementary learning systems have existed for a long time, there is no clear agreement on why the brain has evolved a dual memory architecture and on the connection between sleep and learning? It is also unclear how the two memories interact (there is some evidence for some kind of experience replay).\nOur work made one of the first attempts at finding a plausible computational architecture solution. Apart from establishing why replay must be a generative process and showing the scalability and performance retention offered by a dual-memory architecture, our work also sheds light on the evolution of sleep and how it might be required for learning scalably. We do not claim that our architecture is exactly how the human brain functions, but our approach has remarkably similar characteristics to those observed for the human memory observed in neuroscience and psychology literature. 
Our work lays foundation for a neuroscience inspired solution to challenges in artificial general intelligence.\nAt the same time, from an algorithmic perspective, our approach achieves many desirable characteristics absent in other state-of-the-arts: (a) no stagnation unlike in approaches which modulate neural plasticity or generate sparse representations, (b) permits gradual forgetting when lacking capacity to learn all tasks, (c) reconstruction and denoising capabilities, (d) works well under revision, and (e) works even for small neural networks and on datasets like Digits (with heavily correlated samples per task) where most other baselines undergo severe catastrophic forgetting.\n\n2- We do not claim that using task IDs is an advantage of our model. It is a requirement, but is not particularly limiting since it can be handled in practice with a HMM-based inference scheme as has also been used by previous state-of-the-arts (e.g. Kirkpatrick et al., 2017).\n\n3- We do not mitigate catastrophic forgetting in a **single** network, but rather in an architecture capable of learning continuously on incoming tasks.\n\n4- As of writing this response, neither the arXiv version nor the NIPS version of Shin et al. (2017) contains any details of the network architectures or hyperparameters for any experiments (no supplementary material or github links either). Further, no details have been provided about the mixing ratio of samples to perform generative replay, and it is unclear how to re-implement their work. Even so, one of our baselines (DGR) is fairly close to their work and our approach outperforms DGR, both in terms of performance retention and training time (section 4.4).\n\n5- Limiting model capacity does not get around the central challenge of catastrophic forgetting, but rather takes it head-on. See figure 3b of Kirkpatrick et al (2017) which shows that even after learning 10 tasks sequentially, their baseline (SGD+Dropout) drops to only little below 80% net accuracy on these tasks. Such forgetting can hardly be deemed catastrophic, and occurs because using large networks partly mitigates the problem. Using 2-hidden layer networks with above 400 units per layer (Goodfellow et al, 2015; Kirkpatrick et al, 2017) masks the contribution of any approach since the overparameterized network might be aiding in mitigating catastrophic forgetting. Our experiments show that our approach still retains a good accuracy even with small networks (two fully-connected layers having 48, 48 units; appendix B), whereas most other baselines are not able to retain their accuracy (see figures 4 and 5). If smaller models would have helped our approach, they would **also have helped the baselines**, which is clearly not the case (figure 4).\n\n6- The reviewer might have misunderstood our comment about achieving less accuracy than SOTA. We only implied that we have not used large overparameterized networks for greater than 99% **joint** accuracies, but rather those with a reasonable (94-96%) joint accuracy (see point 5 for reason). We do indeed outperform the SOTA baselines in mitigating catastrophic forgetting, as shown by all our experiments. We have rectified this in the new draft to avoid misunderstanding for future readers.", "We are grateful for the valuable feedback. 
Below is our response for the questions and comments:\n\n1- Since this was an initial attempt to understand the memory architecture in mammals and design a similar one to mitigate forgetting and learn continuously, we experimented with simple tasks in the supervised settings (and unsupervised settings, since our VAEs do unsupervised reconstruction). Nevertheless, our experiments were quite insightful (at least for us) and have provided interesting ways to explore this approach further. Since you mentioned experimenting only with sequential MNIST variants, we have also added experiments with two new datasets to the current draft (one dealing with learning geometric shapes and other for hindi language) to clarify that our approach has no reliance on MNIST variants and easily extends beyond it.\n\nThe revered reviewer also pointed out that our algorithm requires task IDs, which may not be available. We emphasize that in supervised and unsupervised settings, where tasks come in batches, this is not particularly limiting since IDs can be generated via a task identification module in practice (say using a HMM-based inference scheme), as has been done by previous work (e.g. EWC: Kirkpatrick et al., 2017).\n\nHowever, we eventually wish to scale up the architecture to continually streaming inputs like in reinforcement learning. Even in this setting if the idea is to learn on multiple RL domains sequentially, then our method can be extended easily with task IDs (as done by Kirkpatrick et al., 2017). However, learning without forgetting within a single domain is a somewhat more challenging job and in such setting, a replacement for task IDs might be required. We point out that we this was out of scope of our current work and should not be counted as a shortcoming, but we are actively working towards the same for future work.\n\n2- We understand that MTL and inter-task interaction are important subproblems in the field of continual learning, but we focus on mitigating catastrophic forgetting in this work. This is also an equally important problem and it is hard to fit two broad problems in a single paper. Modeling inter-task interaction deserves a study of its own. As mentioned at the end of section 2, our goal is to learn tasks sequentially, while avoiding catastrophic forgetting and achieve test loss close to that of a jointly trained model.\nAs for distillation, authors in [2] write: \"Policy distillation may offer a means of combining multiple policies into a single network without the damaging interference and scaling problems. Since policies are compressed and refined during the distillation process, we surmise that they may also be more effectively combined into a single network.\" This is also true of our approach. 
Since our generative replay distills several tasks together from the STM to the LTM while refining them along with previously existing tasks in the LTM, we believe that it provides a similar effective way to combine tasks and deal with the damaging inter-task interference.\nWe have also added connections between our approach and distillation to the new version of our paper (section 5).\n\n3- Below is a brief summary of changes in the new version of our paper:-\n\t[a] Figures 1 and 2 redrawn for clarity.\n\t[b] Length of main paper shortened from 12 to 9 pages, some parts moved to appendices.\n\t[c] Section 3 made more comprehensible (as suggested by AnonReviewer3).\n\t[d] Deep Generative Replay added as an algorithm (appendix A), with clearer explanation.\n\t[e] Section 5 made more concise, discussion on MTL, task interference and distillation added.\n\t[f] Add results on two more datasets (Shapes, Hindi) to appendix A.\n\t[g] Some minor spelling and grammar issues rectified.\n\nLastly, we thank the reviewer for taking the time to read our response and hope that your queries were appropriately clarified.\n\n[2] Andrei A. Rusu, Sergio Gomez Colmenarejo, Çaglar Gülçehre, Guillaume Desjardins, James Kirkpatrick, Razvan Pascanu, Volodymyr Mnih, Koray Kavukcuoglu, Raia Hadsell: Policy Distillation. ICLR 2016.", "We are grateful for the valuable feedback. Below is our response for the questions and comments:\n\n2- Classic MTL approaches do not involve task segmentation because they assume that all tasks are predefined (in which case assigning an ID is trivial and can be done manually without fear of repetition). Continuous learning requires task segmentation since tasks arrive sequentially, and we do not explicitly store any task samples. Task segmentation is not a very limiting assumption and can be met in practice using an HMM-based inference scheme as has been used in existing state-of-the-arts (e.g. EWC: Kirkpatrick et al., 2017). We have also added a discussion of connections to classical MTL and policy distillation in section 5.\n\n3- In addition to shedding light into the utility of the brain's dual memory architecture, our method also achieves many other desirable characteristics absent from other state-of-the-arts (please see our joint note to reviewers).\nAs for learning during wakefulness, we omitted it due to space constraints, since it is easily emulated by small intermediate consolidation steps with fewer training iterations.\n\n4- Unsupervised plasticity is one way to mitigate catastrophic forgetting, but it is not the only way. Performing (unsupervised) generative replay along with dual-memory consolidation is a viable option and our experiments show that generative replay outperforms plasticity and sparse-representation oriented approaches (Kirkpatrick et al., 2017; Goodfellow et al., 2015).\n\n5- We had many insightful experiments in our paper and hence needed more than 8 pages. But respecting the reviewer’s advice, we have worked hard to shorten the paper and bring the main body closer to the recommended number of pages (at 9 pages). During the process we have also improved the figures, made some sections more concise, and moved some parts to the appendices. 
We hope the reviewer will understand and not object to our usage of an extra page considering the experimentation and insights involved, and also since ICLR has no strict limit.\n\n6- Below is a brief summary of changes in the new version of our paper:-\n\t[a] Figures 1 and 2 redrawn for clarity.\n\t[b] Length of main paper shortened from 12 to 9 pages, some parts moved to appendices.\n\t[c] Section 3 made more comprehensible (as suggested by AnonReviewer3).\n\t[d] Deep Generative Replay added as an algorithm (appendix A), with clearer explanation.\n\t[e] Section 5 made more concise, discussion on MTL, task interference and distillation added.\n\t[f] Add results on two more datasets (Shapes, Hindi) to appendix A.\n\t[g] Some minor spelling and grammar issues rectified.\n\nLastly, we thank the reviewer for taking the time to read the response and hope that your queries were appropriately clarified.", "Our permuted MNIST variant does contain the \"fixed random permutation of all pixels\" tasks (see sec 8.1, appendix B, tasks iv and v for permnist). We basically included a few of all kinds of tasks (whitening-type, permutation-type and reflection-type), as was necessary to prove that our algorithm performs as well as the baselines on a set of tasks used in the past.\nHowever, permnist is not our major focus, since as pointed out in section 4.2.1, permnist is not a good dataset to test for catastrophic forgetting. We observed that it is easily conquered by most approaches if they use a largely overparameterized neural network. See figure 3b of Kirkpatrick et al (2017) and you'll observe that even after learning 10 tasks sequentially, their baseline (SGD+Dropout) drops to only little below 80% accuracy. Such forgetting can hardly be deemed catastrophic, and is partly because of using really large networks. Using a 2-hidden layer network each with above 400 units per layer (Goodfellow et al, 2015; Kirkpatrick et al, 2017) allows the network to essentially finds ways to **memorize** samples and labels from different tasks without inducing much parameter sharing. In such cases, it is unclear if it is the continual learning algorithm at work, or just the overparameterized network aiding in mitigating catastrophic forgetting. But our experiments show that our approach still retains a good accuracy even with small networks (with fully-connected layers having 48, 48 units; see appendix B), whereas most other baselines are not able to retain their accuracy (see figure 5a). \nMoreover, we show that datasets like Digits, although simple at first glance, are actually much more challenging datasets to test for catastrophic forgetting and are hard to conquer even with overparameterized networks. Hence, we focused most of our attention on Digits and TDigits. \nLastly, to clarify again, we did experiment with the full permnist dataset and can include more \"permutation\" tasks if needed, since our algorithm (DGDMN) works perfectly well with all permutation-type tasks and outperforms all baselines on the full permnist too.\n\nThe consolidation phase frequency is characterized by n_{STM} hyperparameter. n_{STM} was 2 for both Digits and Permnist, and 5 for TDigits (see section 8.4 in appendix B).", "Interesting approach! \n\nJust a few questions about the tasks and parameters. \n\nThe permuted MNIST variant considered here seems to be different from the setting in the Goodfellow et al (2014) and Kirkpatrick et al (2017) unless I'm mistaken? What was the rationale behind this? 
Does the model proposed also cope well with the standard \"fixed random permutation of all pixels\" for each task, as opposed to the cropping and whitening style tasks employed in the paper? \n\nFurther, how often was the \"sleep\" or consolidation phase used?\n\n\n\n\n" ]
[ -1, 5, 6, 7, -1, -1, -1, -1, -1, -1 ]
[ -1, 4, 4, 2, -1, -1, -1, -1, -1, -1 ]
[ "BJf-jeW7z", "iclr_2018_BkVsWbbAW", "iclr_2018_BkVsWbbAW", "iclr_2018_BkVsWbbAW", "SydEmAKgM", "SydEmAKgM", "rk8n0esxf", "S1t718Tef", "rkb_ZMilM", "iclr_2018_BkVsWbbAW" ]
iclr_2018_HJ5AUm-CZ
The Variational Homoencoder: Learning to Infer High-Capacity Generative Models from Few Examples
Hierarchical Bayesian methods have the potential to unify many related tasks (e.g. k-shot classification, conditional, and unconditional generation) by framing each as inference within a single generative model. We show that existing approaches for learning such models can fail on expressive generative networks such as PixelCNNs, by describing the global distribution with little reliance on latent variables. To address this, we develop a modification of the Variational Autoencoder in which encoded observations are decoded to new elements from the same class; the result, which we call a Variational Homoencoder (VHE), may be understood as training a hierarchical latent variable model which better utilises latent variables in these cases. Using this framework enables us to train a hierarchical PixelCNN for the Omniglot dataset, outperforming all existing models on test set likelihood. With a single model we achieve both strong one-shot generation and near human-level classification, competitive with state-of-the-art discriminative classifiers. The VHE objective extends naturally to richer dataset structures such as factorial or hierarchical categories, as we illustrate by training models to separate character content from simple variations in drawing style, and to generalise the style of an alphabet to new characters.
rejected-papers
Thank you for submitting your paper to ICLR. The reviewers agree that the idea of sharing the approximating distribution across sets of variables is an interesting one and that the Omniglot experiments are thorough. However, although the authors make the nice addition of some simple examples during the revision period and a new table of quantitative results on Omniglot, the consensus is that the experimental results are not quite persuasive enough for publication. Adding a second dataset, such as mini-imagenet or the youtube faces dataset, would make the paper very strong.
train
[ "rJX5uPLNG", "BkviGptxG", "rkJkoJogf", "H1V4eZ3lG", "HJyH8DTQM", "SyleUDaQG", "B1e9BP67M", "H1ufSPa7z" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "Thank you for making edits to the technical portion of the paper. I believe the changes improve the paper's readability.", "This paper presents an alternative approach to constructing variational lower bounds on data log likelihood in deep, directed generative models with latent variables. Specifically, the authors propose using approximate posteriors shared across groups of examples, rather than posteriors which treat examples independently. The group-wise posteriors allow amortization of the information cost KL(group posterior || prior) across all examples in the group, which the authors liken to the \"KL annealing\" tricks that are sometimes used to avoid posterior collapse when training models with strong decoders p(x|z) using current techniques for approximate variational inference in deep nets.\n\nThe presentation of the core idea is solid, though it did take two read-throughs before the equations really clicked for me. I think the paper could be improved by spending more time on a detailed description of the model for the Omniglot experiments (as illustrated in Figure 3). E.g., explicitly describing how group-wise and per-example posteriors are composed in this model, using Equations and pseudo-code for the main training loop, would have saved me some time. For readers less familiar with amortized variational inference in deep nets, the benefit would be larger.\n\nI appreciate that the authors developed extensions of the core method to more complex group structures, though I didn't find the related experiments particularly convincing. \n\nOverall, I like this paper and think the underlying group-wise posterior construction trick is worth exploring further. Of course, the elephant in the room is how to determine the groups across which the posteriors can be shared and their information costs amortized.", "- Good work on developing VAEs for few-shot learning.\n- Most of the results are qualitative and I reckon the paper was written in haste.\n- The rest of the comments are below:\n\n- 3.1: I got a bit confused over what X actually is:\n -- \"We would like to learn a generative model for **sets X** of the form\".\n --\"... to refer to the **class X_i** ...\".\n -- \"we can lower bound the log-likelihood of each **dataset X** ...\"\n\n- 3.2: \"In general, if we wish to learn a model for X in which each latent variable ci affects some arbitrary subset Xi of the data (**where the Xi may overlap**), ...\": Which is just like learning a Z for a labeled X but learning it in an unsupervised manner, i.e. the normal VAE, isn't it? If not, could you please elaborate on what is different (in the case of 3.2 only, I mean)? i.e. Could you please elaborate on what's different (in terms of learning) between 3.2 and a normal latent Z that is definitely allowed to affect different classes of the data without knowing the classes?\n\n- Figure 1 is helpful to clarify the main idea of a VHE.\n\n- \"In a VHE, this recognition network takes only small subsets of a class as input, which additionally ...\": And that also clearly leads to loss of information that could have been used in learning. So there is a possibility for potential regularization but there is definitely a big loss in estimation power. This is obviously possible with any regularization technique, but I think it is more of an issue here since parts of the data are not even used in learning.\n\n- \"Table 4.1 compares these log likelihoods, with VHE achieving state-of-the-art. 
To\": Where is Table 4.1??\n\n- This is a minor point and did not have any impact on the evaluation but VAE --> VHE, reparameterization trick --> resampling trick. Maybe providing rather original headings is better? It's a style issue that is up to tastes anyway so, again, it is minor.\n\n- \"However, sharing latent variables across an entire class reduces the encoding cost per element is significantly\": typo.\n\n- \"Figure ?? illustrates\".\n", "The paper presents some conceptually incremental improvements over the models in “Neural Statistician” and “Generative matching networks”. Nevertheless, it is well written and I think it is solid work with reasonable convincing experiments and good results. Although, the authors use powerful PixelCNN priors and decoders and they do not really disentangle to what degree their good results rely on the capabilities of these autoregressive components.", "> It did take two read-throughs before the equations really clicked for me\nTo improve the clarity of our training procedure, particularly for readers less familiar with amortized variational inference, we have now added pseudocode for the main training loop. We have also elaborated on section 3.1 to explicitly detail how a per-element latent variable z may be added to the bound. Thanks for the suggestions!\n\n> I appreciate that the authors developed extensions of the core method to more complex group structures, though I didn't find the related experiments particularly convincing.\nWe have since slightly revised the architecture used in our style/content factorisation experiments, leading to a more powerful model which is able to adapt both colour and pen stroke simultaneously. We hope that the reviewer finds our revised factorial model at least somewhat more convincing.\n\n> Of course, the elephant in the room is how to determine the groups across which the posteriors can be shared and their information costs amortized.\nIndeed, in this paper we tackle only the problem of learning with known labels, but discovering the structure of a dataset unsupervised is an interesting problem and potential future direction! We’d be particularly keen to see our approach embedded within a larger EM-like training algorithm, alternating gradient steps with reassignment of elements to groups. From a good initialisation, we expect that this approach could go a long way towards learning such expressive mixture models from only unlabelled data.\n", "> “I reckon the paper was written in haste” “I got a bit confused over what X actually is”\nWe greatly apologise that our initial submission contained typos and broken references. These have been fixed in our revised submission, and much of the language has also been updated to improve consistency. We now use ‘set’ for all elements which share a particular latent variable (e.g. all images of a particular character) and ‘dataset’ for the collection of all such sets (e.g. “the omniglot dataset”). Thanks for highlighting this point of unclarity - we hope that our modification will make the exposition clearer for future readers.\n\n> Most of the results are qualitative\nFor our revised submission, we include a new section (4.1) in the main paper which compares VHE and Neural Statistician objectives on five simple synthetic datasets, aiming to provide stronger empirical support for the theoretical motivations of section 3. 
In Omniglot experiments, we now also include quantitative results comparing the generative performance for all eight architecture/objective combinations (Table 3), in addition to the classification accuracy results provided in Table 1.\n\n> Could you please elaborate on what's different (in terms of learning) between 3.2 and a normal latent Z that is definitely allowed to affect different classes of the data without knowing the classes?\nThe extended VHE objective allows us to learn latent variable models for structured datasets when the structure is known in advance. For example, our factorial objective might be applied to a dataset of rendered 3D faces when each image is labelled by identity, pose and lighting conditions (as in Deep Convolutional Inverse Graphics Network, Kulkarni et al. 2015) by introducing a separate latent variable for each identity, each pose and each lighting condition. \nWe do not tackle unsupervised learning of such categorical structure in this paper, although we’d be keen to see our approach embedded within a larger EM-like training algorithm, alternating gradient steps with reassignment of elements to groups.\n\n> There is a possibility for potential regularization but there is definitely a big loss in estimation power.\nThis loss in estimation power is a significant limitation of our method; indeed, it was a great surprise to us that our model achieved such strong results despite this! We attribute its success to the high similarity between Omniglot images within the same class, allowing the approximate posterior q(c; D) to remain relatively robust to different choices of D. However, we expect that reduced estimation power may pose a greater challenge in domains with greater intra-class variation (such natural images) and with this in mind propose an alternative objective in Supplement 6.1 which may tighten the variational bound using an auxiliary inference network. Experimentation in this setting remains a direction of future research.\nWe’d also like to note the tricky comparison to existing work with regard to estimation power. The Neural Statistician employs a tighter variational bound, but does so for only an approximate marginal likelihood (based on subsampled training sets). This approximation itself carries unbounded error with respect to the true likelihood. By contrast, the VHE objective uses resampling only within the inference network, trading off estimation power in order to provide a true lower bound on the likelihood of the complete training set.\n", "> Although, the authors use powerful PixelCNN priors and decoders and they do not really disentangle to what degree their good results rely on the capabilities of these autoregressive components\n\nIn our revised submission, we compare our hierarchical PixelCNN against a standard deconvolutional baseline model on both generation and classification. In these experiments we find that the expressive PixelCNN architecture can improve results significantly, but that our novel training objective is necessary for gaining this improvement. In particular, amongst the four alternative training objectives we tested, only the VHE was able to utilise the more expressive architecture without suffering from either overfitting or latent degeneracy (Tables 1 & 3, Figure 6). 
We have now modified the title and abstract of the paper to more strongly emphasise this relationship between training objective and model architecture.", "We are most grateful for the time and thoughtful comments offered by our reviewers, and delighted by the generally positive sentiment towards the ideas present in this paper. We have posted a revised version in which we aim to address each reviewer’s specific concerns (individual comments are given below). The main changes are the following:\n- Rewriting the abstract/title to emphasise the relationship between our VHE objective and the PixelCNN architecture we apply it to.\n- Elaborating the training procedure (Algorithm 1)\n- Moving the synthetic data experiments to the main paper (Section 4.1)\n- Adding quantitative evaluation of generative performance on all 8 architecture/objective variants (Table 3)\n- Minor architectural modification on factorial architecture (Section 4.3), to obtain improved results (Figure 8)\n- Moving Silhouette generation experiments to Supplementary Material for space.\n\nWe believe that addressing these comments has significantly improved the presentation of our work, and hope that this improvement justifies the increased length of our paper (now 10 pages).\n" ]
[ -1, 7, 5, 6, -1, -1, -1, -1 ]
[ -1, 4, 5, 3, -1, -1, -1, -1 ]
[ "HJyH8DTQM", "iclr_2018_HJ5AUm-CZ", "iclr_2018_HJ5AUm-CZ", "iclr_2018_HJ5AUm-CZ", "BkviGptxG", "rkJkoJogf", "H1V4eZ3lG", "iclr_2018_HJ5AUm-CZ" ]
iclr_2018_Hkp3uhxCW
Revisiting Bayes by Backprop
In this work we explore a straightforward variational Bayes scheme for Recurrent Neural Networks. Firstly, we show that a simple adaptation of truncated backpropagation through time can yield good quality uncertainty estimates and superior regularisation at only a small extra computational cost during training, also reducing the amount of parameters by 80\%. Secondly, we demonstrate how a novel kind of posterior approximation yields further improvements to the performance of Bayesian RNNs. We incorporate local gradient information into the approximate posterior to sharpen it around the current batch statistics. We show how this technique is not exclusive to recurrent neural networks and can be applied more widely to train Bayesian neural networks. We also empirically demonstrate how Bayesian RNNs are superior to traditional RNNs on a language modelling benchmark and an image captioning task, as well as showing how each of these methods improve our model over a variety of other schemes for training them. We also introduce a new benchmark for studying uncertainty for language models so future methods can be easily compared.
rejected-papers
Thank you for submitting your paper to ICLR. The revision improved the paper, e.g. moving Appendix A3 to the main text has improved clarity, but, like reviewer 3, I still found section 4 hard to follow. As the authors suggest, shifting the terminology to "posterior shifting” rather than “sharpening" would help at a high level, but the design choices should be more carefully explained. The experiments are interesting and promising. The title, although altered, still seems a misnomer given that the experimental evaluation focusses on RNNs. Summary: There is the basis of a good paper here, but the rationale for the design choices should be more carefully explained.
train
[ "BJZRkfFgG", "Hy2OnpKeG", "rk156h2gf", "Sk5DnNTXz", "rynmh4amz", "ry8ynV67G", "H1d6T1y7z", "HkueR0FlG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "public", "public" ]
[ "*Summary*\n\nThe paper applies variational inference (VI) with the 'reparameterisation' trick for Bayesian recurrent neural networks (BRNNs). The paper first considers the \"Bayes by Backprop\" approach of Blundell et al. (2015) and then modifies the BRNN model with a hierarchical prior over the network parameters, which then requires a hierarchical variational approximation with a simple linear recognition model. Several experiments demonstrate the quality of the prediction and the uncertainty over dropout. \n\n*Originality + significance*\n\nTo my knowledge, there is no other previous work on VI with the reparameterisation trick for BRNNs. However, one could say that this paper is, on careful examination, an application of reparameterisation gradient VI for a specific application. \n\nNevertheless, the parameterisation of the conditional variational distribution q(\\theta | \\phi, (x, y)) using recognition model is interesting and could be useful in other models. However, this has not been tested or concretely shown in this paper. The idea of modifying the model by introducing variables to obtain a looser bound which can accommodate a richer variational family is also not new, see: hierarchical variational model (Ranganath et al., 2016) for example. \n\n*Clarity*\n\nThe paper is, in general, well-written. However, the presentation in 4 is hard to follow. I would prefer if appendix A3 was moved up front -- in this case, it would make it clear that the model is modified to contain \\phi, a variational approximation over both \\theta and \\phi is needed, and a q that couples \\theta, \\phi and and the gradient of the log likelihood term wrt \\phi is chosen. \n\nAdditional comments:\n\nWhy is the variational approximation called \"sharpened\"?\n\nAt test time, normal VI just uses the fixed q(\\theta) after training. It's not clear to me how prediction is done when using 'posterior sharpening' -- how is q(\\theta | \\phi, x) in eqs. 19-20 parameterised? The first paragraph of page 5 uses q(\\theta | \\phi, (x, y)), but y is not known at test time.\n\nWhat is C in eq. 9?\n\nThis comment \"variational typically underestimate the uncertainty in the posterior...whereas expectation propagation methods are mode averaging and so tend to overestimate uncertainty...\" is not precise. EP can do mode averaging as well as mode seeking, depending on the underlying and approximate factor graphs. In the Bayesian neural network setting when the likelihood is factorised point-wise and there is one factor for each likelihood, EP is just as mode-seeking as variational. On the other hand, variational methods can avoid modes too, see the mixture of Gaussians example in the \"Two problems with variational EM... \" paper by Turner and Sahani (2010).\n\nThere are also many hyperparameters that need to be chosen -- what would happen if these are optimised using the free-energy? Was there any KL reweighting scheduling as done in the original BBB paper? \n\nWhat is the significance of the difference between BBB and BBB with sharpening in the language modelling task? Was sharpening used in the image caption generation task?\n\nWhat is the computational complexity of BBB with posterior sharpening? Twice that BBB? If this is the case, would BBB get to the same performance if we optimise it for longer? Would be interesting to see the time/accuracy frontier.", "This paper proposes an interesting variational posterior approximation for the weights of an RNN. 
The paper also proposes a scheme for assessing the uncertainty of the predictions of an RNN. \n\npros:\n--I liked the posterior sharpening idea. It was well motivated from a computational cost perspective hence the use of a hierarchical prior. \n--I liked the uncertainty analysis. There are many works on Bayesian neural networks but they never present an analysis of the uncertainty introduced in the weights. These works can benefit from the uncertainty analysis scheme introduced in this paper.\n--The experiments were well carried through.\n\ncons:\n--Change the title! the title is too vague. \"Bayesian recurrent neural networks\" already exist and is rather vague for what is being described in this paper.\n--There were a lot of unanswered questions:\n (1) how does sharpening lead to lower variance? This was a claim in the paper and there was no theoretical justification or an empirical comparison of the gradient variance in the experiment section\n(2) how is the level of uncertainty related to performance? It would have been insightful to see effect of \\sigma_0 on the performance rather than report the best result. \n(3) what was the actual computational cost for the BBB RNN and the baselines?\n--There were very minor typos and some unclear connotations. For example there is no such thing as a \"variational Bayes model\".\n\nI am willing to adjust my rating when the questions and remarks above get addressed.", "The manuscript proposes a new framework for inference in RNN based upon the Bayes by Backprop (BBB) algorithm. In particular, the authors propose a new framework to \"sharpen\" the posterior.\n\nIn particular, the hierarchical prior in (6) and (7) frame an interesting modification to directly learning a multivariate normal variational approximation. In the experimental results, it seems clear that this approach is beneficial, but it's not clear as to why. In particular, how does the variational posterior change as a result of the hierarchical prior? It seems that (7) would push the center of the variational structure back towards the MAP point and reduces the variance of the output of the hierarchical prior; however, with the two layers in the prior it's unclear what actually is happening. Carefully explaining *what* the authors believe is happening and exploring how it changes the variational approximation in a classic modeling framework would be beneficial to understanding the proposed change and evaluating it. As a final point, the authors state, \"as long as the improvement along the gradient is great than the KL loss incurred...this method is guaranteed to make progress towards optimizing L.\" Do the authors mean that the negative log-likelihood will be improved in this case? Or the actual optimization? Improving the negative log-likelihood seems straightforward, but I am confused by what the authors mean by optimization.\n\nThe new evaluation metric proposed in Section 6.1.1 is confusing, and I do not understand what the metric is trying to capture. This needs significantly more detail and explanation. Also, it is unclear to me what would happen when you input data examples that are opposite to the original input sequence; in particular, for many neural networks the predictions are unstable outside of the input domain and inputting infeasible data leads to unusable outputs. It's completely feasible that these outputs would just be highly uncertain, and I'm not sure how you can ascribe meaning to them. 
The authors should not compare to the uniform prior as a baseline for entropy. It's much more revealing to compare it to the empirical likelihoods of the words.\n", "Thanks for helpful comments and useful feedback; we have made amendments the manuscript.\n\nWe accepted the suggestion of moving Appendix A3 to the main text of paper, we agree that it makes the presentation more clear. \n\nRegarding the constant C, it is the number of truncated sequences, it is specified just above eq (4) in the paper. We have made it more explicit on the revised version. \n\nWe thank the reviewer for the comment on mode seeking and move averaging, and have updated the text to be more precise.\n\nRegarding the choice of hyperparameters by using the free energy, we optimised the hyperparameters using the performance on the tasks we considered (perplexity); but we found this to correlate with the free-energy. Moreover, we did not do any KL reweighting scheduling.\n\nIn terms of evaluation, many applications of language modeling (such as machine translation, or speech recognition) use a language model to “rank” sentences. In this case, “y” is known at test time. Otherwise, one can still use the hierarchical prior that does not depend on knowing the answer (to e.g. do ancestral sampling). \n\nThe posterior sharpening technique was not tested in the image captioning task and still needs to be further investigated. The improvements of using the posterior sharpening technique are small (but consistent) when compared to standard BBB. Perhaps also shifting the variance of the posterior rather than only the mean (or instead of stepping in the direction of the gradient, you do an update RMS style as proposed in \"Dynamic Evaluation of neural sequence models\" Krause et al) would yield further improvements. \n\nWe may consider renaming posterior sharpening to posterior shifting as that more accurately describes the technique that we introduced in this paper. Furthermore, we believe the technique can still be enhanced by e.g. shifting the variance of the posterior rather than only the mean (or instead of stepping in the direction of the gradient, you do an update RMS style as proposed in \"Dynamic Evaluation of neural sequence models\" Krause et al). Nonetheless, the small (but consistent) improvements shown in the paper and the VAE treatment of Bayesiann Neural Networks novel to this technique makes us excited for further developments around posterior sharpening / shifting.\n\n\nRegarding the computational cost for BBB with posterior sharpening, it will be twice as for standard BBB because the computational cost is dominated by the backward pass of the neural network and posterior sharpening requires two backward passes (see reply to AnonReviewer1 for further discussion). All reported performances are at convergence where both methods have remained at the same performance for the same amount of time. We observed it took roughly the same number of steps to plateau.\n", "Thanks for helpful comments and useful feedback; we have made amendments the manuscript.\n\nWe agree that the title of the paper is too vague and have updated it to \"Revisiting Bayes by Backprop\".\n\nRegarding the lower variance of posterior sharpening, we point the reviewer to the discussion on the last paragraph of Session 6.1. There we compare the perplexity of standard training (i.e., deterministic weights), standard BBB approach and BBB with posterior shaperning after only one epoch of training. 
We see the model with posterior sharpening trains faster and achieves significantly better performance after one epoch, significantly closing the gap with standard training (zero variance) (perplexities of 205 (zero variance), 227 (posterior sharpening) vs 258 (standard BBB)).\n\nRegarding the effect of sigma_0 on the performance of posterior sharpening. We did not find sigma_0 to have a significant effect on performance: if sigma_0 is set too small (<10^-10), you recover the BBB baseline as the KL term pushes \\eta towards 0; if sigma_0 is too large (>0.2), the noise in parameter space becomes too large and no training occurs. The effect is otherwise small but consistently outperforms the BBB baseline.\n\nRegarding the computational cost (at training time), as we stated towards the end of section 6.1: \" \nwe note that the speed of our naive implementation of Bayesian RNNs was 0.7 times the original\nspeed and 0.4 times the original speed for posterior sharpening\". Note that the asymptotic time complexity remains unchanged because the run time complexity of a forward and backward pass through the network is still dominated by the same computations as in a non-Bayesian RNN.\n", "Thanks for helpful comments and useful feedback; we have made amendments the manuscript.\n\nRegarding the posterior sharpening technique, we note that (7) pushes the mean of the posterior towards the maximum likelihood solution, not the MAP solution. Pushing towards the MAP solution is also an option, but as the reviewer notes, in the case of a hierarchical prior, a chicken-and-egg problem emerges as the posterior is defined in terms of the posterior sharpening already. The classic variational formulation for posterior sharpening was previously in Appendix A3 and A4, but now it has been moved to the main text (Sec 4.1-4.2) as suggested by AnonReviewer3.\n\nRegarding the statement \"as long as the improvement along the gradient is great than the KL loss incurred...this method is guaranteed to make progress towards optimizing L.\" Thanks for pointing out the lack of clarity. What we meant is: if the gradient g_phi improves the log likelihood log p(y|theta,x) term more than the KL cost added for posterior sharpening (KL[q(theta|phi,(x,y))||p(theta|phi)]) then the lower bound in (8) will improve. We have amended it in the manuscript. \n\nRegarding the evaluation metric in 6.1.1, the intuition behind it is if you take a natural language sentence and reverse it, then this destroys much of its structure. One would expect that a probabilistic language model (LM) would give lower probability to the reversed sentence over the original. Moreover, a LM equipped with uncertainty estimates such as the one proposed here should produce lower certainty for out of domain inputs (such as reversed text). The metric precisely tries to quantify this (un)certainty. This was meant to be a very simple illustration of how uncertainty estimates behave when the language models are misspecified. Finally, we agree that comparing to the empirical likelihoods is more sensible and we have updated the manuscript with it.\n", "For the comments \"There are many works on Bayesian neural networks but they never present an analysis of the uncertainty introduced in the weights.\"\n\nI am not sure whether this is true. [*] conducted the uncertainty analysis in the context of RNNs. Please see details below:\n\n(1) In Figure 4: Image captioning with different weight samples (each sample is a RNN). 
It shows the diversity of generated captions due to the uncertainty in the weights. Left are the given images, right are the corresponding captions. The captions in each box are from the same model sample\n\n(2) In Figure 6: Question type classification. Both the mean and standard derivation of prediction are shown. It suggests one can leverage the uncertainty information to make decisions: either manually make a human judgement when uncertainty is high, or automatically choose the one with lower standard derivations when both types exhibits similar prediction means. \n\n[*] Scalable Bayesian Learning of Recurrent Neural Networks for Language Modeling, ACL 2017\n\n", "Is it possible to revise the title, to better reflect the proposed variational technique for RNNs? \"Bayesian Recurrent Neural Networks\" have been proposed in several papers with different Bayesian learning methods. See below for examples:\n\nA Theoretically Grounded Application of Dropout in Recurrent Neural Networks, NIPS 2017\nScalable Bayesian Learning of Recurrent Neural Networks for Language Modeling, ACL 2017\nBayesian Recurrent Neural Network for Language Modeling, IEEE Trans Neural Netw Learn Syst. 2016" ]
[ 5, 6, 6, -1, -1, -1, -1, -1 ]
[ 4, 4, 5, -1, -1, -1, -1, -1 ]
[ "iclr_2018_Hkp3uhxCW", "iclr_2018_Hkp3uhxCW", "iclr_2018_Hkp3uhxCW", "BJZRkfFgG", "Hy2OnpKeG", "rk156h2gf", "Hy2OnpKeG", "iclr_2018_Hkp3uhxCW" ]
iclr_2018_S1fduCl0b
Lifelong Generative Modeling
Lifelong learning is the problem of learning multiple consecutive tasks in a sequential manner where knowledge gained from previous tasks is retained and used for future learning. It is essential for the development of intelligent machines that can adapt to their surroundings. In this work we focus on a lifelong learning approach to generative modeling where we continuously incorporate newly observed streaming distributions into our learnt model. We do so through a student-teacher architecture which allows us to learn and preserve all the distributions seen so far without the need to retain either the past data or the past models. Through the introduction of a novel cross-model regularizer, the student model leverages the information learnt by the teacher, which acts as a summary of everything seen till now. The regularizer has the additional benefit of reducing the effect of catastrophic interference that appears when we learn over streaming data. We demonstrate its efficacy on streaming distributions as well as its ability to learn a common latent representation across a complex transfer learning scenario.
rejected-papers
Thank you for submitting your paper to ICLR. The paper studies an interesting problem and the solution, which fuses student-teacher approaches to continual learning and variational auto-encoders, is interesting. The revision of the paper has improved readability. However, although the framework is flexible, it is complex and appears rather ad hoc as currently presented. Exploration of the effect of the many hyper-parameters or some more supporting theoretical work/justification would help. The experimental comparisons were varied, but adding more baselines, e.g. comparing to a parameter regularisation approach like EWC or synaptic intelligence applied to a standard VAE, would have been enlightening. Summary: There is the basis of a good paper here, but a comprehensive experimental evaluation of design choices or supporting theory would be useful for assessing what is a complex approach.
train
[ "rJuN3apyf", "SJuF9Eqez", "rJokGmjgG", "rJlVE72ZG", "SJHQwXnWM", "H1Y7BmhZz", "r1Y8VQ3bz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "We have seen numerous variants of variational autoencoders, most of them introducing delta changes to the original architecture to address the same sort of modeling problems. This paper attacks a different kind of problem, namely lifelong learning. This key aspect of the paper, besides the fact that it constitutes a very important problem, does also addes a strong element of freshness to the paper.\n\nThe construction of the generative model is correct, and commensurate with standard practice in the field of deep generative models. The derivations are correct, while the experimental evaluation is diverse and convincing. ", "- Second paragraph in Section 1: Nice motivation. I am not sure though whether the performed experiments are the most expressive for such motivation. For instance, is the experiment in Section 5.1 a common task in that sequential lifelong learning setting?\n\n- Section 4, which is the main technical section of the paper, is quite full of lengthy descriptions that are a bit equivocal. I reckon each claim really needs to be supported by a corresponding unequivocal mathermatical formulation.\n\n- An example of the last point can be found in Section 4.2: \"The synthetic samples need to be representative of all the previously observed distributions ...\": It will be much clearer how such samples are representative if a formulation follows, and that did not happen in Section 4.2.\n\n- \"1) Sampling the prior can select a point in the latent space that is in between two separate distributions ...\": I am not sure I got this drawback of using the standard form of VAEs. Could you please further elaborate on this?\n\n- \"we restrict the posterior representation of the student model to **be close to that of the teacher** for the previous distributions** accumulated by the teacher. This allows the model parameters to **vary as necessary** in order to best fit the data\": What if the previous distributions are not that close to the new one?\n\n- Distribution intervals: Will it be the case in reality that these intervals will be given? Otherwise, what are the solutions to that? Can they be estimated somehow (as a future work)?\n\n\nMinor:\n- \"we observe a sample X of K\": sample X of size K, I guess?\n- \"... form nor an efficient estimator Kingma (2017)\": citation style.\n- \"we illustrates ...\"", "The paper proposed a teacher-student framework and a modified objective function to adapt VAE training to streaming data setting. The qualitative experimental result shows that the learned model can generate reasonable-looking samples. I'm not sure about what conclusion to make from the numerical result, as the test negative ELBO actually increased after decreasing initially. Why did it increase?\n\nThe modified objective function is a little ad-hoc, and it's unclear how to relate the overall objective function to Bayesian posterior inference (what exactly is the posterior that the encoder tries to approximate?). There is a term in the objective function that is synthetic data specific. Does that imply that the objective function is different depending on if the data is synthetic or real? What is the motivation/justification of choosing KL(Q_student||Q_teacher) as regularisation instead of the other way around? Would that make a difference in the goodness of the learned model? 
If not, wouldn't KL(Q_teacher||Q_student) result in a reduction in the variance of the gradients and therefore be a better choice?\n\nDetails on the minimum number of real samples per interval for the model to be able to learn are also missing. Also, how many synthetic samples per real sample are needed? How is the update with respect to synthetic samples scheduled? Given an infinite amount of streaming data with a fixed number of classes/underlying distributions and interval length, and sampling the class of each interval (uniformly) at random, will the model/algorithm converge? Is there a minimum number of real examples that the student learner needs to see before it can be turned into a teacher?\n\nOther question: How is the number of latent categories J of the latent discrete distribution chosen?\n\nQuality: The numerical experiment doesn't really compare to any other streaming benchmark and is a little unsatisfying. Without a streaming benchmark or a realistic motivating example in which the proposed scheme makes a significant difference, it's difficult to judge the contribution of this work.\nClarity: The manuscript is reasonably well-written. (minor: Paragraph 2, section 5, 'in principle' instead of 'in principal')\nOriginality: Average. The student-teacher framework by itself isn't novel. The modifications to the objective function appear to be novel as far as I am aware, but they don't require much special insight.\nSignificance: Below average. I think it will be very helpful if the authors can include a realistic motivating example where lifelong unsupervised learning is critical, and demonstrate that the proposed scheme makes a difference in the example.\n\n\n", "Thanks for your detailed review! Unsupervised learning is one of the most important challenges in machine learning; bringing it to a life-long setting is a crucial step towards systems that can continuously adapt their models of the world without supervision and without forgetting. We try to demonstrate some of the critical issues faced in transitioning VAEs to this setting and demonstrate an algorithm that allows for learning over long distributional intervals without access to any of the prior data. We would like to try to address some of your points below (we have a second part that addresses the rest of your comments, as it didn't fit in one message):\n\nELBO Increase: In experiment 1 we present the model with data from a given distribution only within a single interval; the model never sees data from the same distribution again. Nevertheless we require it to reconstruct data points coming from all the distributions seen so far. As the number of uniquely seen distributions increases, the task of reconstruction becomes more and more difficult, which is why the -ELBO increases. In experiment 2, where the model might see a distribution again (due to sampling with replacement), we do observe the –ELBO decreasing. \n\nBayesian posterior inference: Our goal with this work is to learn how to represent the mixture data distribution through time while only observing a single component at each interval. Our posterior comes into play because VAEs learn an approximate data distribution through the assumption of a latent variable model and the optimization of the ELBO objective. In a standard VAE, the learnt posterior is close to the prior. This is enforced through the KL divergence term of the standard ELBO. 
Similar to standard VAEs, our learnt posterior is also close to the prior, but in addition we keep the inferred student posterior over the synthetic data close to the respective posteriors inferred by the teacher. This ensures that the student’s encoder maps data samples to a similar latent space as the teacher (in order not to forget what the teacher has learned). Indeed, as the reviewer notes, the objective function treats real data from the currently observable distribution in a different manner than the synthetic data; we do **not** constrain the posterior of data from the currently observed distribution to be similar to that of the teacher posterior. We have rewritten equation 3 to ensure that this point is clear. In the case of a VAE with an isotropic Gaussian posterior, the consistency regularizer can be interpreted as the standard VAE KL regularizer, with the mean and variance of the student posterior scaled by the variance of the teacher. We go over this in detail in appendix section 7.0.1. \n", "Thanks for your review of our paper! We try to address a problem we believe is novel and uncharted for generative models. Lifelong learning is a crucial part of transcending machine learning to make it useful in a more general environment. We appreciate your time in noticing the drastically disparate setting that we are operating over!\n", "Hi, thanks for your feedback! We hope we can address some of your comments below:\n\nExperiments: The experiments we do are standard experiments in a continual/lifelong setting [1,4] and catastrophic interference [2,3] and are adapted for the generative setting that we address in this paper. We used FashionMNIST to add more diversity in our experiments (we have a similar experiment to 5.1 for MNIST in appendix section 7.5) \n\nRepresentative Sampling & Formulations: in the second to last paragraph of section 4 (and in section 4.2) we do provide a formal description of how to sample the previous distributions represented in the training set of the student. However, we have added some additional text in both sections to make sure that there is no ambiguity. Briefly, we control the representativeness of the samples through the Bernoulli distribution described in detail in section 4 (second to last paragraph). The mean of this distribution controls how many samples we sample from the previously learnt distributions vs. the current one. The previous distributions are uniformly sampled through the discrete latent variable of the teacher model, which contains the most pertinent information about these distributions (section 4.2). We also provide additional mathematical formulation, such as a more theoretical understanding of the consistency regularizer, in the appendix section 7.0.1. \n\nPrior sampling: for the following let us consider MNIST and its latent space representation given in figure 1. In a standard VAE the point corresponding to the mean of the prior might be mapped by the encoder to a point in latent space that is in between a '9' and a '7'. This will generate an image that does not correspond to a real image. While in the standard VAE this might be ok or desirable, in the lifelong setting, repeating this operation over and over will cause corruption of the true underlying distribution. Since this point will also be sampled far more often, the model will not disambiguate between the '9' and the '7' distributions, causing aliasing over time. This is a core issue that needs to be addressed in order to bring VAEs into the life-long setting. 
We do so through the introduction of the discrete component in the latent variable representation, which allows us to disambiguate the true underlying distribution from its variability. This allows us to sample from any of the distributions seen so far without the aliasing problem described above. \n\n \nVarying posteriors: We believe that this is an important misunderstanding due to the way we had written equation 3; we have re-written this to clear up any confusion. Vanilla VAEs model the posterior distribution of the latent variables of any given instance as a normal distribution of which the mean and the variance are parameterized by the learned encoder network. Our consistency regulariser is applied {\\em only} over the synthetic instances that are generated from the teacher model. It constrains the posterior distributions of their latent variables (as these are induced by the encoder of the teacher model) to be close to the respective posterior distributions induced by the encoder of the student model. The consistency regulariser {\\em is not} applied to the instances of the new task. We make no statement and impose no constraint whatsoever on the posterior induced by the student encoder on the instances of the {\\em new} task. \n\nDistribution intervals: this is definitely something we considered; however, in order to disambiguate the issues of inaccurate anomaly detection from the core problem, we decided to focus on the setting where both of these are provided to us. In future work we will attempt to develop a method that can simultaneously detect a distribution shift and model it into our framework. \n \n[1] Friedemann Zenke, Ben Poole, and Surya Ganguli. Continual learning through synaptic intelligence. In International Conference on Machine Learning, 2017. \n \n[2] Ian J. Goodfellow, Mehdi Mirza, Da Xiao, Aaron Courville, and Yoshua Bengio. An empirical investigation of catastrophic forgetting in gradient-based neural networks. In International Conference on Learning Representations, 2014a. \n \n[3] James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, et al. Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences, pp. 201611835, 2017. \n \n[4] Lopez-Paz, David. \"Gradient Episodic Memory for Continual Learning.\" Advances in Neural Information Processing Systems. 2017.", "Forward vs. reverse KL: As the reviewer mentions, the forward KL divergence does in fact provide lower variance in theory; however, since the underlying true posterior is generally multi-modal (and since we are working with images), it is preferable to work with the mode-seeking (i.e. reverse) KL divergence [2] in order to generate more realistic-looking images. We have attached appendix section 7.3 with figures demonstrating that empirically there is almost no difference between the two measures. \n\nMinimum number of samples: Our model derives its sample complexity from standard VAEs. At each learning distributional interval we train our model on a new distribution (all the while ensuring not to forget previously learnt distributions) using early stopping. More precisely, when the negative ELBO on the validation set stops decreasing for 50 steps we consider training to be completed for that interval. We can then move to the next training set/distribution. 
Within each learning interval we make sure that synthetic instances from all past distributions as well as real instances from the current distribution are seen in the same proportion. We plot the number of learning instances (real and synthetic) seen at each learning episode until the stopping criterion is satisfied. We notice a rapid decrease in the number of real samples needed for learning as the number of observed distributions increases. We have added these graphs in section 7.4 of the appendix. \n\nDetails on J: The dimension of the latent categorical J is grown over time. When we see the first distribution, J=1; once there is a distribution transition (to an unobserved distribution) we set J=2. We discuss this in a little more detail in section 7.2.3 of the appendix. \n\nStreaming benchmarks: We think that there is a misunderstanding here. Our setting is not online/streaming generative modelling but continual/life-long generative modelling. Online/streaming modelling seeks to update the model as instances arrive; if there is a distribution shift the model will adapt/shift to represent the current distribution and will forget previously learnt distributions. Instead, in continual/life-long learning we do not want to forget the previously learned models because we might need to re-use them in the future, as they can make learning of future distributions easier. The benchmarks we used are standard continual/life-long learning benchmarks [3,6] and catastrophic interference benchmarks [4,5]. The comparison to online learning methods does not make sense and in fact will not be fair to these algorithms, since they do not try to retain all learned distributions; their performance will deteriorate rapidly as we see more and more distributions. \n\nReferences:\n\n[2] Murphy, Kevin P. Machine learning: a probabilistic perspective. MIT press, 2012. pp 733-734 \n \n[3] Friedemann Zenke, Ben Poole, and Surya Ganguli. Continual learning through synaptic intelligence. In International Conference on Machine Learning, 2017. \n \n[4] Ian J. Goodfellow, Mehdi Mirza, Da Xiao, Aaron Courville, and Yoshua Bengio. An empirical investigation of catastrophic forgetting in gradient-based neural networks. In International Conference on Learning Representations, 2014a. \n \n[5] James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, et al. Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences, pp. 201611835, 2017. \n \n[6] Lopez-Paz, David. \"Gradient Episodic Memory for Continual Learning.\" Advances in Neural Information Processing Systems. 2017. " ]
[ 9, 4, 4, -1, -1, -1, -1 ]
[ 5, 5, 2, -1, -1, -1, -1 ]
[ "iclr_2018_S1fduCl0b", "iclr_2018_S1fduCl0b", "iclr_2018_S1fduCl0b", "rJokGmjgG", "rJuN3apyf", "SJuF9Eqez", "rJokGmjgG" ]
iclr_2018_BkDB51WR-
Learning temporal evolution of probability distribution with Recurrent Neural Network
We propose to tackle a time series regression problem by computing the temporal evolution of a probability density function to provide a probabilistic forecast. A Recurrent Neural Network (RNN) based model is employed to learn a nonlinear operator for the temporal evolution of a probability density function. We use a softmax layer for a numerical discretization of a smooth probability density function, which transforms a function approximation problem into a classification task. Explicit and implicit regularization strategies are introduced to impose a smoothness condition on the estimated probability distribution. A Monte Carlo procedure to compute the temporal evolution of the distribution for a multiple-step forecast is presented. The evaluation of the proposed algorithm on three synthetic and two real data sets shows an advantage over the compared baselines.
rejected-papers
Thank you for submitting your paper to ICLR. Two of the reviewers are concerned that the paper’s contributions are not significant enough, either in terms of the theoretical or the experimental contribution, to warrant publication. The authors have improved the experimental aspect to include a more comprehensive comparison, but this has not moved the reviewers.
train
[ "rJwjY_1xG", "rJrHPz9lG", "B1fvgpNbz", "HyNuJjMmz", "ryBQ1sGQM", "Hkvi0qMmM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "Interesting ideas that extend LSTM to produce probabilistic forecasts for univariate time series, experiments are okay. Unclear if this would work at all in higher-dimensional time series. It is also unclear to me what are the sources of the uncertainties captured.\n\n\nThe author proposed to incorporate 2 different discretisation techniques into LSTM, in order to produce probabilistic forecasts of univariate time series. The proposed approach deviates from the Bayesian framework where there are well-defined priors on the model, and the parameter uncertainties are subsequently updated to incorporate information from the observed data, and propagated to the forecasts. Instead, the conditional density p(y_t|y_{1:t-1|, \\theta}) was discretised by 1 of the 2 proposed schemes and parameterised by a LSTM. The LSTM was trained using discretised data and cross-entropy loss with regularisations to account for ordering of the discretised labels. Therefore, the uncertainties produced by the model appear to be a black-box. It is probably unlikely that the discretisation method can be generalised to high-dimensional setting?\n\nQuality: The experiments with synthetic data sufficiently showed that the model can produce good forecasts and predictive standard deviations that agree with the ground truth. In the experiments with real data, it's unclear how good the uncertainties produced by the model are. It may be useful to compare to the uncertainty produced by a GP with suitable kernels. In Fig 6c, the 95pct CI looks more or less constant over time. Is there an explanation for that?\n\nClarity: The paper is well-written. The presentations of the ideas are pretty clear.\n\nOriginality: Above average. I think the regularisation techniques proposed to preserve the ordering of the discretised class label are quite clever.\n\nSignificance: Average. It would be excellent if the authors can extend this to higher dimensional time series.\n\nI'm unsure about the correctness of Algorithm 1 as I don't have knowledge in SMC.", "The papers proposes a recurrent neural network-based model to learn the temporal evolution of a probability density function. A Monte Carlo method is suggested for approximating the high dimensional integration required for multi-step-ahead prediction.\n\nThe approach is tested on two artificially generated datasets and on two real-world datasets, and compared with standard approaches such as the autoregressive model, the Kalman filter, and a regression LSTM.\n\nThe paper is quite dense and quite difficult to follow, also due to the complex notation used by the authors.\n\nThe comparison with other methods is very week, the authors compare their approach with two very simple alternatives, namely a first-order autoregressive mode and the Kalman filter. More sophisticated should have been employed.", "This work proposes an LSTM based model for time-evolving probability densities. The model does not assume an explicit prior over the underlying dynamical systems, instead only uncertainty over observation noise is explicitly considered. Experiments results are good for given synthetic scenarios but less convincing for real data. \n\nClarity: The paper is well-written. Some notations in the LSTM section could be better explained for readers who are unfamiliar with LSTMs. Otherwise, the paper is well-structured and easy to follow.\n\nOriginality: I'm not familiar with LSTMs, it is hard for me to judge the originality here.\n\nSignificance: Average. 
The work would be stronger if the authors could extend this to higher-dimensional time series. There are also many papers on this topic using Gaussian process state-space (GP-SSM) models where an explicit prior is assumed over the underlying dynamical systems. The authors might want to comment on the relative merits of GP-SSMs and DE-RNNs.\n\nThe SMC algorithm used is a sequential-importance-sampling (SIS) method. I think it's correct but may not scale well with dimensions.", "We thank the referee for carefully reading our manuscript and providing helpful comments.\n\n1. Estimated uncertainty: We are aware of the previous studies, notably Kendall & Gal (2017) and Gal & Ghahramani (2015), where both the model (epistemic) and data (aleatoric) uncertainties are carefully studied in a Bayesian framework. However, as pointed out by the referee, we approach the problem from the Frequentist framework. We aim to infer the probability distribution of the data, or aleatoric uncertainty, given the accuracy of the model. The estimated predictive probability distribution will model both the data probability distribution and the model error. However, once the model is powerful enough and the data size is large enough, the estimated probability distribution converges to the true distribution of the data, meaning that the estimated uncertainties will represent the noise in the data. In fact, this latter situation is shown to be indeed the case during the evaluations on synthetic data when a powerful enough RNN model is employed. \n\n2. Multivariate time series. We agree with the referee that a naïve extension of the DE-RNN to higher dimensions based on the tensor-product space will not be scalable. Instead, we propose to compute the joint probability distribution by using a product rule. The detailed method is presented in Section 2.3 and Appendix A. A new numerical experiment is shown in Section 3.5. The proposed DE-RNN for a multivariate time series scales linearly with the number of dimensions. \n\n3. Gaussian Processes: We added new comparisons with Gaussian process models. We also used a GP model for the multiple-step forecast on the CO2 and CPU temperature data sets. However, we did not include the results of the GP on the CPU problem since it did not perform well, which might be due to the presence of discrete input variables (CPU utilization and clock speed) for which a GP is not a suitable approach. \n\n4. Question regarding the uncertainty bound in Fig. 6c: In a multiple-step forecast, the time evolution of a probability density function is essentially a diffusion process. So, in general, it is expected that the prediction uncertainty, represented by the 95%-CI, increases with the forecast horizon, which is the case for most of the conventional time series prediction models. But, if we look at the so-called “master equation” of the time evolution of a probability density function (PDF), or the Fokker-Planck equation, the time evolution of a PDF is determined by two terms: advection in the probability space and the regular diffusion (Brownian) process. The latter makes the uncertainty, or the width of the PDF, grow in time. However, it seems that when an RNN is used to model the time series, it makes the first term (advection term) convergent, which counteracts the diffusion process. We have observed (see, for example, figure 3a) that when noisy input data is given to an RNN, the prediction, surprisingly, always seems to move toward the ground truth. 
We have shown that the prediction error with respect to the ground truth becomes smaller than the noise level. This observation suggests that the convergence to the ground truth counterbalances the diffusion by the random process, which explains why the uncertainty bound is no longer a monotonically increasing function of the forecast horizon. Although it is not shown, we have tested DE-RNN for a forced Van der Pol oscillator, which has a stationary state, similar to the experiment in section 3.4. For this kind of deterministic system, it was found that the uncertainty bounds fluctuate but do not grow even for a very long (2,000-step) forecast.\n", "The major purpose of this study is to introduce a new framework to compute the probability distribution of a time series and to compute the time evolution of the probability distribution into the future, which has direct relevance to many applications in the modeling of physical or industrial processes. Hence, we focused more on stochastic processes with underlying (physical) dynamical systems. \n\n1. Synthetic and real data: Unfortunately, in this application area, most of the data are proprietary or confidential and there are only a limited number of publicly accessible data sets for this kind of modeling. Therefore, we focused on synthetic data for thorough model validation and testing. Although we agree that the behavior of the synthetic data may not be exactly replicated in a real problem, the use of synthetic data allows us to have a deeper investigation into the behavior of the model (DE-RNN) under various conditions. Nevertheless, we also used two real data sets, and these experiments similarly showed the advantage of our method over the traditional approaches.\n\n2. As pointed out by the referee, we have added new comparison results by using Gaussian processes in sections 3.1 ~ 3.3 and also a new experiment for a multivariate time series in section 3.5.\n", "We thank the referee for carefully reading our manuscript and providing helpful feedback. \n\n1. GP-SSM: We thank the referee for bringing GP-SSM to our attention. As suggested by the referee, we added a comment about GP-SSM in the literature survey. \n\n2. Multivariate time series: We agree with the referee that directly extending the current DE-RNN to a multivariate time series, based on a tensor-product approach, will not be scalable. Instead, we presented a new method to compute the joint probability distribution by using a product rule (section 2.3 and Appendix A). The new method relies on a product of independently trained DE-RNNs to compute the joint probability distribution. The computational complexity of this method increases linearly with the number of dimensions. A new numerical experiment for the multivariate time series is shown in section 3.5.\n
[ 6, 5, 6, -1, -1, -1 ]
[ 2, 4, 4, -1, -1, -1 ]
[ "iclr_2018_BkDB51WR-", "iclr_2018_BkDB51WR-", "iclr_2018_BkDB51WR-", "rJwjY_1xG", "rJrHPz9lG", "B1fvgpNbz" ]
iclr_2018_B1nLkl-0Z
Learning Gaussian Policies from Smoothed Action Value Functions
State-action value functions (i.e., Q-values) are ubiquitous in reinforcement learning (RL), giving rise to popular algorithms such as SARSA and Q-learning. We propose a new notion of action value defined by a Gaussian smoothed version of the expected Q-value used in SARSA. We show that such smoothed Q-values still satisfy a Bellman equation, making them naturally learnable from experience sampled from an environment. Moreover, the gradients of expected reward with respect to the mean and covariance of a parameterized Gaussian policy can be recovered from the gradient and Hessian of the smoothed Q-value function. Based on these relationships we develop new algorithms for training a Gaussian policy directly from a learned Q-value approximator. The approach is also amenable to proximal optimization techniques by augmenting the objective with a penalty on KL-divergence from a previous policy. We find that the ability to learn both a mean and covariance during training allows this approach to achieve strong results on standard continuous control benchmarks.
rejected-papers
Thank you for submitting your paper to ICLR. Two of the reviewers are concerned that the paper’s contributions are not significant enough, either in terms of the theoretical or the experimental contribution, to warrant publication. The authors have improved the experimental aspect to include a more comprehensive comparison, but this has not moved the reviewers. Summary: The approach is very promising, but more experimental work is still required to demonstrate significance.
train
[ "S1qQ8ZFlf", "ByJ1CsYgG", "HkzHPhmWf", "rywix_TQG", "BkBxlO6Qz", "rk6c1_6Qf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "I think I should understand the gist of the paper, which is very interesting, where the action of \\tilde Q(s,a) is drawn from a distribution. The author also explains in detail the relation with PGQ/Soft Q learning, and the recent paper \"expected policy gradient\" by Ciosek & Whiteson. All these seems very sound and interesting.\n\nWeakness:\n1. The major weakness is that throughout the paper, I do not see an algorithm formulation of the Smoothie algorithm, which is the major algorithmic contribution of the paper (I think the major contribution of the paper is on the algorithmic side instead of theoretical). Such representation style is highly discouraging and brings about un-necessary readability difficulties. \n\n2. Sec. 3.3 and 3.4 is a little bit abbreviated from the major focus of the paper, and I guess they are not very important and novel (just educational guess, because I can only guess what the whole algorithm Smoothie is). So I suggest moving them to the Appendix and make the major focus more narrowed down.", "The paper introduces smoothed Q-values, defined as the value of drawing an action from a Gaussian distribution and following a given policy thereafter. It demonstrates that this formulation can still be optimized with policy gradients, and in fact is able to dampen instability in this optimization using the KL-divergence from a previous policy, unlike preceding techniques. Experiments are performed on an simple domain which nicely demonstrates its properties, as well as on continuous control problems, where the technique outperforms or is competitive with DDPG.\n\nThe paper is very clearly written and easy to read, and its contributions are easy to extract. The appendix is quite necessary for the understanding of this paper, as all proofs do not fit in the main paper. The inclusion of proof summaries in the main text would strengthen this aspect of the paper.\n\nOn the negative side, the paper fails to make a strong case for significant impact of this work; the solution to this, of course, is not overselling benefits, but instead having more to say about the approach or finding how to produce much better experimental results than the comparative techniques. In other words, the slightly more stable optimization and slightly smaller hyperparameter search for this approach is unlikely to result in a large impact.\n\nOverall, however, I found the paper interesting, readable, and the technique worth thinking about, so I recommend its acceptance.", "This paper explores the idea of using policy gradients to learn a stochastic policy on complex control problems. The central idea is to frame learning in terms of a new kind of Q-value that attempts to smooth out Q-values by framing them in terms of expectations over Gaussian policies.\n\nTo be honest, I didn't really \"get\" this paper.\n* As far I understand, all of the original work policy gradients involved stochastic policies. Many are/were Gaussian.\n* All Q-value estimators are designed to marginalize out the randomness in these stochastic policies.\n* As far as I can tell, this is equivalent to a slightly different formulation, where the agent emits a deterministic action (\\mu,\\Sigma) and the environment samples an action from that distribution. 
In other words, it seems that if we just draw the box a bit differently, the environment soaks up the nondeterminism, instead of needing to define a new type of Q-value.\n\nUltimately, I couldn't discern /why/ this was a significant advance for RL, or even a meaningful new perspective on classic ideas.\n\nI thought the little 2-mode MOG was a nice example of the premise of the model.\n\nWhile I may or may not have understood the core technical contribution, I think the experiments can be critiqued: they didn't really seem to work out. Figures 2&3 are unconvincing - the differences do not appear to be statistically significant. Also, I was disappointed to see that the authors only compared to DDPG; they could have at least compared to TRPO, which they mention. They dismiss it by saying that it takes 10 times as long, but gets a better answer - to which I respond, \"Very well, run your algorithm 10x longer and see where you end up!\" I think we need to see a more compelling demonstration of why this is a useful idea before it's ready to be published.\n\nThe idea of penalizing a policy based on KL-divergence from a reference policy was explored at length by Bert Kappen's work on KL-MDPs. Perhaps you should cite that?\n", "We thank the reviewer for their valuable feedback.\n\nR3: “To be honest, I didn't really \"get\" this paper.”\n\nWe hope that our changes to the paper and the rebuttal make the contributions of the paper clearer.\n\nR3: “As far I understand, all of the original work policy gradients involved stochastic policies. Many are/were Gaussian.”\n\nIndeed, most of the original work on policy gradients uses stochastic policies. In such a setting, Q-value or value functions are used for variance reduction when estimating the policy gradient; that is, the policy is trained using a form similar to Eq. (4) in the paper.\n\nHowever, more recently, several algorithms have been proposed (e.g., DDPG, SVG), which use Q-values in a different way. They use the gradient of a Q-function approximator to train the policy. This results in a large improvement in sample efficiency over traditional policy gradient methods. The most widely used of these algorithms, DDPG, is restricted to deterministic policies. Our work extends DDPG to general Gaussian policies, showing that 1) we can directly learn the smoothed Q-values to avoid estimating an additional Monte Carlo sampling step necessary for SVG, 2) the gradient and the Hessian of the smoothed Q-values can be used to update the mean and the covariance parameters of a Gaussian policy. Notably, although SVG uses a stochastic policy, it uses a fixed covariance.\n\nR3: “All Q-value estimators are designed to marginalize out the randomness in these stochastic policies.”\n\nThe smoothed Q-values that we introduce additionally marginalize out the randomness in the first action (a) of a typical Q(s, a) value based on the mean and covariance of the first action. As a result, we avoid an additional Monte Carlo sampling step to draw the first action, as compared to SVG for example.\n\nR3: “As far as I can tell, this is equivalent to a slightly different formulation, where the agent emits a deterministic action (\\mu,\\Sigma) and the environment samples an action from that distribution. In other words, it seems that if we just draw the box a bit differently, the environment soaks up the nondeterminism, instead of needing to define a new type of Q-value.”\n\nAlthough one could pursue such an approach, it is not equivalent to the direction we pursue in the paper. 
Under the above suggestion, where the agent emits an action (\\mu, \\Sigma), the corresponding Q-value function would be a function of both \\mu and \\Sigma. On the other hand, the smoothed Q-value function we consider only takes in \\mu. A key contribution of the paper is showing that even though \\tilde{Q} is not a direct function of \\Sigma, one can still derive an update for \\Sigma based on the Hessian of \\tilde{Q} with respect to mean action.\n\nR3: “I thought the little 2-mode MOG was a nice example of the premise of the model.”\n\nThank you. We hope our responses contribute to a better understanding of the premise of the approach. We expect that this fundamental smoothing behavior is the source of the improvement over DDPG.\n\nR3: “While I may or may not have understood the core technical contribution, I think the experiments can be critiqued: they didn't really seem to work out. Figures 2&3 are unconvincing - the differences do not appear to be statistically significant.”\n\nTo demonstrate the significance of our experimental results more clearly, we have updated Figure 2 to compare the performance of Smoothie with KL-penalty, DDPG, and TRPO on continuous control benchmarks. Figure 2 makes it clear that our results are statistically significant and Smoothie achieves the state-of-the-art by converging faster and/or achieving better final rewards. On the challenging Hopper and Humanoid tasks, Smoothie achieves double the average reward compared to DDPG without sacrificing sample efficiency. Our previous presentation of the results in two separate figures showing the difference between Smoothie without KL penalty and DDPG, and between Smoothie with and without the KL penalty made the significance of our results less clear.\n\nR3: “I was disappointed to see that the authors only compared to DDPG; they could have at least compared to TRPO, which they mention. They dismiss it by saying that it takes 10 times as long, but gets a better answer - to which I respond, \"Very well, run your algorithm 10x longer and see where you end up!\"\n\nSample-efficient reinforcement learning (RL) is a key challenge for real world applications of RL, and in this paper we focus on the behavior of the algorithms in the practical data regime. That said, we have included a comparison with TRPO in Figure 2, which shows that in this data regime, TRPO is not competitive. Similar conclusions have been made about TRPO by other papers (e.g. https://arxiv.org/abs/1707.06347).\n\nR3: “The idea of penalizing a policy based on KL-divergence from a reference policy was explored at length by Bert Kappen's work on KL-MDPs. Perhaps you should cite that?”\n\nReference added.\n", "We thank the reviewer for their valuable feedback.\n\nR2: “The appendix is quite necessary for the understanding of this paper, as all proofs do not fit in the main paper. The inclusion of proof summaries in the main text would strengthen this aspect of the paper.”\n\nThank you for the suggestion. We have updated the text to include proof summaries.\n\nR2: “On the negative side, the paper fails to make a strong case for significant impact of this work; the solution to this, of course, is not overselling benefits, but instead having more to say about the approach or finding how to produce much better experimental results than the comparative techniques. 
In other words, the slightly more stable optimization and slightly smaller hyperparameter search for this approach is unlikely to result in a large impact.”\n\nTo demonstrate the significance of our experimental results more clearly, we have updated Figure 2 to compare the performance of Smoothie with KL-penalty, DDPG, and TRPO on continuous control benchmarks. Figure 2 makes it clear that Smoothie achieves the state-of-the-art by converging faster and/or achieving better final rewards. On the challenging Hopper and Humanoid tasks, Smoothie achieves double the average reward compared to DDPG without sacrificing sample efficiency. Our previous presentation of the results in two separate figures showing the difference between Smoothie without KL penalty and DDPG, and between Smoothie with and without the KL penalty made the significance of our results less clear.\n", "We thank the reviewer for their valuable feedback.\n\nR1: \"The major weakness is that throughout the paper, I do not see an algorithm formulation of the Smoothie algorithm, which is the major algorithmic contribution of the paper … Such representation style is highly discouraging and brings about un-necessary readability difficulties.\"\n\nWe take the presentation criticism seriously. To improve the exposition, we have updated the paper to include an algorithm box with a pseudo-code description of the implementation.\n\nR1: \"I think the major contribution of the paper is on the algorithmic side instead of theoretical\"\n\nNot entirely. Note that the derived updates for the mean and covariance parameters of a Gaussian policy in terms of the gradient and Hessian of smoothed Q-values are novel. Also, the relation between the Hessian and the covariance update (Eq. 15) is particularly novel; we are not aware of any similar equations previously used in RL.\n\nR1: \"Sec. 3.3 and 3.4 is a little bit abbreviated from the major focus of the paper, and I guess they are not very important and novel (just educational guess, because I can only guess what the whole algorithm Smoothie is). So I suggest moving them to the Appendix and make the major focus more narrowed down.”\n\nThank you for the suggestion. We agree that Section 3.3 is a theoretical aside, which may not interest most readers. We have updated the paper to move Section 3.3 to the appendix, leaving more room in the main body for the pseudo-code presentation. On the other hand, we believe Section 3.4 is important as it explains the specific technique which yields the best empirical performance.\n" ]
[ 6, 6, 5, -1, -1, -1 ]
[ 4, 4, 3, -1, -1, -1 ]
[ "iclr_2018_B1nLkl-0Z", "iclr_2018_B1nLkl-0Z", "iclr_2018_B1nLkl-0Z", "HkzHPhmWf", "ByJ1CsYgG", "S1qQ8ZFlf" ]
iclr_2018_SySisz-CW
On the difference between building and extracting patterns: a causal analysis of deep generative models.
Generative models are important tools to capture and investigate the properties of complex empirical data. Recent developments such as Generative Adversarial Networks (GANs) and Variational Auto-Encoders (VAEs) use two very similar, but \textit{reverse}, deep convolutional architectures, one to generate and one to extract information from data. Does learning the parameters of both architectures obey the same rules? We exploit the causality principle of independence of mechanisms to quantify how the weights of successive layers adapt to each other. Using the recently introduced Spectral Independence Criterion, we quantify the dependencies between the kernels of successive convolutional layers and show that those are more independent for the generative process than for information extraction, in line with results from the field of causal inference. In addition, our experiments on generation of human faces suggest that more independence between successive layers of generators results in improved performance of these architectures.
rejected-papers
Thank you for submitting your paper to ICLR. The paper presents an interesting analysis, but the utility of this analysis is questionable, e.g. it is not clear how this might lead to improved VAEs/GANs. The authors did add an additional experimental result in their revised paper, but questions still remain. In light of this, the significance of the paper is on the low side and it is therefore not ready for publication in ICLR without more work.
val
[ "H1CZob5Jz", "rkA0vi8gz", "BJT2Cecgz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper examines the nature of convolutional filters in the encoder and a decoder of a VAE, and a generator and a discriminator of a GAN. The authors treat the inputs (X) and outputs (Y) of each filter throughout each step of the convolving process as a time series, which allows them to do a Discrete Time Fourier Transform analysis of the resulting sequences. By comparing the power spectral density of the input and the output, they get a Spectral Dependency Ratio (SDR) ratio that characterises a filter as spectrally independent (neutral), correlating (amplifies certain frequencies), or anti-correlating (dampens frequencies). This analysis is performed in the context of the Independence of Cause and Mechanism (ICM) framework. The authors claim that their analysis demonstrates a different characterisation of the inference/discriminator and generative networks in VAE and GAN, whereby the former are anti-causal and the latter are causal in line with the ICM framework. They also claim that this analysis can be used to improve the performance of the models.\n\nPros:\n-- SDR characterisation of the convolutional filters is interesting\n-- The authors show that filters with different characteristics are responsible for different aspects of image modelling\n\nCons:\n-- The authors do not actually demonstrate how their analysis can be used to improve VAEs or GANs\n-- Their proposed SDR analysis does not actually find much difference between the generator and the discriminator of the GAN \n-- The clarity of the writing could be improved (e.g. the discussion in section 3.1 seems inaccurate in the current form). Grammatical and spelling mistake are frequent. More background information could be helpful in section 2.2. All figures (but in particular Figure 3) need more informative captions\n-- The authors talk a lot about disentangling in the introduction, but this does not seem to be followed up in the rest of the text. Furthermore, they are missing a reference to beta-VAE (Higgins et al, 2017) when discussing VAE-based approaches to disentangled factor learning\n\n\nIn summary, the paper is not ready for publication in its current form. The authors are advised to use the insights from their proposed SDR analysis to demonstrate quantifiable improvements the VAEs/GANs.", "This work exploits the causality principle to quantify how the weights of successive layers adapt to each other. Some interesting results are obtained, such as \"enforcing more independence between successive layers of generators may lead to better performance and modularity of these architectures\" . Generally, the result is interesting and the presentation is easy to follow. However, the proposed approach and the experiments are not convincible enough. For example, it is hard to obtain the conclusion \"more independence lead to better performance\" from the experimental results. Maybe more justifications are needed.", "The paper presents an application of a measure of dependence between the input power spectrum and the frequency response of a filter (Spectral Density Ratio from [Shajarisales et al 2015]) to cascades of two filters in successive layers of deep convolutional networks. The authors apply their newly defined measure to DCGANs and plain VAEs with ReLUs, and show that dependency between successive layers may lead to bad performance. \n\nThe paper proposed a possibly interesting approach, but I found it quite hard to follow, especially Section 4, which I thought was quite unstructured. 
Also Section 3 could be improved and simplified. It would also be good to add some more related work. I’m not an expert, but I assume there must be some similar idea in CNNs. \n\nFrom my limited point of view, this seems like a sound, novel and potentially useful application of an interesting idea. If the writing were improved, I think the paper might have even more impact.\n\nSmaller details: some spacing issues, some extra punctuation (pg 5 “. . Hence”), a typo (pg. 7 “training of the VAE did not lead to values as satisfactory AS what we obtained with the GAN”)\n" ]
[ 2, 7, 7 ]
[ 4, 3, 2 ]
[ "iclr_2018_SySisz-CW", "iclr_2018_SySisz-CW", "iclr_2018_SySisz-CW" ]
iclr_2018_rkcya1ZAW
Continuous-Time Flows for Efficient Inference and Density Estimation
Two fundamental problems in unsupervised learning are efficient inference for latent-variable models and robust density estimation based on large amounts of unlabeled data. For efficient inference, normalizing flows have been recently developed to approximate a target distribution arbitrarily well. In practice, however, normalizing flows only consist of a finite number of deterministic transformations, and thus they possess no guarantee on the approximation accuracy. For density estimation, the generative adversarial network (GAN) has been advanced as an appealing model, due to its often excellent performance in generating samples. In this paper, we propose the concept of {\em continuous-time flows} (CTFs), a family of diffusion-based methods that are able to asymptotically approach a target distribution. Distinct from normalizing flows and GANs, CTFs can be adopted to achieve the above two goals in one framework, with theoretical guarantees. Our framework includes distilling knowledge from a CTF for efficient inference, and learning an explicit energy-based distribution with CTFs for density estimation. Experiments on various tasks demonstrate promising performance of the proposed CTF framework, compared to related techniques.
rejected-papers
Thank you for submitting your paper to ICLR. The consensus from the reviewers is that there are some interesting theoretical contributions and some promising experimental support. However, although the paper is moving in the right direction, they believe that it is not quite ready for publication.
train
[ "HkCNqISxM", "r1_fQy9lz", "Hy8TV-qgf", "ryZFenTGf", "SyAtlT_zG", "ByCUxp_GG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "The authors try to use continuous time generalizations of normalizing flows for improving upon VAE-like models or for standard density estimation problems.\n\nClarity: the text is mathematically very sloppy / hand-wavy.\n\n1. I do not understand proposition (1). I do not think that the proof is correct (e.g. the generator L needs to be applied to a function -- the notation L(x) does not make too much sense): indeed, in the case when the volatility is zero (or very small), this proposition would imply that any vector field induces a volume preserving transformation, which is indeed false.\n\n2. I do not really see how the sequence of minimization Eq(5) helps in practice. The Wasserstein term is difficult to hand.\n\n3. in Equation (6), I do not really understand what $\\log(\\bar{\\rho})$ is if $\\bar{\\rho}$ is an empirical distribution. One really needs $\\bar{\\rho}$ to be a probability density to make sense of that.", "\nThe authors propose continuous-time flows as a flexible family of\ndistributions for posterior inference of latent variable models as\nwell as explicit density estimation. They build primarily on the work\nof normalizing flows from Rezende and Mohamed (2015). They derive an\ninteresting objective based on a sequence of sub-optimization\nproblems, following a variational formulation of the Fokker-Planck\nequations.\n\nI reviewed this paper for NIPS with a favorable decision toward weak\nacceptance; and the authors also addressed some of my questions in\nthis newer version (namely, some comparisons to related work; clearer\nwriting).\n\nThe experiments are only \"encouraging\"; they do not illustrate clear\nimprovements over previous methods. However, I think the work\ndemonstrates useful ideas furthering the idea of continuous-time\ntransformations that warrants acceptance.", "The authors propose the use of first order Langevin dynamics as a way to transition from one latent variable to the next in the VAE setting, as opposed to the deterministic transitions of normalizing flow. The extremely popular Fokker-Planck equation is used to analyze the steady state distributions in this setting. The authors also propose the use of CTF in density estimation, as a generator of samples from the ''true'' distribution, and show competitive performance w.r.t. inception score for some common datasets.\n\nThe use of Langevin diffusion for latent transitions is a good idea in my opinion; though quite simple, it has the benefit of being straightforward to analyze with existing machinery. Though the discretized Langevin transitions in \\S 3.1 are known and widely used, I liked the motivation afforded by Lemma 2. \n\nI am not convinced that taking \\rho to be the sample distribution with equal probabilities at the z samples is a good choice in \\S 3.1; it would be better to incorporate the proximity of the langevin chain to a stationary point in the atom weights instead of setting them to 1/K. However to their credit the authors do provide an estimate of the error in the distribution stemming from their choice. \n\nTo the best of my knowledge the use of CTF in density estimation as described in \\S 4 is new, and should be of interest to the community; though again it is fairly straightforward. 
Regarding the experiments, the difference in ELBO between the macVAE and the vanilla ones with normalizing flows is only about 2%; I wish the authors included a discussion on how the parameters of the discretized Langevin chain affect this, if at all.\n\nOverall I think the theory is properly described and has a couple of interesting formulations, despite not being particularly novel. I think CTFs like the one described here will see increased usage in the VAE setting, and thus the paper will be of interest to the community.", "We appreciate your consistent support for our work. The draft has been updated again; we hope it is now easier for future readers to understand.", "Thank you for the valuable feedback, which made us aware of some presentation issues in our original submission. We hope we can engage in constructive discussions to fully clarify and address your concerns and questions. We have fixed the problems by re-writing Section 3, and hope this addresses your concerns. We wish to take this opportunity to emphasize that the main proposed methodologies/algorithms are still valid, and that the problems pointed out relate only to the writing. Below are our initial responses to your three comments.\n\n1. You are right. Proposition 1 is not correct for all CTFs, but it is correct for some specific CTFs such as the Hamiltonian flow. Sorry for the mistake; we have removed it and re-written this section. Note that the proposed algorithm for inference does not rely on this proposition. This is because the Jacobian term is only necessary in explicit methods (i.e. maintaining distribution forms) for representing the normalizing flows, while our amortized approach is implicit (i.e. sample-based approximation) in representing flows at each step. Please see Section 3.2 on the detailed learning algorithm.\n\n2. Eq. 5 is not directly implemented in practice; we have clarified this in our revision to avoid confusion. Eq. 5 is derived from the principle theory of CTF, and presented in the paper to justify (1) some potential advantages of using CTF and (2) the sequential procedure of approximating the unknown \\rho_T. \n\nIn practice, we build the algorithm on the sequential procedure in Eq. 5, and amortized the inference in an implicit manner. Specifically, at each step, we (1) first simulated samples from the corresponding diffusion, which is equivalent to optimizing one step in Eq. 5, i.e., the resulting sample distribution (implicit) equals that from optimizing Eq. 5, and (2) proposed to use a neural network to match (i.e., “distill the knowledge\") the simulated sample distributions. Directly handling the optimization problem to obtain its explicit distribution forms is an interesting direction of future work.\n\n3. In the original Eq. 6, we meant to show the ELBO, assuming \\bar{\\rho} is continuous (in the infinite-data setting). We agree this is a little misleading; thus we have removed it, and reformulated the objective in our revision (still Eq. 6). Thanks for pointing out this issue.", "Thank you for recognizing our work. We are happy to address the two questions raised.\n\nWe agree that our way to approximate \\rho_T is not optimal. We use simple sample averaging for the convenience of analysis. Better approximation by assigning more weight to the more recent samples leads to more challenges in theoretical analysis. 
We have added some discussion about this in the Section 3.1.\n\nThe stepsize parameter of the discretized Langevin chain does not affect model performance a lot as long as the stepsize lies in an appropriate range. To verify this, following SteinGAN with a simple Gaussian-Bernoulli Restricted Boltzmann Machines as the energy-based model (https://github.com/DartML/SteinGAN), we conducted an extra experiment on the MNIST dataset with MacGAN. We used the annealed importance sampling to evaluate log-likelihoods. Below are the log-likelihoods by varying the stepsize. More details are included in the appendix D.4.\n\nstepsize: \t\t6e-4\t2.4e-3\t3.6e-3\t6e-3\t1e-2\t1.5e-2\nlog-likelihood:\t-800\t-760\t-752\t-762\t-758\t-775" ]
[ 3, 6, 6, -1, -1, -1 ]
[ 3, 4, 4, -1, -1, -1 ]
[ "iclr_2018_rkcya1ZAW", "iclr_2018_rkcya1ZAW", "iclr_2018_rkcya1ZAW", "r1_fQy9lz", "HkCNqISxM", "Hy8TV-qgf" ]
iclr_2018_rJLTTe-0W
Bayesian Time Series Forecasting with Change Point and Anomaly Detection
Time series forecasting plays a crucial role in marketing, finance and many other quantitative fields. A large number of methodologies have been developed on this topic, including ARIMA, Holt–Winters, etc. However, their performance is easily undermined by the existence of change points and anomaly points, two structures commonly observed in real data, but rarely considered in the aforementioned methods. In this paper, we propose a novel state space time series model, with the capability to capture the structure of change points and anomaly points, as well as trend and seasonality. To infer all the hidden variables, we develop a Bayesian framework, which is able to obtain distributions and forecasting intervals for time series forecasting, with provable theoretical properties. For implementation, an iterative algorithm with Markov chain Monte Carlo (MCMC), Kalman filter and Kalman smoothing is proposed. In both synthetic data and real data applications, our methodology yields a better performance in time series forecasting compared with existing methods, along with more accurate change point detection and anomaly detection.
rejected-papers
Thank you for submitting your paper to ICLR. The consensus from the reviewers is that this is not quite ready for publication. There is also concern about whether ICLR, with its focus on representation learning, is the right venue for this work. One of the reviewers initially submitted an incorrect review, but this mistake has now been rectified. Apologies that this was not done sooner, which would have allowed you to address their concerns.
train
[ "HJ0Hc82gM", "HJhn9OtxG", "HJL1pxqeG", "rkAJXOnQz", "BJljaSkzf", "rkqRmuhQz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "\n\nSummary:\n\nThis paper develops a state space time series forecasting model in the Bayesian framework, jointly detects anomaly and change points. Integrated with an iterative MCMC method, the authors develop an efficient algorithm and use both synthetic and real data set to demonstrate that their algorithms outperform many other state-of-art algorithms. \n\nMajor comments:\nIn the beginning of section 3, the authors assume that all the terms that characterize the change-points and anomaly points are normally distributed with mean zero and different variance. However, in classic formulation for change-point or anomaly detection, usually there is also a mean shift other than the variance change. For example, we might assume $r_t \\sim N(\\theta, \\sigma_r^2)$ for some $\\theta>0$ to demonstrate the positive mean shift. I believe that this kind of mean shift is more efficient to model the structure of change-point. \n\nMy main concern is with the novelty. The work does not seem to be very novel.\n\nMinor comments:\n\n1. In the end of the page 2, the last panel is the residual, not the spikes. \n\n2. In page 12, the caption of figure 5 should be (left) and (right), not (top) and (bottom).", "The paper introduces a Bayesian model for timeseries with anomaly and change points besides regular trend and seasonality. It develops algorithms for inference and forecasting. The performance is evaluated and compared against state-of-the-art methods on three data sets: 1) synthetic data obtained from the generative Bayesian model itself; 2) well-log data; 3) internet traffic data.\n\nOn the methodological side, this appears to be a solid and significant contribution, although I am not sure how well it is aligned with the scope of ICLR. The introduced model is elegant; the algorithms for inference are non-trivial.\n\nFrom a practical perspective, one cannot expect this contribution to be ground-breaking, since there has been more than 40 years of work on time series forecasting, change point and anomaly detection. In some situations the methodology proposed here will work better than previous approaches (particularly in the situation where the data comes from the Bayesian model itself - in that case, there clearly is no better approach), in other cases (which the paper might have put less emphasis on), previous approaches will work better. To position this kind of work, I think it is important the authors discuss the limitations of their approach. Some guidelines on when or when not to use it would be valuable. Clearly, these days one cannot introduce methodology in this area and expect it to outperform existing methods under all circumstances (and hence practitioners to always choose it over any other existing method).\n\nWhat is surprising is that relatively simple approaches like ETS or STL work pretty much equally well (in some cases even better in terms of MSE) than the proposed approach, while more recent approaches - like BSTS - dramatically fail. It would be good if the authors could comment on why this might be the case.\n\nSummary:\n+ Methodology appears to be a significant, solid contribution.\n- Experiments are not conclusive as to when or when not to choose this approach over existing methods\n- writing needs to be improved (large number of grammatical errors and typos, e.g. 'Mehtods')", "Minor comments:\n- page 3. “The observation equation and transition equations together (i.e., Equation (1,2,3)) together define “ - one “together” should be removed\n- page 4. 
“From Figure 2, the joint distribution (i.e., the likelihood function ” - there should be additional bracket\n- page 7. “We can further integral out αn “ -> integrate out\n\nMajor comments:\nThe paper is well-written. The paper considers structural time-series model with seasonal component and stochastic trend, which allow for change-points and structural breaks.\n\nSuch type of parametric models are widely considered in econometric literature, see e.g.\n[1] Jalles, João Tovar, Structural Time Series Models and the Kalman Filter: A Concise Review (June 19, 2009). FEUNL Working Paper No. 541. Available at SSRN: https://ssrn.com/abstract=1496864 or http://dx.doi.org/10.2139/ssrn.1496864 \n[2] Jacques J. F. Commandeur, Siem Jan Koopman, Marius Ooms. Statistical Software for State Space Methods // May 2011, Volume 41, Issue 1.\n[3] Scott, Steven L. and Varian, Hal R., Predicting the Present with Bayesian Structural Time Series (June 28, 2013). Available at SSRN: https://ssrn.com/abstract=2304426 or http://dx.doi.org/10.2139/ssrn.2304426 \n[4] Phillip G. Gould, Anne B. Koehler, J. Keith Ord, Ralph D. Snyder, Rob J. Hyndman, Farshid Vahid-Araghi, Forecasting time series with multiple seasonal patterns, In European Journal of Operational Research, Volume 191, Issue 1, 2008, Pages 207-222, ISSN 0377-2217, https://doi.org/10.1016/j.ejor.2007.08.024.\n[5] A.C. Harvey, S. Peters. Estimation Procedures for structural time series models // Journal of Forecasting, Vol. 9, 89-108, 1990\n[6] A. Harvey, S.J. Koopman, J. Penzer. Messy Time Series: A Unified approach // Advances in Econometrics, Vol. 13, pp. 103-143.\n\nThey also use Kalman filter and MCMC-based approaches to sample posterior to estimate hidden components.\n\nThere are also non-parametric approaches to extraction of components from quasi-periodic time-series, see e.g.\n[7] Artemov A., Burnaev E. Detecting Performance Degradation of Software-Intensive Systems in the Presence of Trends and Long-Range Dependence // 16th International Conference on Data Mining Workshops (ICDMW), IEEE Conference Publications, pp. 29 - 36, 2016. DOI: 10.1109/ICDMW.2016.0013\n[8] Alexey Artemov, Evgeny Burnaev and Andrey Lokot. Nonparametric Decomposition of Quasi-periodic Time Series for Change-point Detection // Proc. SPIE 9875, Eighth International Conference on Machine Vision, 987520 (December 8, 2015); 5 P. doi:10.1117/12.2228370;http://dx.doi.org/10.1117/12.2228370\n\nIn some of these papers models of structural brakes and change-points are also considered, see e.g. \n- page 118 in [6]\n- papers [7, 8]\n\nThere were also Bayesian approaches for change-point detection, which are similar to the model of change-point, proposed in the considered paper, e.g.\n[9] Ryan Prescott Adams, David J.C. MacKay. Bayesian Online Changepoint Detection // https://arxiv.org/abs/0710.3742\n[10] Ryan Turner, Yunus Saatçi, and Carl Edward Rasmussen. Adaptive sequential Bayesian change point detection. In Zaïd Harchaoui, editor, NIPS Workshop on Temporal Segmentation, Whistler, BC, Canada, December 2009.\n\nThus,\n- the paper does not provide comparison with relevant econometric literature on parametric structural time-series models,\n- the paper does not provide comparison with relevant advanced change-point detection methods e.g. [7,8,9,10]. The comparison is provided only with very simple methods,\n- the proposed model itself looks very similar to what can be found across econometric literature,\n- the datasets, used for comparison, are very scarce. 
There are datasets for anomaly detection in time-series data, which should be used for extensive comparison, e.g. Numenta Anomaly Detection Benchmark.\n\nTherefore, also the paper is well-written, \n- it lacks novelty,\n- its topic does not perfectly fit topics of interest for ICLR,\nSo, I do not recommend this paper to be published.", "Dear Reviewer,\n\nThank you for your comments. We have addressed them accordingly. Please see below for our response point by point.\n\n\n\nMinor comments:\n- page 3. “The observation equation and transition equations together (i.e., Equation (1,2,3)) together define “ - one “together” should be removed\n- page 4. “From Figure 2, the joint distribution (i.e., the likelihood function ” - there should be additional bracket\n- page 7. “We can further integral out αn “ -> integrate out\n\n>> Thanks, we corrected the typos. \n\nThus,\n- the paper does not provide comparison with relevant econometric literature on parametric structural time-series models,\n\n>> Econometric literature of refs [1, 2, 3, 4, 5] do not properly consider and process the changing point and anomalies although they perform time-series forecasting.\n\n- the paper does not provide comparison with relevant advanced change-point detection methods e.g. [7,8,9,10]. The comparison is provided only with very simple methods,\n\n>> We compared state-of-the-art Bayesian Structural Time Series (BSTS), Prophet R package by Taylor & Letham (2017), , Exponential Smoothing State Space Model (ETS). The results are shown in Tables 2-6. The idea of [9--10] are quite similar to them. \n\n- the proposed model itself looks very similar to what can be found across econometric literature,\n\n>> The econometric literature ignores the proper treatment of changing point and anomalies. Bayesian modeling part is also different regarding estimation of posterior for hidden components given different prior distributions.\n>>We add the “related work section” to illustrate the differences between our work and the existing works.\n\n- the datasets, used for comparison, are very scarce. There are datasets for anomaly detection in time-series data, which should be used for extensive comparison, e.g. Numenta Anomaly Detection Benchmark.\n\n>> The experimental study demonstrates that our method outperforms the other methods as well on other benchmarks for anomaly detection. Our ultimate goal is time series forecasting conditional on structure changes. It might not be that meaningful to compare in Numenta Anomaly Detection Benchmark since Anomaly Detection is kind of secondary endpoint.\n\nTherefore, also the paper is well-written, \n- it lacks novelty,\n- its topic does not perfectly fit topics of interest for ICLR,\n\n>> There are three goals of our work: (1) time series forecasting; (2) change point detection; (3) anomalies detection; these three goals are jointly put in one unified framework by modeling using state-space bayesian modeling. The change point and anomalies are detected for better forecasting giving time series (structure) input. Due to the strong description power of bayesian state-space model, the results of model prediction and abnormal and change points detection are mutually improved. Compared to the existing bayesian modeling, our work is novel by sampling posterior to estimate hidden components given the individual Bernoulli prior of changing point and anomalies.\n>> The paper is related to structure input representation and state-space modeling, which in fact is relevant to ICLR audience. 
We also highlighted the novelty of the work in “contribution of this work” on Page 2 in the updated version. \n", "Dear Reviewer,\n\nThank you for your time and effort on reviewing papers. Unfortunately it seems like you uploaded a WRONG review. This is possibly a review for some other paper titled \"Deformation of Bregman divergence and its application\", not for ours.", "Dear Reviewer,\n\nThank you for reviewing our paper and thank you for appreciating our work. We have made changes following your suggestions. Please see below for our response point by point. Thank you.\n\n\nThe paper introduces a Bayesian model for timeseries with anomaly and change points besides regular trend and seasonality. It develops algorithms for inference and forecasting. The performance is evaluated and compared against state-of-the-art methods on three data sets: 1) synthetic data obtained from the generative Bayesian model itself; 2) well-log data; 3) internet traffic data.\n\n>> Thanks for the appreciation of our work. \n\nFrom a practical perspective, one cannot expect this contribution to be ground-breaking, since there has been more than 40 years of work on time series forecasting, change point and anomaly detection. In some situations the methodology proposed here will work better than previous approaches (particularly in the situation where the data comes from the Bayesian model itself - in that case, there clearly is no better approach), in other cases (which the paper might have put less emphasis on), previous approaches will work better. To position this kind of work, I think it is important the authors discuss the limitations of their approach. \n\n>> As most (if not all) of the time series works, our method cannot work in every case. For example, when the time series does not have clear decomposition structure as modeled in Eqs.(1-3), the model may not correctly recover the hidden components and correspondingly perform forecasting.\n\nSome guidelines on when or when not to use it would be valuable. Clearly, these days one cannot introduce methodology in this area and expect it to outperform existing methods under all circumstances (and hence practitioners to always choose it over any other existing method).\n\nWhat is surprising is that relatively simple approaches like ETS or STL work pretty much equally well (in some cases even better in terms of MSE) than the proposed approach, while more recent approaches - like BSTS - dramatically fail. It would be good if the authors could comment on why this might be the case.\n\n>> BSTS fails in some cases due to the mismatch between model assumptions and actual data distribution and generation process. Usually more complicated a model is, more likely it will fail when the data structure does not satisfy its underlying assumptions. In those cases, simple approaches may achieve better performance, which is not surprising. Nevertheless, our proposed method obtains the best result.\n\nSummary:\n+ Methodology appears to be a significant, solid contribution.\n- Experiments are not conclusive as to when or when not to choose this approach over existing methods\n- writing needs to be improved (large number of grammatical errors and typos, e.g. 'Methods')\n\n>> We will incorporate the discussions regarding model strength, application conditions, and the limitations in final version. \n>> We already fix the typos in updated version. \n" ]
[ 5, 6, 4, -1, -1, -1 ]
[ 5, 3, 5, -1, -1, -1 ]
[ "iclr_2018_rJLTTe-0W", "iclr_2018_rJLTTe-0W", "iclr_2018_rJLTTe-0W", "HJL1pxqeG", "HJ0Hc82gM", "HJhn9OtxG" ]
iclr_2018_r1drp-WCZ
State Space LSTM Models with Particle MCMC Inference
Long Short-Term Memory (LSTM) is one of the most powerful sequence models. Despite the strong performance, however, it lacks the nice interpretability of state space models. In this paper, we present a way to combine the best of both worlds by introducing State Space LSTM (SSL), which generalizes the earlier work \cite{zaheer2017latent} of combining topic models with LSTM. However, unlike \cite{zaheer2017latent}, we do not make any factorization assumptions in our inference algorithm. We present an efficient sampler based on the sequential Monte Carlo (SMC) method that draws from the joint posterior directly. Experimental results confirm the superiority and stability of this SMC inference algorithm on a variety of domains.
rejected-papers
Thank you for submitting your paper to ICLR. The consensus from the reviewers is that this is not quite ready for publication. The work is related to (although different from) Gu et al., Neural Adaptive Sequential Monte Carlo, NIPS 2015, and it would be useful to point this out in the related work section.
train
[ "SJLHg2OxG", "SyI_srKgz", "r1qsxyTlG", "B134ZGMEM", "Bk3ogjt7G", "S1YA4adQz", "rkFg-CTMM", "S1-q0p6Mz", "B1qlAppGG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author" ]
[ "This article presents an approach for learning and inference in nonlinear state-space models (SSM) based on LSTMs. Learning is done using a stochastic EM where Particle PMCM is used to sample state trajectories.\n\nThe model is presented assuming that SSMs are linear. This is not necessarily the case since nonlinear SSMs have been used for a long time (see for example Ljung, 1999, \"System Identification, Theory for the User\"). The presented model is a nonlinear SSM with a particular structure that uses LSTMs.\n\nThe model described in the paper is Markovian: if one defines the variable sz_t = {s_t, z_t} there exists a Markov chain for the latent state sz:\n\nsz_t -> sz_{t+1} -> sz_{t+2} -> ...\n\nMarginalizing the latent variables s_t leads to a structure that, in general, is not Markovian. The authors claim that this marginalization \"allows the SSL to have non-Markovian state transition\". The word \"allows\" may mislead the reader in thinking that the model has gained some appealing property whereas the model is still essentially Markovian as evidenced by the Markov chain in sz. Any general algorithm for inference in nonlinear Markovian models could be used for inference of sz.\n\nThe algorithm used for inference and learning is stochastic EM with PMCMC but the authors do not cite important prior work such as: Lindsten (2013) \"An efficient stochastic approximation EM algorithm using conditional particle filters\"\n\n\nPros:\n\nThe model is sound.\n\nThe overall structure of the paper is good.\n\n\nCons:\n\nThe authors formulate the problem in such a way that they are forced to use an algorithm for non-Markovian models when they could have conserved the Markovian structure by choosing the appropriate parameterization.\n\nThe presentation of state-space models, filtering and smoothing shows some lack of familiarity with the literature. The control theory literature has dealt with nonlinear SSMs for decades and there is recent work in the machine learning community on nonlinear SSMs, e.g. Gaussian Process SSMs. \n\nI would advise against the use of non-English expressions unless they are used precisely:\n\n - sine qua non: LSTMs are not literally an indispensable model for sequence modeling nowadays. If the use of Latin was unavoidable, \"de facto standard\" would have been slightly more accurate.\n\n - bona fide: I am not sure what the authors wanted to say.\n\n - naívely: the correct spelling would be naïvely or naively.", "[After author feedback]\nI would suggest that the authors revise the literature study and contributions to more accurately reflect prior work.\n\n[Original review]\nThe authors propose state space models where the transition probabilities are defined using an LSTM. For inference the authors propose to make use of Monte Carlo expectation maximization.\n\nThe model proposed seems to be a special case of previously proposed models that are mentioned in the 2nd paragraph of the related works section, and e.g. the Maddison et al. (2017) paper. The inference method has also been studied previously (but not to my knowledge applied to SSLs/SRNNs), see the following review papers and references therein:\nSchön, Lindsten, Dahlin, W˚agberg, Naesseth, Svensson, Dai, \"Sequential Monte Carlo Methods for System Identification\", 2015\nKantas, Doucet, Singh, Maciejowski, Chopin, \"On Particle Methods for Parameter Estimation in State-Space Models\", 2015\n\nGiven this it is unclear to me what the novel contributions are. 
Perhaps the authors can elaborate on this?\n\nMinor comments:\n- Note that generally a state space model only has the Markov assumption, there is no restrictions on the transition and observation models.\n- EKF also requires Gaussian noise\n- It is a bit unclear what is meant by \"forward messages\" e.g. below eq. (6). For this model I believe the exact would generally be unavailable (at least for continuous models) because they would depend on previous messages.\n- Eq. (12) and (14) are exactly the same? The text seems to indicate they are not.\n- The optimal proposal is only locally optimal, minimizing the incremental weight variance\n- \"w\" should be \"x\" in eq. (20)\n", "This paper introduces a novel extension of the LSTM which incorporates stochastic inputs at each timestep. These stochastic inputs are themselves dependent on the LSTM state at the previous timestep. Considering the stochastic dependencies, this then yields a highly flexible non-Markov state space model, where the latent variable transitions are partially parameterized by an LSTM update.\n\nNaturally, the challenges are then efficiently estimating parameters and performing inference over the latent states. Here, SMC (and conditional SMC / particle Gibbs) are used for inference over the latent states z. A particularly nice touch is that even when the LSTM model is used for the transitions in the latent space, so long as the conditional distributions p(z_t | z_{1:t-1}) are conjugate with the emission distribution then it is possible to compute the optimal forward filtering proposal distribution in closed form, as done for the conditionally Gaussian (with affine Gaussian observations) and conditionally multinomial models considered here. Note that this really is a special feature of the models under consideration, though: for example, if the emission distribution p(x_t | z_t) is instead a *nonlinear* Gaussian, then one would have to fall back to bootstrap proposals. This probably deserves some mention: equations (13) are not, generally, tractable to integrate or normalize.\n\nI think this paper is missing a few necessary details on how the overall optimization algorithm proceeds, which I would like to see in an update. I understand that particle Gibbs updates (or SMC) are used to approximate the posterior distribution in a Monte Carlo EM algorithm. However, this does leave some questions:\n\n1. For the M step, how are the \\omega parameters (of the LSTM) handled in equation (8)? I understand that due to the particular models considered, maximum likelihood estimates of \\phi can be found in closed form. However, that’s not the case for \\omega. Is a gradient descent algorithm run to convergence? Or is a single gradient step taken, interleaved with a single PG update? Or something else?\n\n2. How reliably does the algorithm as a whole converge? Monte Carlo EM does not in general have convergence guarantees of “standard” EM (i.e. each step is not guaranteed to monotonically improve the lower bound). This might be fine! But, I think requires a bit of discussion.\n\n3. Is it necessary to include a replenishing operation (or independent MCMC steps) in the particle Gibbs algorithm? A known issue when running an iterated conditional SMC algorithm like this is that path degeneracy can make it very difficult for the PG kernel to mix well over the early time steps in the LSTM. Does this issue appear here? How many particles P are needed to efficiently mix, when considering time series of length T?", "Note that the Frigola et al. 
(2013) does approximate Bayesian inference using PGAS, whereas the 2014 paper I mentioned does it using PSAEM which is highly related to the way you propose to do inference. \n\nThis is not the first paper that proposes particle inference for LSTM/RNN-based models, see e.g. FIVO/AESMC/VSMC papers as well as the Gu et al. \"Neural Adaptive Sequential Monte Carlo\". \n\nNote further that many of the methods for particle filter-based learning described in the two references in the original review can be applied to the model without a P^2 complexity. The model proposed can (as R2 pointed out) be interpreted as Markovian with a degenerate transition distribution. It is well-known that SMC-based methods can be straightforwardly applied in this case.", "\nWe thank the reviewer for the comment and the point is taken. \n\nHowever, we would like to mention that we are aware of the papers referred to above. For example, in the paragraph above Eq. (19), (Frigola et al. 2013) and (Lindsten et al. 2014) are cited as examples for the application of particle methods in non-Markov models. To re-emphasize, by **no** means we are claiming that the proposed algorithm is the first application of particle methods for non-Markov models. \n\nWe believe that matching inference procedures to models is not trivial and is an art. This is exemplified by the plethora of papers being published, c.f. (Lindsten & Schön, 2013) inter alia, including the ones pointed out by the knowledgeable reviewer, for applying particle filters and PMCMC methods to various different but aptly chosen models. Every small detail matters.\n\nAs stated earlier, our goal is to enhance classical Bayesian model with the flexibility of deep neural networks, and apply the right inference algorithm, which is both rigorous and practical, and furthermore does not need strong assumptions (e.g. factorized conditionals, biased gradient approximation, etc). Moreover, to the best of our knowledge, the use of particle inference methods in neural sequence models (RNN/LSTM) is novel.\n\nWe did not use particle smoothing because it requires O(P^2) complexity for P particles. Kindly refer to the second last paragraph of our response to AnonReviewer2. \n\nReferences:\nLindsten, F., & Schön, T. B. (2013). Backward Simulation Methods for Monte Carlo Statistical Inference. Foundations and Trends in Machine Learning, 6(1), 1–143.\n", "Thank you for your response. Note that while the papers mentioned focus on Markovian models there is nothing limiting their use to this specific class of models:\n\nFor examples where PSAEM has been applied to non-Markovian models see e.g.\n* Frigola et al., Identification of Gaussian Process State-Space Models with Particle Stochastic Approximation EM, 2014\n* Svensson et al., Identification of jump Markov linear models using particle filters, 2014\n\nFor examples where particle filters and PMCMC methods have been applied to non-Markovian models, see e.g.\n* Wood et al., A new approach to probabilistic programming inference, 2014\n* Naesseth et al., Sequential Monte Carlo for Graphical Models, 2014\n* Lindsten et al., Particle Gibbs with Ancestor Sampling, 2014", "We thank the reviewer for detailed comments, particularly the important references and the usage of latin phrases. However, we would also like to clarify some misunderstandings regarding the paper. \n\nIn the paper, we proposed an instantiation of non-linear non-Markov state space model where the transition probabilities are defined using an LSTM. 
There are several prior work about non-linear SSM, like those referenced in first paragraph of Related Works and and the second paragraph of Section 3. For example we discussed EKF for general nonlinear transition/emission functions (Reviewer1 also points out limitation of EKF: only applicable to Gaussian noise). However, such models did not cater to our need of being able to handle structured discrete data while at the same time have long history dependency. Therefore we are certainly not claiming that the proposed model is the first “nonlinear extension” to SSMs. Rather, we consider LSTM as yet another form of nonlinear transition function, but a particularly interesting one that shows outstanding performance in sequence modeling.\n\nFurthermore, the LSTM transition function brings not only nonlinearity, but also non-Markovianity, which is the next point we would like to clarify. Indeed, it is true that in the joint space of LSTM state and SSM state, the model is Markov. However, it does not do justice to say such jointly Markov model does not bring in any appealing property. Consider LSTM: in the joint space of LSTM state and observation, the model is Markov as well, since conditioned on the pair (state, observation) at time t, the pair at time t-1 is independent of the pair at time t+1, but this view of LSTM gives no insight and no inference has been proposed using this view. One can compare this to a vanilla Markov chain over the observation space, for instance the bigram language model. The gain brought by the jointly Markov model (LSTM) over the marginally Markov model (bigram) is apparent. \n\nLast but not least, the question arises that since the proposed model is jointly Markov in (s_t, z_t), why not use algorithm that assumes Markovianity. Indeed, one could derive a particle smoothing algorithm for the pair (s_t, z_t), however it has O(P^2) time complexity, where P is the number of particles (Schön et al. 2015). Although there exist methods that reduce the time complexity of particle smoothers, such as (Klaas et al., 2006), they still rely on asymptotics over P. As noted in the paper, the choice of particle gibbs is not only to accommodate non-Markov transition, but also to avoid simulating too many particles. \n\nWe appreciate the suggestions on the latin phrases. They are fixed in the updated draft. \n\n\nReferences:\nSchön, Thomas Bo, et al. \"Sequential Monte Carlo Methods for System Identification.\" Proceedings of the 17th IFAC Symposium on System Identification, Beijing, China, October 19-21, 2015.. Vol. 48. 2015.\nKlaas, M., Briers, M., De Freitas, N., Doucet, A., Maskell, S., & Lang, D. \"Fast particle smoothing: If I had a million particles.\" Proceedings of the 23rd international conference on Machine learning. ACM, 2006.\n", "We thank the reviewer for valuable comments and detailed feedback.\n\nWe would like to highlight the main contributions of the work:\n\n- As correctly pointed out by the reviewer, we proposed a simple framework of state space models where the transition probabilities are defined using an LSTM and observation probabilities are parametric. Among other advantages, this design enables ease in handling structured discrete data and discrete latent variables, unlike plethora of existing work on stochastic RNNs and its variants. 
These latter models extend RNN by combining with a deep generative model such as VAE at the output layer, which allows for impressive performance on structured continuous data such as image and sound, handling structured discrete data, but handling discrete latent variables is not as straightforward as in SSM. (In fact, stochastic gradient estimator for discrete latent variables is an active research direction, for instance Gumbel-softmax/Concrete distribution, REBAR, RELAX estimators.)\n\n- In the proposed model as the transition probabilities are defined using an LSTM (as correctly pointed out by the reviewer), consequently the model is not Markovian. Thus, existing works for nonlinear SSMs, for instance in (Lindsten, 2013) and (Schön et al., 2015) assume a Markov transition in the derivation of the algorithm, which is not suitable for our proposed model. We show that even under non-Markov state transition, particle filter or particle gibbs can be used, and furthermore not only the bootstrap proposal, but also the locally optimal proposal can be efficiently evaluated in some examples.\n\nAt a high level, we demonstrated a way to enhance a classical Bayesian model (good for interpretability and structured discrete data) with the flexibility of deep neural networks.\n\nRegarding the confusion on the “forward messages”, we will clarify them by clearly defining them to be the quantities that are computable in the forward pass, as in forward-backward message passing algorithm. As noted in Example 4.1, the messages are available in closed form for linear Gaussian case. Note that this does not necessarily mean restricted flexibility of the state transition, since the rich function class of LSTM is encoded in g(s). This is a similar to VAE in spirit, which also uses Gaussian as the variational distribution. \n\nWe also thank the reviewer for pointing out the typos. In the equation for factorization assumption, the conditioned past z variables was meant to be the assignments from the previous iteration. Indeed, such factorization does not hold, which is why it is an “assumption”. This was fixed in the updated draft. Also we appreciate the reviewer for pointing out another plus point for the proposed work that EKF is limited to Gaussian noise, but no such limitation exists for SSL.\n\n\nReferences:\nLindsten, Fredrik. \"An efficient stochastic approximation EM algorithm using conditional particle filters.\" Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on. IEEE, 2013.\nSchön, Thomas Bo, et al. \"Sequential Monte Carlo Methods for System Identification.\" Proceedings of the 17th IFAC Symposium on System Identification, Beijing, China, October 19-21, 2015.. Vol. 48. 2015.\n\n", "We thank the reviewer for the insightful comments and raising important questions. We are glad that reviewer found the work to be novel and having a “nice touch”. Kindly find below response to the question.\n\nThe inference procedure presented in the paper is not an ad-hoc method. We would be happy to provide more discussion about this in the paper. Our overall inference scheme is an instantiation of stochastic generalized EM (Neal et al, 1998). Such methods have been theoretically studied in detailed c.f. (Nielsen, 2000), (Delvon et al. 1999). 
We agree with the reviewer that such methods do not possess the property of monotonically increasing the lower bound, however under certain regularity conditions (which are met if we have LSTM and exponential family observation) these method in expectation reach a critical point. With more assumptions, even stronger results have been proved.\n\nFurther in the M step one need not find the optimizer but just improve the likelihood in expectation. This can be achieved, e.g., by taking a few number of stochastic gradient steps, as we did for LSTM updates. To be specific in case of application to discrete data (Example 2), we made a pass over the dataset whereas for phi we used the closed form optimizer (Note the optimization for LSTM and phi are independent given z).\n\nInitially we also suspected some kind of path degeneracy to occur. However in our experiments, we did not see the need for a replenishing operation. In particular, we started off with a variant of PG called PGAS (particle gibbs ancestral sampling) by (Lindsten et al. 2014), which specifically targets to resolve the path degeneracy issue in PG. We tried the approximation for non-Markovian model as mentioned in (Lindsten et al. 2014) with lag = 1, however it did not provide significant improvement over much faster and simple strategy of increasing the number of particles P from 1 to K during training. In general we observed that in the initial phase the particles do not collapse towards a single path; however after 100 epochs the proposed particle paths agree at most of the time points (Please refer to Figure 5 for an illustration).\n\nAlso we will fix small typos and add clarifications regarding the non-conjugate cases when the marginalization in alpha message cannot be computed in closed form and the normalization cannot be performed efficiently, that one would have to resort to methods like bootstrap proposals.\n\n\nReferences:\nNeal, Radford M., and Geoffrey E. Hinton. \"A view of the EM algorithm that justifies incremental, sparse, and other variants.\" Learning in graphical models. Springer Netherlands, 1998. 355-368.\nNielsen, Søren Feodor. \"The stochastic EM algorithm: estimation and asymptotic results.\" Bernoulli 6.3 (2000): 457-489.\nDelyon, Bernard, Marc Lavielle, and Eric Moulines. \"Convergence of a stochastic approximation version of the EM algorithm.\" Annals of statistics (1999): 94-128.\nLindsten, Fredrik, Michael I. Jordan, and Thomas B. Schön. \"Particle gibbs with ancestor sampling.\" Journal of Machine Learning Research 15.1 (2014): 2145-2184.\n\n" ]
[ 3, 5, 7, -1, -1, -1, -1, -1, -1 ]
[ 5, 5, 5, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_r1drp-WCZ", "iclr_2018_r1drp-WCZ", "iclr_2018_r1drp-WCZ", "Bk3ogjt7G", "S1YA4adQz", "S1-q0p6Mz", "SJLHg2OxG", "SyI_srKgz", "r1qsxyTlG" ]
iclr_2018_S1tWRJ-R-
Joint autoencoders: a flexible meta-learning framework
The incorporation of prior knowledge into learning is essential in achieving good performance based on small noisy samples. Such knowledge is often incorporated through the availability of related data arising from domains and tasks similar to the one of current interest. Ideally one would like to allow both the data for the current task and for previous related tasks to self-organize the learning system in such a way that commonalities and differences between the tasks are learned in a data-driven fashion. We develop a framework for learning multiple tasks simultaneously, based on sharing features that are common to all tasks, achieved through the use of a modular deep feedforward neural network consisting of shared branches, dealing with the common features of all tasks, and private branches, learning the specific unique aspects of each task. Once an appropriate weight sharing architecture has been established, learning takes place through standard algorithms for feedforward networks, e.g., stochastic gradient descent and its variations. The method deals with meta-learning (such as domain adaptation, transfer and multi-task learning) in a unified fashion, and can easily deal with data arising from different types of sources. Numerical experiments demonstrate the effectiveness of learning in domain adaptation and transfer learning setups, and provide evidence for the flexible and task-oriented representations arising in the network.
rejected-papers
Thank you for submitting your paper to ICLR. The consensus from the reviewers is that this is not quite ready for publication. In particular, the experimental results are promising, but further work is required to fully demonstrate the efficacy of the approach.
train
[ "H1cgp9qxf", "rkseIXolf", "HJzFmcNZM", "ByQN_opGz", "By6WviTfz", "ByOi8oaGf", "Byv5BsTzM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "The work proposed a generic framework for end-to-end transfer learning / domain adaptation with deep neural networks. The idea is to learn a joint autoencoders, containing private branch with task/domain-specific weights, as well as common branch consisting of shared weights used across tasks/domains, as well as task/domain-specific weights. Supervised losses are added after the encoders to utilize labeled samples from different tasks. Experiments on the MNIST and CIFAR datasets showed improvements over baseline models. Its performance is comparable to / worse than several existing deep domain adaptation works on the MNIST, USPS and SVHN digit datasets.\n\nThe structure of the paper is good, and easy to read. The idea is fairly straight-forward. It reads as an extension of \"frustratingly easy domain adaptation\" to DNN (please cite this work). Different from most existing work on DNN for multi-task/transfer learning, which focuses on weight sharing in bottom layers, the work emphasizes the importance of weight sharing in deeper layers. The overall novelty of the work is limited though. \n\nThe authors brought up two strategies on learning the shared and private weights at the end of section 3.2. However, no follow-up comparison between the two are provided. It seems like most of the results are coming from the end-to-end learning. \n\nExperimental results:\nsection 4.1: Figure 2 is flawed. The colors do not correspond to the sub-tasks. For example, there are digits 1, 4 in color magenta, which is supposed to be the shared branch of digits of 5~9. Vice versa. \nFrom reducing the capacity of JAE to be the same as the baseline, most of the improvement is gone. It is not clear how much of the improvement will remain if the baseline model gets to see all the samples instead of just those from each sub-task. \n\nsection 4.2.1: The authors demonstrate the influence of shared layer depth in table 2. While it does seem to matter for tasks of dissimilar inputs, have the authors compare having a completely shared branch or sharing more than just a single layer?\n\nThe authors suggested in section 4.1 CIFAR experiment that the proposed method provides more performance boost when the two tasks are more similar, which seems to be contradicting to the results shown in Figure 3, where its performance is worse when transferring between USPS and MNIST, which are more similar tasks vs between SVHN and MNIST. Do the authors have any insight?", "The paper addresses the question of identifying 'shared features' in neural networks trained on different datasets. Concretely, suppose you have two datasets X1, X2 and you would like to train auto-encoders (with potential augmentation with labeled examples) for the two datasets. One could work on the two separately; here, the authors propose sharing some of the weights to try and exploit/identify common features between the two datasets. The authors formalize by essentially looking to optimize an auto-encoder that take inputs of the form (x1, x2) and employing architectures that allow few nodes to interact with both x1,x2. The authors then try to minimize an appropriate loss function by standard methods. \n\nThe authors then apply the above methodology to transfer learning between various datasets. The empirical results here are interesting but not particularly striking; the most salient feature perhaps is that the architectures and training algorithms are perhaps a bit simpler but the overall improvements over existing methods are not too exciting. 
", "\n\nThe paper focuses on learning common features from multiple domains data in a unsupervised and supervised learning scheme. Setting this as a general multi task learning, the idea consists in jointly learning autoecnoders, one for each domain, for the multiples domain data in such a way that parts of the parameters of the domain autoencoder are shared. Each domain/task autoencoder then consists in a shared part and a private part. The authors propose a variant of the model in the case of supervised learning and end up with a general architecture for multi-task, semi-supervised and transfer learning.\n\nThe presentation of the paper is good and the paper is easy to follow and explores the rather intuitive and simple idea of sharing parameters between related tasks.\n\nExperimental show some interesting results. First unsupervised experiments on Mnist data show improved MSe of joint autoecnoders but are these differences really significant (e.g. from 0.56 to 5.52) ? Moreover i am not sure to understand the meaning of separation criterion computed on t-sne of hidden representations. Results of Table 1 show improved reconstruction performance (MSE?) of joint auto encoders over independent ones for unrelated pairs such as airplane and horses. I a not sure ti understand why this improvement occurs even with very different classes. The investigation on the depth where sharing should occur is quite interesting and related to the usual idea of higher transferable property low level features. Results on transfer are the most interesting ones actually but do not seem to improve so much over baselines. \n\n", "* It reads as an extension of \"frustratingly easy domain adaptation\" to DNN (please cite this work).\n\nThere is a sizable list of references we intend to include in the next version, including the work you refer to, as well as (among others) [4], [5], [6], [7]. We attempted to comply by the \"strong recommendation\" to keep the references to a single page, and had to retain only those works we were explicitly influenced by, or recent state-of-the-art deep learning papers focusing on domain adaptation and explicit extraction of separate shared and task-related features.\n\n* The authors brought up two strategies on learning the shared and private weights at the end of section 3.2. However, no follow-up comparison between the two are provided. It seems like most of the results are coming from the end-to-end learning.\n\nThe first paragraph in Section 4.2 provides precisely the sought-for comparison. We find that the end-to-end learning approach is both simpler and better, and thus use it for the rest of the experiments. We will add a reference to that conclusion in Section 2.3. \n\nExperiment results:\n\n* section 4.1: Figure 2 is flawed. The colors do not correspond to the sub-tasks. For example, there are digits 1, 4 in color magenta, which is supposed to be the shared branch of digits of 5~9. Vice versa.\n\nIn Figure 2a, all branches are applied to all digits, with the colors representing the data that the branch was exposed to. The idea is that a branch should be more ‘inclined’ to treat digits it never saw as noise. This phenomenon can be observed clearly in the red digits: 0-4 are more dispersed (consider the 0's, 1's and 3' for the most obvious examples) than the rather cluttered 5-9. The fact that the shared branches map 0-4 and 5-9 much more closely than the private ones is quantified in the paper. 
Note that this is distinct from the observation that the common branches containing\nthe shared layer (green and magenta) are much more mixed between themselves than the private branches (red and black). See also reply to AnonReviewer3. However, we agree that the figure is confusing, and it will be reworked. In particular, we intend to split it into four separate ones, for each branch, as well as add more visual evidence for our beliefs.\n\n* From reducing the capacity of JAE to be the same as the baseline, most of the improvement is gone.\n\nThe reduced-capacity JAEs still retain over two thirds (22-24% vs 33-37%) of the observed advantage, therefore most of the advantage remains. \n\n* It is not clear how much of the improvement will remain if the baseline model gets to see all the samples instead of just those from each sub-task\n\nThe baseline models, as a pair, see all of the samples the JAE model sees. \n\n* section 4.2.1: The authors demonstrate the influence of shared layer depth in table 2. While it does seem to matter for tasks of dissimilar inputs, have the authors compare having a completely shared branch or sharing more than just a single layer?\n\nWe did perform various comparisons between different sharing strategies, but so far could not discern an obviously superior option. However, it remains an intriguing question that we will be paying attention to in future research.\n\n* The authors suggested in section 4.1 CIFAR experiment that the proposed method provides more performance boost when the two tasks are more similar, which seems to be contradicting to the results shown in Figure 3, where its performance is worse when transferring between USPS and MNIST, which are more similar tasks vs between SVHN and MNIST. Do the authors have any insight?\n\nRegarding the surprisingly good performance on the SVHN->MNIST task (vs. the CIFAR experiments), the explanation is the setting. Following established protocol (e.g., [3]), we perform the MNIST<->USPS tasks with small subsets of the datasets, whereas SVHN->MNIST is done using the entire dataset. \n\nSee also our reply concerning labeled set size flexibility and transfer learning with multiple tasks - challenges we are able to handle far more naturally than competing approaches.\n\n[4] Weston, Jason, et al. \"Deep learning via semi-supervised embedding.\" Neural Networks: Tricks of the Trade. Springer Berlin Heidelberg, 2012. 639-655.\n[5] S. Parameswaran and K. Q. Weinberger, “Large margin multi-task metric learning,” NIPS 23, pp. 1867–1875, 2010.\n[6] Dumoulin at al., Adversarially Learned Inference, https://arxiv.org/abs/1606.00704\n[7] Devroye, L., Gyoörfi, L., and Lugosi, G. (1996). A Probabilistic Theory of Pattern Recognition. Springer.", "* The empirical results here are interesting but not particularly striking; the most salient feature perhaps is that the architectures and training algorithms are perhaps a bit simpler but the overall improvements over existing methods are not too exciting.\n\nWe believe the architectures and training we use are a lot simpler than most comparable methods. For instance, our model for SVHN->MNIST is an order of magnitude smaller than [1], and we do not require a GAN. \n\nSee also our reply concerning labeled set size flexibility and transfer learning with multiple tasks - challenges we are able to handle far more naturally than competing approaches.\n", "* First unsupervised experiments on Mnist data show improved MSe of joint autoecnoders but are these differences really significant (e.g. 
from 0.56 to 5.52) ?\n\nWe agree that MNIST does not show a lot of improvement, due to its simplicity. Note that our experiments with CIFAR-10 display a significant advantage for the JAE scheme. \n\n* Moreover i am not sure to understand the meaning of separation criterion computed on t-sne of hidden representations.\n\nWe expect the shared branches to map the inputs to relatively similar hidden states, as they both capture the joint features from both datasets. Following the same logic, the task-specific branches should map inputs to relatively distinctly – they learn different mappings and should not be similar. The statistical measure of this difference is given by the Fisher separation criterion, which is indeed small for the shared branches and large for the private ones. \n\n* Results of Table 1 show improved reconstruction performance (MSE?) of joint auto encoders over independent ones for unrelated pairs such as airplane and horses. I a not sure ti understand why this improvement occurs even with very different classes.\n\nOur explanation for the experienced improvement, even with very different classes, is that the various classes of natural images as captured by the CIFAR-10 dataset share \"deep\" features necessary for successful reconstruction. We certainly agree that more similar classes should share more of these features, and our results support this intuition. \n\n* Results on transfer are the most interesting ones actually but do not seem to improve so much over baselines.\n\nWe agree that some of the improvements over existing methods are modest, though by no means all (e.g., SVHN->MNIST, Fig. 3.c). However, we would like to point out that the methods we compare ourselves to either use large, complicated architectures, require computationally expensive training, or both. We believe that the fact that we out-perform such state-of-the-art approaches with a simple concept while also employing much smaller models is compelling evidence in favor of the shared-subspace hypothesis. Moreover, the ability to perform domain adaptation without training a GAN should be of interest, as most successful state-of-the-art methods require training at least one GAN, a notoriously challenging task.\n\nSee also our reply concerning labeled set size flexibility and transfer learning with multiple tasks - challenges we are able to handle far more naturally than competing approaches.\n", "We thank the reviewers for the various points raised. We will reply to each review separately; however, we would like first to point out a contribution of our work that we believe bears stressing. Among the works with similar approach and comparable performance to ours, most seem to be unable to handle more than two tasks (e.g., transfer learning from two sources to a target ) without either a significant increase in complexity or some novel ideas. [1] would require a number of loss functions growing quadratically in the task number, and an even more demanding architecture than they already use. [2] would require a quadratically growing amount of discriminators, or else a novel idea to perform efficient domain adaptation for multiple tasks. It is even less clear how to extend [3] to such scenarios. \nIn contrast, the approach we propose handles this task in stride, simply adding a branch to the joint autoencoder. The experiments in Sec. 4.2.3 support this claim. 
We believe that this property of joint autoencoders is not matched by any comparable approach, and consider this to be a key advantage of the proposed method.\n\nIn addition, we are able to deal with a more flexible range of labeled sample sizes than the aforementioned papers, some of which are not capable of making immediate use of labeled data.\n\n[1] Bousmalis, K. et al. (2016). Domain separation networks. Advances in Neural Information Processing Systems 29 (NIPS 2016)\n[2] Liu, M.-Y. and Tuzel, O. (2016). Coupled generative adversarial networks. In Advances in Neural Information Processing Systems, pages 469–477.\n[3] Tzeng, E., Hoffman, J., Saenko, K., and Darrell, T. (2017). Adversarial discriminative domain adaptation. CoRR abs/1702.05464.\n" ]
[ 4, 5, 5, -1, -1, -1, -1 ]
[ 4, 3, 4, -1, -1, -1, -1 ]
[ "iclr_2018_S1tWRJ-R-", "iclr_2018_S1tWRJ-R-", "iclr_2018_S1tWRJ-R-", "H1cgp9qxf", "rkseIXolf", "HJzFmcNZM", "iclr_2018_S1tWRJ-R-" ]
iclr_2018_HkbJTYyAb
Convolutional Normalizing Flows
Bayesian posterior inference is prevalent in various machine learning problems. Variational inference provides one way to approximate the posterior distribution; however, its expressive power is limited, and so is the accuracy of the resulting approximation. Recently, there has been a trend of using neural networks to approximate the variational posterior distribution due to the flexibility of neural network architectures. One way to construct a flexible variational distribution is to warp a simple density into a complex one by normalizing flows, where the resulting density can be analytically evaluated. However, there is a trade-off between the flexibility of the normalizing flow and the computational cost of an efficient transformation. In this paper, we propose a simple yet effective architecture of normalizing flows, ConvFlow, based on convolution over the dimensions of the random input vector. Experiments on synthetic and real world posterior inference problems demonstrate the effectiveness and efficiency of the proposed method.
rejected-papers
Thank you for submitting your paper to ICLR. Although the revision has improved the paper, the consensus from the reviewers is that this is not quite ready for publication.
train
[ "HJNuic4gz", "By8otd_ef", "S1IcSyqxM", "SkQLkdTXM", "SJx9SdpXG", "ry0g4ua7M", "Bk5dG_pmf", "BylwwsmlG", "HyQ4crGlG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "In this paper, the authors propose a type of Normalizing Flows (Rezende and Mohamed, 2015) for Variational Autoencoders (Kingma and Welling, 2014; Rezende et al., 2014) they call Convolutional Normalizing Flows.\nMore particularly, it aims at extending on the Planar Flow scheme proposed in Rezende and Mohamed (2015). The authors notice an improvement through their method over Normalizing Flows, IWAE with diagonal gaussian approximation, and standard Variational Autoencoders. \nAs noted by AnonReviewer3, several baselines are missing. But the authors partly address that issue in the comment section for the MNIST dataset.\nThe requirement of h being bijective seems wrong. For example, if h was a rectifier nonlinearity in the zero-derivative regime, the Jacobian determinant of the ConvFlow would be 1. \nMore importantly, the main issue is that this paper might need to highlight the fundamental difference between their proposed method and Inverse Autoregressive Flow (Kingma et al., 2016). The proposed connectivity pattern proposed for the convolution in order to make the Jacobian determinant computation is exactly the same as Inverse Autoregressive Flow and the authors seems to be aware of the order dependence of their architecture which is every similar to autoregressive models. This presentation of the paper can be misleading concerning the true innovation in the model trained. Proposing ConvFlow as a type of Inverse Autoregressive Flow would be more accurate and would allow to highlight better the innovation of the work.\nSince this work does not offer additional significant insight over Inverse Autoregressive Flow, its value should be on demonstrating the efficiency of the proposed method. MNIST and Omniglot seems insufficient for that purpose given currently published work.\nIn the current state, I can't recommend the paper for acceptance. \n\n\nDanilo Jimenez Rezende, Shakir Mohamed: Variational Inference with Normalizing Flows. ICML 2015\nDanilo Jimenez Rezende, Shakir Mohamed, Daan Wierstra: Stochastic Back-propagation and Variational Inference in Deep Latent Gaussian Models. ICML 2014\nDiederik P. Kingma, Max Welling: Auto-Encoding Variational Bayes. ICLR 2014\nDiederik P. Kingma, Tim Salimans, Rafal Józefowicz, Xi Chen, Ilya Sutskever, Max Welling: Improving Variational Autoencoders with Inverse Autoregressive Flow. NIPS 2016", "The paper proposes to increase the expressivity of the variational approximation in VAEs using a new convolutional parameterization of normalizing flows. Starting from the planar flow proposed in Rezende & Mohammed 2015 using a vector inner product followed by a nonliniarity+element-wise scaling the authors suggests to replace inner product with a shifted 1-D convolution. This reduces the number of parameters used from 2*d to k + d and importantly still maintains the linear time computation of the determinant. This approach feels so straightforward that i’m surprised that it have not been tried before. The authors present results on a synthetic task as well as MNIST and OMNIGLOT. Please find some more detailed comments/questions below\n\n\nQ1) I feel that section 3 could be more detailed about how the convolution normalizing flow relate to normalizing flow, inverse autoregressive flow and the masked-convolution used in real NVP? Especifically a) is it correct that convolutional normalizing flow trades global connectivity for more expressivity locally? 
b) Can convolutional flow be seen as a faster but more restricted version of the LSTM-implemented inverse autoregressive flow (full lower triangular Jacobian vs. k off-diagonal elements per row in convolutional normalizing flow)? \n\nQ2) I miss some more baselines in the experimental section. Did the authors compare the convolutional normalizing flow with e.g. Inverse Autoregressive Flow or auxiliary latent variables? \n\n\nQ3) Albeit the MNIST results seem convincing - and to a lesser degree the OMNIGLOT ones - I miss results on larger natural image benchmark datasets like cifar10 and ImageNet or preferably other modalities like text. Would it be possible to include results on any of these datasets?\n\nOverall I think the idea is nice and potentially useful due to the ease of implementation and speed of convolutional operations. However I think the authors need to 1) better describe how their method differs from prior work and 2) compare their method to more baselines for the experiments to be fully convincing\n", "The authors propose a new method for improving the flexibility of the encoder in VAEs, called ConvFlow. If I understand correctly (please correct me if not) the proposed method is a simplification of Inverse Autoregressive Flow as proposed by Kingma et al. Both of these methods use causal convolution to construct a normalizing flow with a tractable Jacobian determinant. The difference is that Kingma et al. used 2d convolution (as well as fully connected architectures) where the authors of this paper propose to use 1d convolution. The novelty therefore seems limited.\n\nThe current version of the paper does not present convincing experimental results. The proposed method performs less well than previously proposed methods. If the authors were to update the experimental results to show equal or better performance to SOTA, with an analysis showing their method is indeed computationally less expensive, I would be willing to increase my rating.", "1. We added a section detailing the differences of ConvFlow with Inverse Autoregressive Flows (IAF) (Section 3.3);\n\n2. We added updated experimental results on MNIST and OMNIGLOT, as well as comparisons with IAF. (Section 4.2.2)", "Thank you for your comments and suggestions. Please find our response as follows:\n\n1. Regarding other types of normalizing flows, particularly Inverse Autoregressive Flow (IAF), we would like to discuss two major differences enjoyed by ConvFlow:\n a. The number of parameters required for IAF is O(d^2), where d is the input dimension, while ConvFlow only needs k+d, where k is the convolution kernel size, and typically k<d, due to the adoption of 1d convolution;\n b. More importantly, as we show in the paper revision, the autoregressive NN used in IAF involves a singular transformation, thus causing a subspace issue, which effectively limits the representative power of the resulting variable. \n\n The proposed ConvFlow is able to address the above two drawbacks of IAF and manages to achieve strong results. Please refer to Section 3.3 of the updated paper for a detailed discussion about the differences with IAF.\n\n2. We have updated with our latest experiments on MNIST and added a comparison to IAF based on the same VAE architecture. The latest experiments achieve even slightly better results than the best published ones, with a best NLL of 78.51 compared to 79.10 achieved by PixelRNN. It is also 1 nat better than the best reported IAF result.
\n\n Please refer to Section 4.2.2 for details about the updated experimental results.\n\n3. Thanks for your suggestions in conducting experiments on larger image datasets, and we have done some preliminary experiments on cifar10. We found out that the simple VAE architecture which achieves strong results on MNIST and OMNIGLOT doesn't give great results on cifar10 compared to PixelRNN, because natural images is much more complicated than MNIST, thus calls for a more sophisticated VAE and we are actively working on that. However, with the simple VAE, we still compare the performance of ConvFlow to IAF on cifar10, and we found out that ConvFlow still gives much better results than IAF. We didn't include the numbers in the paper, as a sophiscated VAE is needed and we plan to add them soon.\n ", "Thank you for your comments and suggestions. Please find our response as follows:\n\n1. You are right that the activation function in ConvFlow doesn't have to be bijective, as there is a skip link from z to the output to account for the bijection even when h returns 0. Thanks for pointing this out and we have updated our revision accordingly;\n\n2. We would like to clarify that ConvFlow is not a specific version of IAF, as there are two major differences enjoyed by ConvFlow:\n a. The number of parameters required for IAF is O(d^2), where d is the input dimension; while ConvFlow only needs k+d, where k is the convolution kernel size, and typically k<d, due to the adoption of 1d convolution\n b. More importantly, as we shown in the paper revision, the autoregressive NN used in IAF involves singular transformation, thus causing a subspace issue, which effectively limits the representative power for the resulting variable. \n\n The proposed ConvFlow is able to address the above two drawbacks of IAF and manages to achieve strong results. Please refer to Section 3.3 of the updated paper for a detailed discussion about the differences with IAF.\n\n3. We have updated with our latest experiments on MNIST and added comparison to IAF based on the same VAE architecture. The latest experiments achieves even slightly better results than thebest published ones, with a best NLL of 78.51 compared to 79.10 achieved by PixelRNN. Also it's also 1 nat better than the best reported IAF result. \n\n Please refer to Section 4.2.2 for details about the updated experimental results.\n ", "Thank you for your comments and suggestions. We would like to address your comments as follows:\n\n1. Regarding IAF, there are two major differences:\n a. The number of parameters required for IAF is O(d^2), where d is the input dimension; while ConvFlow only needs k+d, where k is the convolution kernel size, and typically k<d;\n b. More importantly, as we shown in the paper revision, the autoregressive NN used in IAF involves singular transformation, thus causing a subspace issue, which effectively limits the representative power for the resulting variable. \n\n The proposed ConvFlow is able to address the above two drawbacks of IAF and manages to achieve strong results. Please refer to Section 3.3 of the updated paper for a detailed discussion about the differences with IAF.\n\n2. We updated with our latest experiments on MNIST and add comparison to IAF based on the same VAE architecture,\n and our latest experiments achieves slightly better results compared to best published ones, with a best NLL of 78.51 compared to 79.10 achieved by PixelRNN. Also it's 1 nat better than the best IAF result. 
\n\n Please refer to Section 4.2.2 for details about updated results.\n ", "Thank your for your comment. The results of the methods mentioned in your comment are currently not included in the manuscript, because we would to emphasize that we are NOT putting our focus on optimizing for more sophisticated encoder and decoder architectures as these methods do, but rather on modeling a much richer family of variational posteriors to capture complex distribution of the latent codes on top of a standard encoder network. In fact, we only used a 2-layer MLP to model the mapping between the input data x to the initial Gaussian latent code z, which is then to be fed into the proposed ConvFlow network to construct complex posteriors. In other words, the proposed method and recent methods, including PixelVAE, PixelCNN, etc, are in orthogonal directions to improve generative modeling and can be potentially combined. \n\nEven with the aforementioned simple encoder and decoder network, our latest experimental results after this submission actually show that we are able to get a NLL of 78.51 with 8 layers of ConvBlock attached on top of the initial Gaussian encoder. This actually surpasses the reported best results of the above methods on statically bainarized MNIST (Deep IAF-VAE: 79.88 and PixelCNN: 79.2) which assumed a much more sophisticated encoder and decoder network (Deep ResNet-like architecture for IAF-VAE and CNN on pixel levels for PixelCNN).\n\nHowever, we fully agree that providing comparisons to those existing methods helps make the paper a complete story. We will update the new results in the revised paper and release the codes to reproduce the above results. ", "The conclusion of the paper says \"density estimates on MNIST show significant improvements over state-of-the-art methods\". This is misleading, as the results table ignores all recent results in this area. E.g. PixelVAE, Lossy VAE, PixelCNN, and IAF-VAE (some of which are cited) all obtain much better results. Is there any reason the proposed method should not be compared against these newer methods?" ]
[ 3, 5, 3, -1, -1, -1, -1, -1, -1 ]
[ 5, 4, 4, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_HkbJTYyAb", "iclr_2018_HkbJTYyAb", "iclr_2018_HkbJTYyAb", "iclr_2018_HkbJTYyAb", "By8otd_ef", "HJNuic4gz", "S1IcSyqxM", "HyQ4crGlG", "iclr_2018_HkbJTYyAb" ]
iclr_2018_rkhCSO4T-
Distributed non-parametric deep and wide networks
In recent work, it was shown that combining multi-kernel based support vector machines (SVMs) can lead to near state-of-the-art performance on an action recognition dataset (HMDB-51). In the present work, we show that combining distributed Gaussian Processes with multi-stream deep convolutional neural networks (CNNs) alleviates the need to augment a neural network with hand-crafted features. In contrast to prior work, we treat each deep convolutional neural network as an expert wherein the individual predictions (and their respective uncertainties) are combined within a Product of Experts (PoE) framework.
rejected-papers
Thank you for submitting your paper to ICLR. The consensus from the reviewers is that this is not quite ready for publication.
test
[ "rkG37k9xM", "SJ2q6Rqgf", "HJ_YkBz-G" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "- The paper is fairly written and it is clear what is being done\n- There is not much novelty in the paper; it combines known techniques and is a systems paper, so I \n would judge the contributions mainly in terms of the empirical results and messsage conveyed (see\n third point)\n- The paper builds on a previous paper (ICCV Workshops, https://arxiv.org/pdf/1707.06923.pdf),\n however, there is non-trivial overlap between the two papers, e.g. Fig. 1 seems to be almost the\n same figure from that paper, Sec 2.1 from the previous paper is largely copied \n- The message from the empirical validation is also not novel, in the ICCVW paper it was shown that\n the combination of different modalities etc. using a multiple kernel learning framework improved\n results (73.3 on HMDB51), while in the current paper the same message comes across with another\n kind of (known) method for combining different classifiers and modality (without iDT their best\n results are 73.6 for CNN+GP-PoE) ", "This paper, although titled \"Distributed Non-Parametric Deep and Wide Networks\", is mostly about fusion of existing models for action recognition on the HMDB51 dataset. The fusion is performed with Gaussian Processes, where each of the i=1,..,4 inspected models (TSN-Inception RGB, TSN-Inception Flow, ResNet-LSTM RGB, ResNet-LSTM Flow) returns a (\\mu_i, \\sigma_i), which are then combined in a product of experts formulation, optimized w.r.t. maximum likelihood.\n\nAt its current form this paper is unfit for submission. First, the novelty of the paper is not clear. It is stated that a framework is introduced for independent deep neural networks. However, this framework, the Gaussian Processes, already exists. Also, it is stated that the method can classify video snippets that have heterogeneity regarding camera angle, video quality, pose, etc. This is something characterizes also all other methods that report similar results on the same dataset. The third claim is that deep networks are combined with non-parameteric Bayesian models. That is a good claim, which is also shared between papers at http://bayesiandeeplearning.org/. The last claim is that model averaging taking into account uncertainty is shown to be useful. That is not true, the only result are the final accuracies per GP model, there is no experiment that directly reports any results regarding uncertainty and its contribution to the final accuracy.\n\nSecond, it is not clear that the proposed method is the one responsible for the reported improvements in the experiments. Currently, the training set is split into 7 sets, and each set is used to train 4 models, totalling 28 GP experts. It is unclear what new is learned by the 7 GP expert models for the 7 splits. Why is this better than training a single model on the whole dataset? Also, why is difference bigger between ResNet Fusion-1 and Resnet SVM-SingleKernel?\n\nThird, the method reports results only on a single dataset, HMDB51, which is also rather small. Deriving conclusions from results on a single dataset is suboptimal. Other datasets that can be considered are (mini) Kinetics or Charades.\n\nForth, the paper does not have the structure of a scientific publication. It rather looks like an unofficial technical report. There is no related work. The methodology section reads more like a tutorial of existing methods. 
And the discussion section is larger than any other section in the paper.\n\nAll in all, there might be some interesting ideas in the paper, specifically how to integrate GPs with deep nets. However, at the current stage the submission is not ready for publication.", "Summary: the paper considers an architecture combining neural networks and Gaussian processes to classify actions in a video stream for *one* dataset. The neural network part employs inception networks and residual networks. Upon pretraining these networks on RGB and optical flow data, the features at the final layer are used as inputs to a GP classifier. To sidestep the intractability, a model using a product of independent GP experts is used, each expert using a small subset of data and the Laplace approximation for inference and learning.\n\nAs it stands, I think the contributions of this paper is limited:\n\n* the paper considers a very specific architecture for a specific task (classifying actions in video streams) and a specific dataset (the HMDB-51 dataset). There is no new theoretical development.\n\n* the elements of the neural network architecture are not new/novel and, as cited in the paper, they have been used for action classification in Wang et al (2016), Ma et al (2017) and Sengupta and Qian (2017). I could not tell if there is any novelty on this part of the paper and it seems that the only difference between this paper and Sengupta and Qian (2017) is that Sengupta and Qian used SVM with multi-kernel learning and this paper uses GPs.\n\n* the paper considers a product of independent GP experts on the neural net features. It seems that combining predictions provided by the GPs helps. It is, however, not clear from the paper how the original dataset was divided into subsets.\n\n* it seems that the paper was written in a rush and many extensions and comparisons are only discussed briefly and left as future work, for example: using the Bayesian committee machine or modern sparse GP approximation techniques, end-to-end training and training with fewer training points." ]
[ 3, 3, 3 ]
[ 4, 5, 4 ]
[ "iclr_2018_rkhCSO4T-", "iclr_2018_rkhCSO4T-", "iclr_2018_rkhCSO4T-" ]
iclr_2018_HJjePwx0-
Better Generalization by Efficient Trust Region Method
In this paper, we develop a trust region method for training deep neural networks. At each iteration, the trust region method computes the search direction by solving a non-convex subproblem. Solving this subproblem is non-trivial---existing methods have only a sub-linear convergence rate. In the first part, we show that a simple modification of the gradient descent algorithm can converge to a global minimizer of the subproblem with an asymptotic linear convergence rate. Moreover, our method only requires Hessian-vector products, which can be computed efficiently by back-propagation in neural networks. In the second part, we apply our algorithm to train large-scale convolutional neural networks, such as VGG and MobileNets. Although the trust region method is about 3 times slower than SGD in terms of running time, we observe that it finds a model that has lower generalization (test) error than SGD, and this difference is even more significant in large batch training. We conduct several interesting experiments to support our conjecture that the trust region method can avoid sharp local minima.
rejected-papers
There are two parts to this paper (1) an efficient procedure for solving trust-region subproblems in second-order optimization of neural nets, and (2) evidence that the proposed trust region method leads to better generalization performance than SGD in the large-batch setting. In both cases, there are some promising leads here. But it feels like two separate papers here, and I'm not sure either individual contribution is well enough supported to merit publication in ICLR. For (1), the contribution is novel and potentially useful, to the best of my knowledge. But as there's been a lot of work on trust region solvers and second-order optimization of neural nets more generally, claims about computational efficiency would require comparisons against existing methods. The focus on efficiency also doesn't seem to fit with the experiments section, where the proposed method optimizes less efficiently than SGD and is instead meant to provide a regularization benefit. For (2), it's an interesting empirical finding that the method improves generalization, but the explanation for this is very hand-wavy. If second-order optimization in general turned out to help with sharp minima, this would be an interesting finding indeed, but it doesn't seem to be supported by other work in the area. The training curves in Table 1 are interesting, but don't really distinguish the claims of Section 4.5 from other possible hypotheses.
test
[ "B1BvO3Bez", "HybcslTyf", "HJK45bIez", "H1S9sm8MG", "S1CpgVUMz", "rJKPRmIfM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "**I am happy to see some good responses from the authors to my questions. I am raising my score a bit higher. \n\nSummary: \nA new stochastic method based on trust region (TR) is proposed. Experiments show improved generalization over mini-batch SGD, which is the main positive aspect of this paper. The main algorithm has not been properly developed; there is too much focus on the convergence aspects of the inner iterations, for which there are many good algorithms already in the optimization literature. There are no good explanations for why the method yields better generalization. Overall, TR seems like an interesting idea, but it has neither been carefully expanded or investigated. \n\nLet me state the main interesting results before going into criticisms:\n1. TR method seems to generalize better than mini-batch SGD. \n2. TR seems to lose generalization more gracefully than SGD when batch size is increased. [But note here that mini-batch SGD is not a closed chapter. With better ways of adjusting the noise level via step-size control (larger step sizes mean more noise) the loss of generalization associated with large mini-batch sizes can be brought down. See, for example: https://arxiv.org/pdf/1711.00489.pdf.]\n3. Hybrid method is even better. This only means that more understanding is needed as to how TR can be combined with SGD.\n\nTrust region methods are generally batch methods. Algorithm 1 is also stated from that thinking and it is a well-known optimization algorithm. The authors never mention mini-batch when Algorithm 1 is introduced. But the authors clearly have only the stochastic min-batch implementation of the algorithm in mind. \n\nOne has to wait till we go into the experiments section to read something like:\n\"Lastly, although in theory, we need full gradient and full Hessian to guarantee convergence, calculating them in each iteration is not practical, so we calculate both Hessian and gradient on subsampled data to replace the whole dataset\"\nfor readers to realize that the authors are talking about a stochastic mini-batch method. This is a bad way of introducing the main method. This stochastic version obviously requires a step size; so it would have been proper to state the stochastic version of the algorithm instead of the batch algorithm in Algorithm 1.\n\nInstead of saying that in passing why not explicitly state it in key places, including the abstract and title? I suggest TR be replaced by \"Stochastic TR\" everywhere. Also, what does \"step size\" mean in the TR method? I suggest that all these are fully clarified as parts of Algorithm 1 itself. \n\nTrust region subproblem (TRS) has been analyzed and developed so much in the optimization literature. For example, the conjugate gradient-based method leading to the Steihaug-Toint point is so much used. [Note: Here, the gradient refers to the gradient of the quadratic model, and it uses only Hessian-vector products.] http://www.ii.uib.no/~trond/publications/papers/trust.pdf. The authors spend so much effort developing their own algorithm! Also, in actual implementation, they only use a crude version of the inner algorithm for reasons of efficiency.\n\nThe paper does not say anything about the convergence of the full algorithm. How good are the trust region updates based on q_t given the huge variability associated with the mini-batch operation? 
The authors should look at several existing papers on stochastic trust region and stochastic quasi-Newton methods, e.g., papers from Katya Scheinberg (Lehigh) and Richard Byrd (Colorado)'s groups.\n\nThe best-claimed method of the method, called \"Hybrid method\" is also mentioned only in passing, and that too in a scratchy fashion (see end of subsec 4.3):\n\"To enjoy the best of both worlds, we also introduce a “hybrid” method in the Figure 3, that is, first run TR method for several epochs to get coarse solution and then run SGD for a while until fully converge. Our rule of thumb is, when the training accuracy raises slowly, run SGD for 10 epochs (because it’s already close to minimum). We find this “hybrid” method is both fast and accurate, for both small batch and large batch.\"\n\nExplanations of better generalization properties of TR over SGD are important. I feel this part is badly done in the paper. For example, there is this statement:\n\"We observe that our method (TR) converges to solutions with much better test error but\nworse training error when batch size is larger than 128. We postulate this is because SGD is easy to overfit training data and “stick” to a solution that has a high loss in testing data, especially with the large batch case as the inherent noise cannot push the iterate out of loss valley while our TR method can.\"\nFrankly, I am unable to decipher what is being said here.\n\nThere is an explanation indicating that switching from SGD to TR causes an uphill movement (which I presume, is due to the trust region radius r being large); but statements such as - this will lead to climbing over to a wide minimum etc. are too strong; no evidence is given for this.\n\nThere is a statement - \"even if the exact local minima is reached, the subsampled Hessian may still have negative curvature\" - again, there is no evidence.\n\nOverall, the paper only has a few interesting observations, but there is no good and detailed experimental analysis that help explain these observations.\n\nThe writing of the paper needs a lot of improvement.\n\n\n\n\n\n\n", "The paper proposes training neural networks using a trust region method, in which at each iteration a (non-convex) quadratic approximation of the objective function is found, and the minimizer of this quadratic within a fixed radius is chosen as the next iterate, with the radius of the trust region growing or shrinking at each iteration based on how closely the gains of the quadratic approximation matched those observed on the objective function. The authors claim that this approach is better at avoiding \"narrow\" local optima, and therefore will tend to generalize better than minibatched SGD. The main novelty seems to be algorithm 2, which finds the minimizer of the quadratic approximation within the trust region by performing GD iterations until the boundary is hit (if it is--it might not, if the quadratic is convex), and then Riemannian GD along the boundary.\n\nThe paper contains several grammatical mistakes, and in my opinion could explain things more clearly, particularly when arguing that the algorithm 2 will converge. 
I had particular difficulty accepting that the phase 1 GD iterates would never hit the boundary if the quadratic was strongly convex, although I accept that it is true due to the careful choice of step size and initialization (assumptions 1 and 2).\n\nThe central claim of the paper, that a trust region method will be better at avoiding narrow basins, seems plausible, since if the trust region is sufficiently large then it will simply pass straight over them. But if this is the case, wouldn't that imply that the quadratic approximation to the objective function is poor, and therefore that line 5 of algorithm 1 should shrink the trust region radius? Additionally, at some times the authors seem to indicate that the trust region method should be good at escaping from narrow basins (as opposed to avoiding them in the first place), see for example the left plot of figure 4. I don't see why this is true--the quadratic approximation would be likely to capture the narrow basin only.\n\nThis skepticism aside, the experiments in figure 2 do clearly show that, while the proposed approach doesn't converge nearly as quickly as SGD in terms of training loss, it does ultimately find a solution that generalizes better, as long as both SGD and TR use the same batch size (but I don't see why they should be using the same batch size). How does SGD with a batch size of 1 compare to TR with the batch sizes of 512 (CIFAR10) or 1024 (STL10)?\n\nSection 4.3 (Figure 3) contain a very nice experiment that I think directly explores this issue, and seems to show that SGD with a batch size of 64 generalizes better than TR at any of the considered batch sizes (but not as well as the proposed TR+SGD hybrid). Furthermore, 64 was the smallest batch size considered, but SGD was performing monotonically better as the batch size decreased, so one would expect it to be still better for 32, 16, etc.\n\nSmaller comments:\n\nYou say that you base the Hessian and gradient estimates on minibatched samples. I assume that the same is true for the evaluations of F on line 4 of Algorithm 1? Do these all use the same minibatch, at each iteration?\n\nOn the top of page 3: \"M is the matrix size\". Is this the number of elements, or the number of rows/columns?\n\nLemma 1: This looks correct to me, but are these the KKT conditions, which I understand to be first order optimality conditions (these are second order)? You cite Nocedal & Wright, but could you please provide a page number (or at least a chapter)?\n\nOn the top of page 5, \"Line 10 of Algorithm 1\": I think you mean Line 11 of Algorithm 2.", "The paper develops an efficient algorithm to solve the subproblem of the trust region method with an asymptotic linear convergence guarantee, and they demonstrate the performances of the trust region method incorporating their efficient solver in deep learning problems. It shows better generation errors by trust region methods than SGD in different tasks, despite slower running time, and the authors speculate that trust-region method can escape sharp minima and converge to wide minima and they illustrated that through some hybrid experiment.\nThe paper is organized well.\n\n1. The result in Section 4.3 empirically showed that Trust Region Method could escape from sharp local minimum. The results are interesting but not quite convincing. The terms about sharp and wide minima are ambiguous. 
At best, this provides a data point in an area that has received attention, but the lack of precision about sharp and wide makes it difficult to know what the more general conclusions are. It might help to show the distance between the actual model parameters that those algorithms converge to.\n\n2. As is well known, VGG16 with a proper training strategy (learning rate decay) could achieve at least 92 percent accuracy. In the paper, the authors only got around 83 percent accuracy with SGD and 85 percent accuracy with TR. Why is this?\n\n3. In section 4.2, it said \"Although we can also define Hessian on ReLU function, it is not well supported on major platforms (Theano/PyTorch). Likewise, we find max-pooling is also not supported by platforms to calculate higher order derivative, one way to walk around is to change all the max-pooling layers to avg- pooling, it hurts accuracy a little bit, albeit this is not our primary concern.\" It is my understanding that Pytorch support higher order derivative both for ReLu and Max-pooling. Hence, it is not an explanation for not using ReLu and Max-pooling. Please clarify\n\n4. In section 4.3, the authors claimed that numerical differentiation only hurts 1 percent error for second derivative. Please provide numerical support.\n\n5. The setting of numerical experiments is not clear, e.g. value of N1 and N2. This makes it hard to reproduce results.\n\n6. It's not clear whether this is a theoretical paper or an empirical paper. For example, there is a lot of math, but in Section 4.5 the authors seem to hedge and say \"We give an intuitive explanation ... and leave the rigorous analysis to future works.\" Please clarify.\n\n", "Thanks for your comments! We updated our submission according to your suggestions.\n\n1. “The results are interesting but not quite convincing. The terms about sharp and wide minima are ambiguous. At best, this provides a data point in an area that has received attention, but the lack of precision about sharp and wide makes it difficult to know what the more general conclusions are. It might help to show the distance between the actual model parameters that those algorithms converge to.”\n---\nTo show more evidence of the wide/sharp local minima, we added a similar experiment in section 4.4 (Figure 4) following [Keskar, 2016] that shows the loss and accuracy curves. We believe this can serve as direct evidence of sharpness. Also, as you suggested, we calculated the distance between those models in Figure 4. As expected, the model computed by Hybrid is very close to TR, and both of them are far away from the model computed by SGD. This indicates that the Hybrid/TR models are quite different from the SGD model. \n\n2. “In the paper, the authors only got around 83 percent accuracy with SGD and 85 percent accuracy with TR. Why is this?”\n---\nFor small batch (B=128), both of these methods get 87%-88% accuracy, and ADAM has 86.6% accuracy. You said SGD has only 83% accuracy, so we think you were probably looking at the large batch case. \nThe loss of accuracy is because: 1) we didn’t do data augmentation, 2) we replaced max-pooling with avg-pooling and ReLU with our proposed SReLU. Both of them turn out to be worse than the original layers. However, this accuracy is still decent compared with the results in [Keskar 2016] (https://openreview.net/pdf?id=H1oyRlYgg), which uses standard VGG16 and has 89.24% accuracy.\n\n3. “It is my understanding that Pytorch support higher order derivative both for ReLu and Max-pooling.
Hence, it is not an explanation for not using ReLu and Max-pooling. Please clarify.”\n---\nAlthough Pytorch support higher order derivative, we tested it by comparing the numerical computation of Hv with the auto-differentiation computed by Pytorch (0.2.0), and we find the results are inconsistent if the network contains ReLU and Max-Pooling. To be safe, we changed those two layers to make sure both methods give the same value. \nThe inconsistency might be due to numerical issues or some bugs in higher order auto-differentiation in Pytorch 0.2.0. \n\n4. “In section 4.3, the authors claimed that numerical differentiation only hurts 1 percent error for second derivative. Please provide numerical support.”\n---\nWe guess you mean “Experiments show that the relative error is controllable (~1%)”. This is measured by || Hv_analytic - Hv_numeric || / || Hv_analytic || < 0.01. For the fixed network, we calculate Hv product by both forward-backward and numerical differentiation, and then compute the average error (defined above) by randomly choosing many vectors v. \n\n5. “The setting of numerical experiments is not clear, e.g. value of N1 and N2. This makes it hard to reproduce results.”\n---\nN1 and N2 are hyperparameters that depends on the problem scale. For the settings in our deep neural net experiments, the choice is discussed in the third paragraph of page 8, actually we do two inner iterations (N1=N2=1). We will release our code after reviewing process so the experiments will be reproducible. \n\n6. “It's not clear whether this is a theoretical paper or an empirical paper. For example, there is a lot of math, but in Section 4.5 the authors seem to hedge and say \"We give an intuitive explanation ... and leave the rigorous analysis to future works.\" Please clarify.”\n---\nThe first part of our paper proposes a new subproblem solver that is theoretically faster than [Hazan, 2014](https://arxiv.org/abs/1401.6757). Our method converges linearly while [Hazan 2014] converges sublinearly, and the proof for this part is rigorous (so it is quite theoretical for this part). \nThe second part is mainly about empirical findings when we apply our method to deep neural networks, and we think this is as important as the first part. Since few papers about second order method evaluate their algorithms on deep networks, it is still unclear whether second order methods are useful to practitioners. However, currently we don’t have theoretical justification of why our method can avoid sharp local minima, so we only provide empirical evidence. Therefore for the second part we mentioned “We give an intuitive explanation of why trust region method avoids sharp local minima, and leave the rigorous analysis to future works.”\nBut please note that in our updated version, we removed Sec. 4.5 since it doesn't have either theoretical or empirical guarantee.", "Thanks for your comments! We updated our submission according to your suggestions. \n\nWe think you may have two major concerns: one is about how our trust region method avoids sharp minima, another is about the batch size. Here is our clarification:\n\n1. “I had particular difficulty accepting that the phase 1 GD iterates would never hit the boundary if the quadratic was strongly convex”. \n---\nActually we claimed that “If the global minimum lies inside of the sphere then gradient descent itself is guaranteed to find it” (page 4, after the algorithm box). 
This means if the quadratic is strongly convex and the minimum lies in the interior of the ball, gradient descent in phase 1 will converge to it without hitting the boundary. This is because when we already know minimizer lies inside of the sphere, then the constraint ||s||<=1 is useless, so GD is equivalent to Prox-GD, and furthermore in theorem 5 (proven in appendix A5) we know the norm of GD iterate {||z_t||} is non-decreasing under our assumptions. If at some T, ||x_T||>=1 then ||x^*||<||x_T||, this contradicts to non-decreasing property. So we know it would never hit the boundary.\n\n2. Honestly we don’t have theoretical explanations as to why stochastic TR method is better at generalization, since our claims are mostly based on empirical findings. However this is verified in all our experiments. To make it more clear, we plot the landscape of the function in Figure 4 in the revised version, and find that TR/Hybrid method converges to wide local minimums while SGD converges to a sharp local minimum. As an intuitive explanation, we think one reason might be the larger noise brought by subsampled Hessian.\n\n3. “I don't see why they should be using the same batch size”\n---\nSince the main point of our experiments is to compare the generalization ability across different algorithms, the effect of batch size should be controlled. Furthermore, under the same batch size, SGD and TR have similar computational costs per iteration and have the same level of parallelism. Also, this paper concerns especially about large batch case, since it is widely observed that SGD on large batch tends to find sharp minimum. We found our TR/hybrid method can mitigate this problem. \n\n4. “How does SGD with a batch size of 1 compare to TR with the batch sizes of 512 (CIFAR10) or 1024 (STL10)”?\n---\nSince deep networks rely heavily on batch normalization layer, it is not good to run batch size of 1. But we do have smaller batch result: for batch size=16 (which is nearly the minimum samples to make batch norm work), the accuracy lies between 83.3% on cifar-10(VGG), which is worse than SGD with batch size 64. \n\n5. “Comparing accuracy on smaller batch size, like 32, 16, etc.”\n---\nThis is partly answered above that large batch case is more interesting and more actively researched. When we want to make use of many GPUs, one has to do large batch training. Even on smaller batch, the hybrid method still outperforms SGD, but the gap becomes small. In fact, when further decreasing the batch size, the test accuracy will drop, so the small batch size is not that interesting. For example, when batch size=16, accuracy of SGD is 83.3%, while Hybrid method is 83.4%. Typically the batch size is set to be 64~128 for one GPU, and larger (e.g., 1024 or larger) when training on multiple GPUs. \n\nAs to your smaller comments:\n1. “Evaluation of f(x_t)”. \n---\nPlease note that algorithm 1 is a very basic trust region method, we put it on to give readers a example of how our subproblem solver can be used. Actually, there are some stochastic version, for example:\n“Blanchet, J., Cartis, C., Menickelly, M. and Scheinberg, K., 2016. Convergence rate analysis of a stochastic trust region method for nonconvex optimization. arXiv preprint”\nWhere they allow an estimation of function value, for example, calculated on minibatch. In our experiment, we use the same minibatch as computing gradient and Hessian.\n(In the updated version, we modify algorithm 1 to include this stochastic version)\n\n2. 
M is the number of non-zero elements of matrix, or equivalently, complexity of matrix vector product.\n \n3. Lemma 1 come from [Nocedal&Wright] Theorem 4.1, page 70 (in 2nd version).\n\n4. Thanks for pointing out! We have fixed the typo in the updated version.", "Thanks for your comments! We updated our submission according to your suggestions.\n\n[Ref1] Kohler, J.M. and Lucchi, A. Sub-sampled Cubic Regularization for Non-convex Optimization. ICML 2017.\n[Ref2] Blanchet, J., Cartis, C., Menickelly, M. and Scheinberg, K., 2016. Convergence rate analysis of a stochastic trust region method for nonconvex optimization. arXiv preprint\n[Ref3] Xu, P., Roosta-Khorasani, F. and Mahoney, M.W., 2017. Newton-type methods for non-convex optimization under inexact hessian information. arXiv preprint.\n[Ref4] Hazan, E. and Koren, T., 2016. A linear-time algorithm for trust region problems. Mathematical Programming, 158(1-2), pp.363-381.\n\n1. First of all, let us explain the motivation of our work, this is not an article about the whole trust region method; like you said, its stochastic version with convergence rate is already developed in [Ref 1,2,3]. Independently, our innovation is mainly on developing the new solver for the trust region subproblem. For solving the trust region subproblem, several approximation methods such as Steihaug and Dogleg methods mentioned in section 1 and section 2.1 are used, but they can’t converge to the global minimum in nonconvex problems. More recently [Ref 4] proposed algorithms that converge to \\epsilon-suboptimal solution in O(1/\\sqrt{\\epsilon}) time. We propose a new trust region subproblem solver (Algorithm 2), which converges faster than [Ref 4] in theory (linear convergence, O(log(1/\\epsilon)) time), and we give rigorous theoretical proof for this subproblem solver. \n\n2. “it would have been proper to state the stochastic version of the algorithm instead of the batch algorithm in Algorithm 1.”\n---\nAs to the organization of paper: The stochastic version of trust region method is very similar to the classical method, except that the stochastic one uses approximation of function value/gradient/Hessian. In the updated version, we elaborate more on stochastic version in Section 2.1 and mention in Algorithm 1 that g, H can be stochastic gradient and Hessian. \n\n3. “The convergence of full-algorithm”. \n---\nSince we do not specify the outer trust region method, the convergence property of the outer loop is guaranteed by standard analysis such as [Ref 1,2,3]. Our focus is on developing a better sub-problem solver, and apply the trust region method to solve deep neural networks. We have added the discussion of convergence rate in section 2.1. \n\n4. “The best-claimed method of the method, called \"Hybrid method\" is also mentioned only in passing, and that too in a scratchy fashion”\n---\nWe agree that the hybrid method is important and should be formally introduced before experiments. We have added more details at the beginning of section 4. \n\n5. “but statements such as - this will lead to climbing over to a wide minimum etc. are too strong; no evidence is given for this.”\n---\nWe notice that AnonReviewer 3 also has such concern, so we added another experiment in section 4.4 and Figure 4 to examine the wideness of local minimum, the result supports our claim that TR indeed helps to get a much wider minimum. We also calculate the distance of model parameters, we find solution of Hybrid method is very close to TR method, and both are far from SGD method. 
This supports our claim that Hybrid method is a refinement of TR method, and TR can escape the sharp local minimum.\n\n6. “Frankly, I am unable to decipher what is being said here.”\n---\nWe have revised the sentence. We just want to explain “We observe that our method (TR) converges to solutions with much better test error when batch size is larger than 128. We postulate this is because SGD does not have enough noise to escape from a sharp local minimum, especially when large batch is used. “\n\n7. “There is a statement - \"even if the exact local minima is reached, the subsampled Hessian may still have negative curvature\" - again, there is no evidence.”\n---\nYes, we agree that more evidence is needed to support this claim. Actually this is just our guess --- we think even in the local minimum where full Hessian is positive definite, the subsampled Hessian will not be positive since it is just a very rough approximation. We have withdrawn this claim. We also removed section 4.6 in our original version since it was just our intuition and guess but not very rigorous. " ]
[ 6, 5, 6, -1, -1, -1 ]
[ 5, 2, 3, -1, -1, -1 ]
[ "iclr_2018_HJjePwx0-", "iclr_2018_HJjePwx0-", "iclr_2018_HJjePwx0-", "HJK45bIez", "HybcslTyf", "B1BvO3Bez" ]
iclr_2018_Byk4My-RZ
Flexible Prior Distributions for Deep Generative Models
We consider the problem of training generative models with deep neural networks as generators, i.e. to map latent codes to data points. Whereas the dominant paradigm combines simple priors over codes with complex deterministic models, we argue that it might be advantageous to use more flexible code distributions. We demonstrate how these distributions can be induced directly from the data. The benefits include: more powerful generative models, better modeling of latent structure and explicit control of the degree of generalization.
rejected-papers
This paper presents a method for learning more flexible prior distributions for GANs by learning another distribution on top of the latent codes for training examples. It's reminiscent of layerwise training of deep generative models. This seems like a reasonable thing to do, but it's probably not a substantial enough contribution given that similar things have been done for various other generative models. Experiments show improvement in samples compared with a regular GAN, but don't compare against various other techniques that have been proposed for fixing mode dropping. For these reasons, as well as various issues pointed out by the reviewers, I don't recommend acceptance.
train
[ "H1k_ZpFlf", "SyoujCYgG", "S1TJm_gZG", "SJ31YST7f", "ByCPOSp7M", "BJwavHp7z" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "Summary:\n\nThe paper proposes to learn new priors for latent codes z for GAN training. for this the paper shows that there is a mismatch between the gaussian prior and an estimated of the latent codes of real data by reversal of the generator . To fix this the paper proposes to learn a second GAN to learn the prior distributions of \"real latent code\" of the first GAN. The first GAN then uses the second GAN as prior to generate the z codes. \n \nQuality/clarity:\n\nThe paper is well written and easy to follow.\n\nOriginality:\n\npros:\n-The paper while simple sheds some light on important problem with the prior distribution used in GAN.\n- the second GAN solution trained on reverse codes from real data is interesting \n- In general the topic is interesting, the solution presented is simple but needs more study\n\ncons:\n\n- It related to adversarial learned inference and BiGAN, in term of learning the mapping z ->x, x->z and seeking the agreement. \n- The solution presented is not end to end (learning a prior generator on learned models have been done in many previous works on encoder/decoder)\n\nGeneral Review:\n\nMore experimentation with the latent codes will be interesting:\n\n- Have you looked at the decay of the singular values of the latent codes obtained from reversing the generator? Is this data low rank? how does this change depending on the dimensionality of the latent codes? Maybe adding plots to the paper can help.\n\n- the prior agreement score is interesting but assuming gaussian prior also for the learned latent codes from real data is maybe not adequate. Maybe computing the entropy of the codes using a nearest neighbor estimate of the entropy can help understanding the entropy difference wrt to the isotropic gaussian prior?\n\n- Have you tried to multiply the isotropic normal noise with the learned singular values and generate images from this new prior and compute inceptions scores etc? Maybe also rotating the codes with the singular vector matrix V or \\Sigma^{0.5} V?\n\n- What architecture did you use for the prior generator GAN?\n\n- Have you thought of an end to end way to learn the prior generator GAN? \n\n****** I read the authors reply. Thank you for your answers and for the SVD plots this is helpful. *****\n\n", "The paper demonstrates the need and usage for flexible priors in the latent space alongside current priors used for the generator network. These priors are indirectly induced from the data - the example discussed is via an empirical diagonal covariance assumption for a multivariate Gaussian. The experimental results show the benefits of this approach. \nThe paper provides for a good read. \n\nComments:\n\n1. How do the PAG scores differ when using a full covariance structure? Diagonal covariances are still very restrictive. \n2. The results are depicted with a latent space of 20 dimensions. It will be informative to see how the model holds in high-dimensional settings. And when data can be sparse. \n3. You could consider giving the Discriminator, real data etc in Fig 1 for completeness as a graphical summary. 
\n", "The paper proposes, under the GAN setting, mapping real data points back to the latent space via the \"generator reversal\" procedure on a sample-by-sample basis (hence without the need of a shared recognition network) and then using this induced empirical distribution as the \"ideal\" prior targeting which yet another GAN network might be trained to produce a better prior for the original GAN.\n\nI find this idea potentially interesting but am more concerned with the poorly explained motivation as well as some technical issues in how this idea is implemented, as detailed below.\n\n1. Actually I find the entire notion of an \"ideal\" prior under the GAN setting a bit strange. To start with, GAN is already training the generator G to match the induced P_G(x) (from P(z)) with P_d(x), and hence by definition, under the generator G, there should be no better prior than P(z) itself (because any change of P(z) would then induce a different P_G(x) and hence only move away from the learning target).\n\nI get it that maybe under different P(z) the difficulty of learning a good generator G can be different, and therefore one may wish to iterate between updating G (under the current P(z)) and updating P(z) (under the current G), and hopefully this process might converge to a better solution. But I feel this sounds like a new angle and not the one that is adopted by the authors in this paper.\n\n2. I think the discussions around Eq. (1) are not well grounded. Just as you said right before presenting Eq. (1), typically the goal of learning a DGM is just to match Q_x with the true data distrubution P_x. It is **not** however to match Q(x,z) with P(x,z). And btw, don't you need to put E_z[ ... ] around the 2nd term on the r.h.s. ?\n\n3. I find the paper mingles notions from GAN and VAE sometimes and misrepresents some of the key differences between the two.\n\nE.g. in the beginning of the 2nd paragraph in Introduction, the authors write \"Generative models like GANs, VAEs and others typically define a generative model via a deterministic generative mechanism or generator ...\". While I think the use of a **deterministic** generator is probably one of the unique features of GAN, and that is certainly not the case with VAE, where typically people still need to specify an explicit probabilistic generative model.\n\nAnd for this same reason, I find the multiple references of \"a generative model P(x|z)\" in this paper inaccurate and a bit misleading.\n\n4. I'm not sure whether it makes good sense to apply an SVD decomposition to the \\hat{z} vectors. It seems to me the variances \\nu^2_i shall be directly estimated from \\hat{z} as is. Otherwise, the reference \"ideal\" distribution would be modeling a **rotated** version of the \\hat{z} samples, which imo only introduces unnecessary discrepancies.\n\n5. I don't quite agree with the asserted \"multi-modal structure\" in Figure 2. Let's assume a 2d latent space, where each quadrant represents one MNIST digit (e.g. 1,2,3,4). You may observe a similar structure in this latent space yet still learn a good generator under even a standard 2d Gaussian prior. I guess my point is, a seemingly well-partitioned latent space doesn't bear an obvious correlation with a multi-modal distribution in it.\n\n6. The generator reversal procedure needs to be carried out once for each data point separately, and also when the generator has been updated, which seems to be introducing a potentially significant bottleneck into the training process.", "Thanks for the comments. 
Please see our responses below.\n\n1. How do the PAG scores differ when using a full covariance structure? Diagonal covariances are still very restrictive.\n\nWe have attempted to use full covariances, but more often than not, we ran into numerical issues that made the resulting scores unusable. Note that the use of diagonal covariances for calculating the scores is purposefully chosen to be just a single step in complexity above the naive prior.\n\n\n2. The results are depicted with a latent space of 20 dimensions. It will be informative to see how the model holds in high-dimensional settings. And when data can be sparse. \n\nThe improvement gained from using PGAN slightly decreases in higher dimensions (we have tried up to 200) in terms of visual results, simply because the data induced prior becomes less complex in higher dimensions. However, a discrepancy between the naive prior and the data induced prior remains and is equally measurable.\n\n\n3. You could consider giving the Discriminator, real data etc in Fig 1 for completeness as a graphical summary.\n\nWe originally designed the Figure as you suggested but found the graphic to be too cluttered. Since we assume basic familiarity with GANs throughout the text, we therefore decided to use the “simplified” version provided in our submission.", "Thank you for the comments. We invite you to have a look at our appendix, which now includes experiments you suggested.\n\n- It related to adversarial learned inference and BiGAN, in term of learning the mapping z ->x, x->z and seeking the agreement. \n\nWe agree that there is a relation, but also there are fundamental differences in our motivation and the approach itself. Most importantly, we do not learn the mapping x -> z, but we instead rely on a deterministic procedure for doing so.\n\n\n- Have you looked at the decay of the singular values of the latent codes obtained from reversing the generator? Is this data low rank? how does this change depending on the dimensionality of the latent codes? Maybe adding plots to the paper can help.\n\nWe have updated the paper to include plots of the distribution of singular values in different dimensional latent spaces (see appendix, figure 8). It appears that the reconstructed latent codes are not low rank, agreeing with what one would expect from a well-trained generator.\n\n\n- the prior agreement score is interesting but assuming gaussian prior also for the learned latent codes from real data is maybe not adequate. Maybe computing the entropy of the codes using a nearest neighbor estimate of the entropy can help understanding the entropy difference wrt to the isotropic gaussian prior?\n\nWe have experimented with nearest neighbor methods, but found them unreliable for high-dimensional spaces. Note that the reason we use a diagonal gaussian for the PAG scores is not that we propose this to be the best prior, but because it is a single step in complexity above the naive prior. If we find a discrepancy between these two, than we also know that the naive prior is inferior to any even more complex prior.\n\n\n- Have you tried to multiply the isotropic normal noise with the learned singular values and generate images from this new prior and compute inceptions scores etc? 
Maybe also rotating the codes with the singular vector matrix V or \\Sigma^{0.5} V?\n\nAs mentioned, our intention is not to use the non-isotropic gaussian as a prior in practice, but we have indeed tried this and have not found a significant improvement in either inception scores or visual results.\n\n\n- What architecture did you use for the prior generator GAN?\n\nWe briefly describe this in the appendix to be four fully connected layers. We’ve updated the section to clarify that the rest of the architecture (nonlinearities, batch norm, etc.) matches the original GAN.\n\n\n- Have you thought of an end to end way to learn the prior generator GAN? \nIt is certainly possible to learn the data induced prior continuously along with the training procedure and we have had good results when trying this ourselves. However, this requires running the reversal procedure in a continuous fashion, rather than just once, and introduces an impractical overhead. Further, we regard such a procedure as a separate contribution from this paper.\n", "Thank you for the detailed feedback. We have made changes to the writeup and would like to address your comments below:\n\n1. Notion of “Ideal” prior:\n\nWe do agree that using the terminology “ideal prior” to refer to the data induced prior might cause confusions and we have now adjusted the writeup accordingly.\nHowever, we disagree with the statement “there should be no better prior than P(z) itself” where P(z) refers to what we call “naive” prior. The reason is that the generator does not have infinite capacity to map any distribution to any other distribution, but is restricted by its architecture and by the training procedure. We highlighted the resulting discrepancy in our experiments by showing that there exist “empty” regions under the naive prior (figure 3).\nFor a perfect generator, moving away from the naive prior would indeed move the generated data away from the learning target, but in practice, we have shown that replacing the naive prior with the data induced prior can actually improve the results significantly (figure 5).\n\n\n1.5 “one may wish to iterate between updating G (under the current P(z)) and updating P(z) (under the current G), and hopefully this process might converge to a better solution.”\n\nThis is indeed a valid procedure and we have done this successfully, but we would like to keep the contribution of this paper focused to justifying a single step in this procedure and therefore did not include these results.\n\n\n2. I think the discussions around Eq. (1) are not well grounded.\n\nWe implicitly argue that matching the joint distributions relates to matching the marginals.\nIndeed, the KL divergence between the joint distributions is trivially a lower bound on the KL divergence between the marginals and since training the generator to convergence will minimize the conditional KL, further improvement can only be made by matching the priors.\n\n2.5 don't you need to put E_z[ ... ] around the 2nd term on the r.h.s. ?\n\nAbsolutely. We have updated the writeup.\n\n\n3. the paper mingles notions from GAN and VAE sometimes\n\nWe have updated the writeup to focus our discussion on GANs (expect in the first paragraph). \n\n\n4. I'm not sure whether it makes good sense to apply an SVD decomposition to the \\hat{z} vectors. It seems to me the variances \\nu^2_i shall be directly estimated from \\hat{z} as is. 
Otherwise, the reference \"ideal\" distribution would be modeling a **rotated** version of the \\hat{z} samples, which imo only introduces unnecessary discrepancies.\n\nThe SVD is only used to compute the prior agreement score and the use of it is resulting from the definition of the KL between multivariate normals. When we learn the data induced prior, our targets are the reconstructed latent codes as is.\n\n\n5. I don't quite agree with the asserted \"multi-modal structure\" in Figure 2. Let's assume a 2d latent space, where each quadrant represents one MNIST digit (e.g. 1,2,3,4). You may observe a similar structure in this latent space yet still learn a good generator under even a standard 2d Gaussian prior. I guess my point is, a seemingly well-partitioned latent space doesn't bear an obvious correlation with a multi-modal distribution in it.\n\nWe agree with your statement, but Figure 2 shows a latent space that is not only well-partitioned, but also has empty regions that shouldn’t be empty under the original prior. If there are regions in the latent space that are never used when explicitly reconstructing the data manifold, but the generator samples from all regions equally when learning to match the same data manifold, there must be a multi-modal structure that disagrees with the given prior.\n\n\n6. The generator reversal procedure needs to be carried out once for each data point separately, and also when the generator has been updated, which seems to be introducing a potentially significant bottleneck into the training process.\n\nThe reversal procedure is carried out once per data point indeed, but this only happens once, after the generator has finished training using the naive prior. In addition, this can be carried out using very large batches of data (since no learning takes place during reversal). Thus, the overhead essentially amounts to one large-batch pass over the data in the entire duration of learning.\n" ]
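The prior-agreement (PAG) discussion in the responses above relies on the closed-form KL divergence between multivariate normals. As a hedged illustration only (not the paper's exact PAG definition), the diagonal, zero-mean case reduces to a per-dimension sum, where the symbols ν_i are assumed here to be the fitted standard deviations of the recovered latent codes:

```latex
% Illustrative only: KL from a zero-mean diagonal-Gaussian fit of the recovered latent
% codes to the naive isotropic prior N(0, I_d); the nu_i are assumed fitted scales.
\[
D_{\mathrm{KL}}\!\left(\mathcal{N}\big(0,\operatorname{diag}(\nu_1^2,\dots,\nu_d^2)\big)\,\big\|\,\mathcal{N}(0,I_d)\right)
= \frac{1}{2}\sum_{i=1}^{d}\left(\nu_i^{2}-1-\ln\nu_i^{2}\right)
\]
```

This quantity vanishes exactly when every ν_i = 1, i.e. when the data-induced prior agrees with the naive prior within this restricted Gaussian family.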
[ 6, 6, 5, -1, -1, -1 ]
[ 4, 3, 4, -1, -1, -1 ]
[ "iclr_2018_Byk4My-RZ", "iclr_2018_Byk4My-RZ", "iclr_2018_Byk4My-RZ", "SyoujCYgG", "H1k_ZpFlf", "S1TJm_gZG" ]
iclr_2018_H1rRWl-Cb
An information-theoretic analysis of deep latent-variable models
We present an information-theoretic framework for understanding trade-offs in unsupervised learning of deep latent-variable models using variational inference. This framework emphasizes the need to consider latent-variable models along two dimensions: the ability to reconstruct inputs (distortion) and the communication cost (rate). We derive the optimal frontier of generative models in the two-dimensional rate-distortion plane, and show how the standard evidence lower bound objective is insufficient to select between points along this frontier. However, by performing targeted optimization to learn generative models with different rates, we are able to learn many models that can achieve similar generative performance but make vastly different trade-offs in terms of the usage of the latent variable. Through experiments on MNIST and Omniglot with a variety of architectures, we show how our framework sheds light on many recently proposed extensions to the variational autoencoder family.
rejected-papers
This paper gives a coding theory interpretation of VAEs and uses it to motivate an additional knob for tuning and evaluating VAEs: namely, the tradeoff between the rate and the distortion. This is a useful set of dimensions to investigate, and past work on variational models has often found it advantageous to penalize the latent variable and observation coding terms differently, for broadly similar motivations. This paper includes some careful experiments analyzing this tradeoff for various VAE formulations, and provides some interesting visualizations. However, as the reviewers point out, it's difficult to point to a single clear contribution here, as the coding theory view of variational inference is well established, and the VAE case has been discussed in various other works. Therefore, I recommend rejection.
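The rate/distortion vocabulary used throughout this record can be summarized in one standard identity. The sketch below is a restatement consistent with how the reviews below define R and D (signs chosen so both terms are non-negative); it is not the paper's own notation:

```latex
% q(z|x): encoder, p(z): prior, p(x|z): decoder; outer expectations are over the data.
\[
R \;=\; \mathbb{E}_{x}\!\left[D_{\mathrm{KL}}\big(q(z\mid x)\,\|\,p(z)\big)\right],
\qquad
D \;=\; -\,\mathbb{E}_{x}\,\mathbb{E}_{q(z\mid x)}\!\left[\log p(x\mid z)\right],
\]
\[
-\mathrm{ELBO} \;=\; D + R,
\qquad
\mathcal{L}_{\beta\text{-VAE}} \;=\; D + \beta R,
\qquad
H - D \;\le\; I(x;z) \;\le\; R.
\]
```

Here H is the entropy of the data distribution; the last (sandwich) inequality is the one quoted in the reviews below, and rearranging it gives the feasible region H ≤ R + D that defines the rate-distortion plane.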
train
[ "SkPsD5dxz", "HJoM4eclG", "H1Je5milG", "Skkvvu3mM", "rJbQPO37f", "Hk83yunQf", "B1Gpxdn7M", "BJL5eunmz", "S10iNmsJM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "public" ]
[ "Summary:\n\nThis paper optimizes the beta-VAE objective and analyzes the resulting models in terms of the two components of the VAE loss: the reconstruction error (which the authors refer to as distortion, “D”) and the KL divergence term (which the authors refer to as rate, “R”). Various VAEs using either PixelCNN++ or a simpler model for the encoder, decoder, or marginal distribution of a VAE are trained on MNIST (with some additional results on OMNIGLOT) and analyzed in terms of samples, reconstructions, and their rate-distortion trade-off.\n\nReview:\n\nI find it difficult to point my finger to novel conceptual or theoretical insights in this paper. The idea of maximizing information for unsupervised learning of representations has of course been explored a lot (e.g., Bell & Sejnowski, 1995). Deeper connections between variational inference and rate-distortion have been made before (e.g., Balle et al., 2017; Theis et al., 2017), while this paper merely seems to rename the reconstruction and KL terms of the ELBO. Variational lower and upper bounds on mutual information have been used before as well (e.g., Barber & Agakov, 2003; Alemi et al., 2017), although they are introduced like new results in this paper. The derived “sandwich equation” only seems to be used to show that H - D - R <= 0, which also follows directly from Gibbs’ inequality (since the left-hand side is a negative KL divergence). The main contributions therefore seem to be the proposed analysis of models in the R-D plane, and the empirical contribution of analyzing beta-VAEs.\n\nBased on the R-D plots, the authors identify a potential problem of generative models, namely that none of the trained models appear to get close to the “auto-encoding limit” where the distortion is close zero. Wouldn’t this gap easily be closed by a model with identity encoder, identity decoder, and PixelCNN++ for the marginal distribution? Given that autoregressive models generally perform better than VAEs in terms of log-likelihood, the model’s performance would probably be closer to the true entropy than the ELBO plotted in Figure 3a). What about increasing the capacity of the used in this paper? This makes me wonder what exactly the R-D plot can teach us about building better generative models.\n\nThe toy example in Figure 2 is interesting. What does it tell us about how to build our generative models? Should we be using powerful decoders but a lower beta?\n\nThe authors write: “we are able to learn many models that can achieve similar generative performance but make vastly different trade-offs in terms of the usage of the latent variable”. Yet in Figure 3b) it appears that changing the rate of a model can influence the generative performance (ELBO) quite a bit?", "EDIT: I have reviewed the authors revisions and still recommend acceptance. \n\n\nSummary\n\nThis paper proposes assessing VAEs via two quantities: rate R (E[ KLD[q(z|x) || p(z)] ]) and distortion D (E[ log p(x|z) ]), which can be used to bound the mutual information (MI) I(x,z) from above and below respectively (i.e. H[x] - D <= I(x,z) <= R). This fact then implies the inequality H[x] <= R + D, where H[x] is the entropy of the true data distribution, and allows for the construction of a phase diagram (Figure 1) with R and D on the x and y axis respectively. Models can be plotted on the diagram to show the degree to which they favor reconstruction (D) or sampling diversity (R). 
The paper then reports several experiments, the first being a simulation to show that a VAE trained with vanilla ELBO cannot recover the true rate even in a 1D example. For the second experiment, 12 models are trained by varying the encoder/decoder strength (CNN vs autoregressive) and prior (fact. Gauss vs autoregressive vs VampPrior). Plots of the D vs R and ELBO vs R are shown for the models, revealing that the same ELBO value can be decomposed into drastically different R and D values. The point is further made through qualitative results in Figure 4. \n\n\nEvaluation\n\nPros: While no one facet of the paper is particularly novel (as similar observations and discussion have been made by [1-4]), the paper, as far as I’m aware, is the first to formally decompose the ELBO into the R vs D tradeoff, which is natural. As someone who works with VAEs, I didn’t find the conclusions surprising, but I imagine the paper would be valuable to someone learning about VAEs. Moreover, it’s nice to have a clear reference for the unutilized-latent-space-behavior mentioned in various other VAE papers. The most impressive aspect of the paper is the number of models trained for the empirical investigation. Placing such varied models (CNN vs autoregressive vs VampPrior etc) onto the same plot for comparison (Figure 3) is a valuable contribution. \n\nCons: As mentioned above, I didn’t find the paper conceptually novel, but this isn’t a significant detraction as its value (at least for VAE researchers) is primarily in the experiments (Figure 3). I do think the paper---as the ‘Discussion and Further Work’ section is only two sentences long---could be improved by providing a better summary of the findings and recommendations moving forward. Should generative modeling papers be reporting final R and D values in addition to marginal likelihood? How should an author demonstrate that their method isn’t doing auto-decoding? The conclusion claims that “[The rate-distortion tradeoff] provides new methods for training VAE-type models which can hopefully advance the state of the art in unsupervised representation learning.” Is this referring to the constrained optimization problem given in Equation #4? It seems to me that the optimal R-vs-D tradeoff is application dependent; is this not always true? \n\nMiscellaneous / minor comments: Figure 3 would be easier to read if the dots better reflected their corresponding tuple (although I realize representing all tuple combinations in terms of color, shape, etc. is hard). I had to keep referring to the legend, losing my place in the scatter plot. I found sections 1 and 2 rather verbose; I think some text could be cut to make room for more final discussion / recommendations. For example, I think the first two whole paragraphs could be cut or at least condensed and moved to the related works section, as they are just summarizing research history/trends. The paper’s purpose clearly starts at the 3rd paragraph (“We are interested in understanding…”). The references need to be cleaned up. There are several conference publications that are cited via ArXiv instead of the conference (IWAE should be ICLR, Bowman et al. should be CoNLL, Lossy VAE should be ICLR, Stick-Breaking VAE should be ICLR, ADAM should be ICLR, Inv Autoregressive flow should be NIPS, Normalizing Flows should be ICML, etc.), and two different versions of the VAE paper are cited (ArXiv and ICLR).
\n\n\nConclusions\n\nI found this paper to present valuable analysis of the ELBO objective and how it relates to representation learning in VAEs. I recommend the paper be accepted, although it could be substantially improved by including more discussion at the end. \n\n\n\n1. S. Zhao, J. Song, and S. Ermon. “InfoVAE: Information Maximizing Variational Autoencoders.” ArXiv 2017.\n\n2. X. Chen, D. Kingma, T. Salimans, Y. Duan, P. Dhariwal, J. Shulman, I. Sutskever, and P. Abbeel. “Variational Lossy Autoencoder.” ICLR 2017.\n\n3. I. Higgins, L. Matthey, A. Pal, C. Burgess, X. Glorot, M. Botvinick, S. Mohamed, and A. Lerchner. “Beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework.” ICLR 2017\n\n4. S. Bowman, L. Vilnis, O. Vinyas, A. Dai, R. Jozefowicz, and S. Bengio. “Generating Sentences from a Continuous Space.” CoNLL 2016.", "\n- I think that VAEs are rather forced to be interpreted from an information theoretic point of view for the sake of it, rather than for the sake of a clear and unequivocal contribution from the perspective of VAEs and latent-variable models themselves. How is that useful for a VAE? \n\n- \"The left vertical line corresponds to the zero rate setting. ...\": All these limits are again from an information theoretic point of view and no formulation nor demonstration is provided on how this can actually be as useful. As mentioned earlier in the paper, there are well-known problems with taking this information theory perspective, e.g. difficulties in estimating MI values, etc.\n\n- Breaking (some of) the long sentences and paragraphs in page 3 with an unequivocal mathematical formulation would smooth the flow a bit.\n\n- \"(2) an upper bound that measures the rate, or how costly it is to transmit information about the latent variable.\": I am not entirely sure about this one and why it is massively important to be compromised against the obviously big first term.\n\n- Toy Model experiment: I do not see any indication of how this is not just a lucky catch and that VAEs consistently suffer from a problem leading to such effect.\n\n- Section 5: \"can shed light on many different models and objectives that have been proposed in the literature... \": Again the contribution aspect is not so clear through the word \"shed light\".\n\n\nMinor\n- Although apparently VAEs represent the main and most influential latent-variable model example, I think switching too much between citing them as VAEs and then as latent-variable models in general was a bit confusing. I propose mentioning in the beginning (as happened) that VAEs are the seminal example of latent-variable models and then going on from this point onwards with VAEs without too much alternation between latent-variable models and VAEs.\n\n- page 8: \"as show\"", "Thanks for the compliments and the feedback on the proofs. We've left in the detailed derivations in the Appendix for completeness and clarity.", "Based on your feedback, we have taken more care to clarify that the core variational bounds we present at the beginning of Section 2 and in some of the appendices were originally derived in Barber and Agakov 2003, Agakov 2006, and Alemi et al. 2017. We have kept the proofs in the appendix for clarity.\n\nWe view the core contributions of our work as (1) providing a clear theoretical justification for the beta-VAE objective, and (2) demonstrating that the beta-VAE objective can be used to target a task-specific target rate independent of architecture. 
This alleviates a known problem with the ELBO objective: achieving different rates requires modifying the architecture and is difficult to control. We have attempted to make those points more clearly in the current version of the paper.\n\nUnderstanding the origin of the beta-VAE objective opens the door to other approaches for training VAEs. The updated conclusion presents some of these approaches, such as constrained optimization on the variational bounds of rate and distortion, and using more powerful mutual information predictors.\n\nFinally, analyzing the performance of VAE models in terms of the RD plane can make problems with models immediately clear (for example, poor distortion at high rates or poor use of latent variables with complex decoders). We also hope that we've clearly established some connections between information theory, latent variable models, rate-distortion theory, and compression, which could spawn new results. \n\n> Based on the R-D plots, the authors identify a potential problem of generative models, \n> namely that none of the trained models appear to get close to the “auto-encoding limit” \n> where the distortion is close zero. Wouldn’t this gap easily be closed by a model with identity\n> encoder, identity decoder, and PixelCNN++ for the marginal distribution? \n\nWe agree that models with more powerful PixelCNN++ type marginals could help to close the gap in ELBO for models at high rates. The recent VQ-VAE work shows this qualitatively, and future work should more extensively explore these models to identify their frontier in the RD plane. \n\n> The toy example in Figure 2 is interesting. What does it tell us about how to build our \n> generative models? Should we be using powerful decoders but a lower beta?\n\nThe toy example shows that the most powerful models (in this case, models that are able to perfectly represent all relevant distributions) will perform very poorly when trained using ELBO, but constraining the optimization to some rate, either using something like a beta-VAE with beta < 1, or explicitly targeting a known rate, can result in using the model capacity optimally. As seen on MNIST and Omniglot, it seems that the best models we currently have use powerful decoders and powerful marginals, but it is necessary to use beta < 1 to avoid rate collapse.\n\n> “we are able to learn many models that can achieve similar generative performance but \n> make vastly different trade-offs in terms of the usage of the latent variable”. \n\nWe meant to speak more directly to the performance over intermediate rate values, from 0 to around 10 nats on MNIST. In terms of ELBO these models are all the same, but as demonstrated in Figure 4, over these rates we move from an unconditional generative model to an effective compression model for MNIST that seems to preserve the salient features of the input. As we describe elsewhere in the paper, the high-rate models we explored cannot attain equivalently good marginal log likelihood.", "We thank all the reviewers for their valuable feedback. We have made a number of improvements and clarifications that we believe amount to a substantially improved version of the paper. Those changes are summarized and discussed below.\n\nThe main concern from all three reviewers was that of novelty and the “forced” perspective of the ELBO in terms of information theory. To address these issues, we have extended the discussion and introduction sections to highlight the utility of this perspective. 
In summary, we view the primary contributions of our paper to be the following, and we think the new version improves on the presentation of most of these points. We have:\n * Clarified our core point, which is that the best rate is task-dependent, and if you only optimize the ELBO the architecture determines the rate.\n * Motivated the beta VAE objective as a natural objective for exploring the entire frontier of rate-distortion tradeoffs for a model.\n * Demonstrated the use of rate-distortion diagrams to visualize tradeoffs in latent variable models.\n * Motivated reporting of rate and distortion in future papers on generative models since different applications require different tradeoffs which cannot be extracted from the ELBO alone.\n* Provided a simpler explanation of and solution to the problem of powerful decoders ignoring the latent variables than the explanations and solutions previously proposed in the literature (e.g., Chen et al., Variational Lossy Autoencoder, 2017).\n * Comprehensively explored the relative performance and tradeoffs of a wide range of current modelling choices on MNIST and Omniglot\n\nAdditionally, we:\n * Fixed the issues noted, including a substantial cleanup of the References.\n * Added many more Omniglot experimental results in Appendix A.\n\nAs highlighted by R1, our paper is the first to extensively evaluate the tradeoffs made by different architecture choices by varying the complexity of the posterior, prior, and decoder. Many recent papers have been published on subsets of these architecture choices, for example more complex priors (VampPrior), latent variable models with autoregressive decoders (PixelGAN, PixelVAE), or more complex posteriors (IAF, normalizing flows). Our experiments highlight that different models dominate in different regimes, in particular autoregressive decoders are critical at low rates while VampPrior works well at intermediate and high rates.\n\nFinally, as suggested by R2, we have added a call for researchers to report rate and distortion separately in future publications. In particular, this highlights the tradeoffs being made by new approaches and points to models that are learning useful representations (with non-zero rate). For practitioners interested in learning representations, our paper highlights the need to look beyond the ELBO as there exists a continuum of models with the same ELBO but dramatically different rates.\n\nThese changes represent a substantial improvement in the paper, and we hope the reviewers will take these into consideration.", "Thanks to your feedback we have substantially improved our References and Discussion section.\n\n > It seems to me that the optimal R-vs-D tradeoff is application dependant; is this not always true? \n\nWe agree, and we have clarified that in the current revision. The optimal R-D tradeoff is application specific. Originally we set out on this work and the information theoretic treatment precisely to try to investigate how current VAEs arrived at their particular rates. When we did our analysis and followed the natural steps to turn it into an easy-to-optimize objective, we discovered that the result was the beta-VAE objective. We feel that giving a principled motivation for why the beta-VAE objective itself is useful and what it can accomplish is novel. 
By better understanding the origin of the beta-VAE objective, we hopefully open the door to more and better work on theoretically-motivated objective functions in the future.\n", "> I think that VAEs are rather forced to be interpreted from an information theoretic point \n> of view for the sake of it, rather than for the sake of a clear and unequivocal contribution \n> from the perspective of VAEs and latent-variable models themselves. How is that useful for a\n> VAE?\n\nOur analysis explains why VAEs with strong decoders can learn to ignore the learned latent variables. The analysis directly leads to a derivation of the beta-VAE objective, which can force any particular VAE architectural choice to make appropriate use of the latent variables. The value to practitioners using VAEs is to encourage them to use the beta-VAE objective to overcome this shortcoming of standard ELBO optimization.\n\nWe believe the information theoretic perspective gives a natural motivation for the beta-VAE objective, not just as a simple modification of the objective with observed effect, but demonstrates that it allows you to explore the entire rate-distortion frontier for a particular model family. This is useful and necessary since the relative power of the encoder / decoder and marginal are hard to tune. As observed in the current literature, powerful autoregressive decoders tend to collapse to vanishing rate at beta=1. \n\n> \"The left vertical line corresponds to the zero rate setting. ...\": All these limits are again from \n> an information theoretic point of view and no formulation nor demonstration is provided on\n> how this can actually be as useful. All these limits are again from an information theoretic \n> point of view and no formulation nor demonstration is provided on how this can actually be \n> as useful.\n\nand\n\n> \"(2) an upper bound that measures the rate, or how costly it is to transmit information about \n> the latent variable.\": I am not entirely sure about this one and why it is massively important\n> to be compromised against the obviously big first term.\n\nWe hope that the modifications to the paper make these points more clear. We think that our experiments convincingly demonstrate that low rates and high rates give qualitatively different model behavior, as seen in Figures 4 and 6, and it is exactly the tradeoff between rate and distortion that produces that different behavior, since the models are otherwise held constant. Very low rate models fail at reconstruction, as the information theory predicts. Low rate models manage to capture the semantics of the digits, but ignore the style in reconstruction, and higher rate models provide more precise reconstructions, but fail to provide variation during generation.\n\n> Toy Model experiment: I do not see any indication of how this is not just a lucky catch and \n> that VAEs consistently suffer from a problem leading to such effect.\n\nWe have added some text indicating that both the VAE and target rate results were stable across all of the random initializations we performed (many dozens). The core point of the toy model was to illustrate that the normal ELBO (beta=1 VAE) objective does not target any rate in particular. The rate it ends up at is a complicated function of the relative powers of the component models. 
For a given problem, we as practitioners usually have some inductive bias for how much information we believe is relevant and should be retained; by targeting the rate we knew to be relevant in the toy model, we were able to nearly perfectly invert the true generative model. We believe our experiments on both MNIST and Omniglot further demonstrate that for most model architectures, the rates achieved when optimizing ELBO with powerful models are often low.\n\n> - Section 5: \"can shed light on many different models and objectives that have been \n> proposed in the literature... \": Again the contribution aspect is not so clear through the word \n> \"shed light\".\n\nOur discussion section was lacking. We've expanded it, thank you.\n", "Really enjoyed your paper; it gave very useful insights. \n\nOne small thing: the proofs for the optimal encoder/decoder are much more complicated than they need to be. The only inequality is from the KL divergence \"positive semidefinite quality\", so the bound is tight exactly when the KL divergence is zero, i.e. when the probabilities (a.s.) match, and that is all you need." ]
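As a purely numerical companion to the rate and distortion definitions quoted in the reviews above, the toy computation below evaluates both terms for one made-up datapoint under a diagonal-Gaussian encoder and a Bernoulli decoder. Every dimension, number, and the stand-in decoder are hypothetical; this is not the authors' experimental setup.

```python
# Toy numerics only: evaluate a "rate" and a "distortion" for one fake datapoint.
# Dimensions, encoder outputs, and the stand-in decoder are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)

x = (rng.random(784) < 0.5).astype(float)              # fake binarized "image"
mu = rng.normal(size=20)                                # encoder mean of q(z|x)
log_var = 0.1 * rng.normal(size=20)                     # encoder log-variance of q(z|x)

# Rate: KL[ q(z|x) || N(0, I) ] in closed form for a diagonal Gaussian posterior.
rate = 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

# Distortion: negative Bernoulli log-likelihood -log p(x|z) under stand-in decoder
# logits (a real decoder would compute these logits from a sampled z; here they are fixed).
logits = (2.0 * x - 1.0) + 0.01 * rng.normal(size=784)
p = 1.0 / (1.0 + np.exp(-logits))
distortion = -np.sum(x * np.log(p) + (1.0 - x) * np.log(1.0 - p))

# The negative ELBO for this sample is distortion + rate; a beta-VAE reweights the rate.
beta = 0.5
print(f"rate={rate:.2f} nats, distortion={distortion:.2f} nats, "
      f"-ELBO={rate + distortion:.2f}, beta-objective={distortion + beta * rate:.2f}")
```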
[ 5, 7, 5, -1, -1, -1, -1, -1, -1 ]
[ 4, 5, 5, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_H1rRWl-Cb", "iclr_2018_H1rRWl-Cb", "iclr_2018_H1rRWl-Cb", "S10iNmsJM", "SkPsD5dxz", "iclr_2018_H1rRWl-Cb", "HJoM4eclG", "H1Je5milG", "iclr_2018_H1rRWl-Cb" ]
iclr_2018_rkMt1bWAZ
Bias-Variance Decomposition for Boltzmann Machines
We achieve a bias-variance decomposition for Boltzmann machines using an information geometric formulation. Our decomposition leads to the interesting phenomenon that the variance does not necessarily increase when more parameters are included in Boltzmann machines, while the bias always decreases. Our result gives theoretical evidence of the generalization ability of deep learning architectures because it provides the possibility of increasing the representation power while avoiding variance inflation.
rejected-papers
This paper presents a bias/variance decomposition for Boltzmann machines using the generalized Pythagorean Theorem from information geometry. The main conclusion is that counterintuitively, the variance may decrease as the model is made larger. There are probably some interesting ideas here, but there isn't a clear take-away message, and it's not clear how far this goes beyond previous work on estimation of exponential families (which is a well-studied topic). Some of the reviewers caught mathematical errors in the original draft; the revised version fixed these, but did so partly by removing a substantial part of the paper about hidden variables. The analysis, then, is limited to fully observed Boltzmann machines, which have less practical interest to the field of deep learning.
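The decomposition referred to in this record rests on a standard information-geometric identity. The sketch below states that identity in generic exponential-family notation as background; the paper's actual theorem (which, per the reviews, works with squared KL divergences and a Cramér-Rao-type lower bound) is not reproduced here.

```latex
% Background sketch: P* is the true distribution, S the model manifold (an exponential
% family for a fully visible Boltzmann machine), P_B the projection of P* onto S
% (the population MLE), and \hat P_B the MLE fitted from a finite sample.
\[
D_{\mathrm{KL}}\big(P^{*}\,\|\,\hat P_{B}\big)
= \underbrace{D_{\mathrm{KL}}\big(P^{*}\,\|\,P_{B}\big)}_{\text{bias (model misspecification)}}
\;+\;
\underbrace{D_{\mathrm{KL}}\big(P_{B}\,\|\,\hat P_{B}\big)}_{\text{variance (sampling fluctuation)}}
\]
```

The identity holds because \hat P_B lies in S; enlarging the model can only shrink the first term, while the paper's main theorem concerns a lower bound on (the expectation of) the second.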
train
[ "HknVXoHez", "rkv5HU5gG", "Sy_JLe5lz", "rJFkOigXM", "rkp5vjgQM", "ryDvPolQG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "Summary of the paper:\nThe paper derives a lower bound on the expected squared KL-divergence between a true distribution and the sample based maximum likelihood estimate (MLE) of that distribution modelled by an Boltzmann machine (BM) based on methods from information geometry. This KL-divergence is first split into the squared KL-divergence between the true distribution and MLE of that distribution, and the expected squared KL-divergence between the MLE of the true distribution and the sample based MLE (in a similar spirit to splitting the excess error into approximation and estimation error in statistical learning theory). The letter is than lower bounded (leading to a lower bound on the overall KL-divergence) by a term which does not necessarily increase if the number of model parameters is increased. \n\n\nPros:\n- Using insights from information geometry opens up a very interesting and (to my knowledge) new approach for analysing the generalisation ability of ML models.\n- I am not an expert on information geometry and I did not find the time to follow all the steps of the proof in detail, but the analysis seems to be correct.\n\nCons:\n- The fact that the lower bound does not necessary increase with a growing number of parameters does not guarantee that the same holds true for the KL-divergence (in this sense an upper bound would be more informative). Therefore, it is not clear how much of insights the theoretical analysis gives for practitioners (it could be nice to analyse the tightness of the bound for toy models).\n- Another drawback reading the practical impact is, that the theorem bounds the expected squared KL-divergence between a true distribution and the sample based MLE, while training minimises the divergence between the empirical distribution and the model distribution ( i.e. the sample based MLE in the optimal case), and the theorem does not show the dependency on the letter. \n\nI found some parts difficulty to understand and clarity could be improved e.g. by\n- explaining why minimising KL(\\hat P, P_B) is equivalent to minimising the KL-divergence between the empirical distribution and the Gibbs distribution \\Phi.\n- explaining in which sense the formula on page 4 is equivalent to “the learning equation of Boltzmann machines”.\n- explaining what is the MLE of the true distribution (I assume the closest distribution in the set of distributions that can be modelled by the BM).\n\nMinor comments:\n- page 1: and DBMs….(Hinton et al., 2006) : The paper describes deep belief networks (DBNs) not DBMs \n- \\theta is used to describe the function in eq. (2) as well as the BM parameters in Section 2.2 \n- page 5: “nodes H is” -> “nodes H are” \n\n\n\nREVISION:\nThanks to the reviewers for replying to my comments and making the changes. I think they improved the paper. On the other hand the other reviewers raised valid questions, that led to my decision to not change the overall rating of the paper.", "Summary: The goal of this paper is to analyze the effectiveness and generalizability of deep learning. This authors present a theoretical analysis of bias-variance decomposition for hierarchical graphical models, specifically Boltzmann Machines (BM). The analysis follows a geometric formulation of hierarchical probability distributions. The authors describe a general log-linear model and other variations of it such as the standard BM, arbitrary-order BM and Restricted BM to motivate their approach. 
\n\nThe authors first define the bias-variance decomposition of KL divergence using Pythagorean theorem followed by applying Cramer-Rao bound and show that the variance decreases when adding more parameters in the model. \n\nPositives:\n-The paper is clearly written and the analysis is helpful to show the effect of adding more parameters on the variance and bias in a general architecture (the Boltzmann Machines)\n-The authors did a good job covering general probabilistic models and progression of models starting with the log-linear model.\n-The authors provided an example to illustrate the theory, by showing that the variance decreases with the increase of model parameters.\n\nQuestions:\n-How does this analysis apply to other deep learning architectures such as Convolutional Neural Networks?\n-How does this analysis apply to other frameworks such as variational auto-encoders and generative adversarial networks?", "This paper uses an information geometric view on hierarchical models to discuss a bias - variance decomposition in Boltzmann machines, presenting interesting conclusions, whereby some more care appears to be needed for making these claims. \n\nThe paper arrives at the main conclusion that it is possible to reduce both the bias and the variance in a hierarchical model. The discussion is not specific to deep learning nor to Boltzmann machines, but actually addresses hierarchical exponential family models. The methods pertaining hierarchical models are interesting and presented in a clear way. My concern are the following points: \n\nThe main theorem presents only a lower bound, meaning that it provides no guarantee that the variance can indeed be reduced. \n\nThe paper seems to ignore that a model with hidden variables may be singular, in which case the Fisher metric is not positive definite and the Cramer Rao bound has no meaning. This interferes with the claims and derivations made in the paper in the case of models with hidden variables. The problem seems to lie in the fact that the presented derivations assume that an optimal distribution in the data manifold is given (see Theorem 1 and proof), effectively making this a discussion about a fully observed hierarchical model. In particular, it is not further specified how to obtain θˆB(s) in page 6 before (13). \n\nAlso, in page 5 the paper states that ``it is known that the EM-algorithm can obtain the global optimum of Equation (12) (Amari, 2016, Section 8.1.3)''. However, what is shown in that reference is only that: (Theorem 8.2., Amari, 2016) ``The KL-divergence decreases monotonically by repeating the E-step and the M-step. Hence, the algorithm converges to an equilibrium.'' A model with hidden variables can have several global and local optimisers (see, e.g. https://arxiv.org/abs/1709.05276). The critical points of the EM algorithm can have a non trivial structure, as has been observed in the case of non negative rank matrix varieties (see, e.g., https://arxiv.org/pdf/1312.5634.pdf). \n\nOTHER\n\nIn page 3, ``S_\\beta is e-flat and S_\\alpha ... '', should this not be the other way around? (See also page 5 last paragraph of Section 2.) Please also indicate the precise location in the provided reference. \n\nAll pages up to page 5 are introduction. Section 2.3. as presented is very vague and does not add much to the discussion. \n\nIn page 7, please explain E ψ(θˆ )^2 −ψ(θ∗ )^2=0 \n", "Thank you for your valuable comments. We have revised our manuscript according to reviewers’ comments and corrected mistakes. 
In particular, the bound provided in Theorem 1 has been revised and the example provided Theorem 1 has been updated. Moreover, we have newly added empirical evaluation of our theoretical result. In the following, we answer each question.\n\n> REVIEWER 1: it is not clear how much of insights the theoretical analysis gives for practitioners (it could be nice to analyse the tightness of the bound for toy models).\n\n> ANSWER: We have additionally conducted empirical evaluation of the tightness of our theoretical lower bound in the revised version. Please check Section 4 and Figure 2. We confirm that our lower bound is quite tight in practice.\n\n> REVIEWER 1: the theorem bounds the expected squared KL-divergence between a true distribution and the sample based MLE, while training minimises the divergence between the empirical distribution and the model distribution, and the theorem does not show the dependency on the letter. \n\n> ANSWER: The KL-divergence between the empirical distribution and the model distribution in each training monotonically decreases if we include more parameters (see Equation (15)). But overfitting surely occurs if we include too many parameters and this is our motivation of performing bias-variance decomposition to analyze the generalizability of BMs. We have added this discussion in the first paragraph in P.5.\n\n> REVIEWER 1: explaining why minimising KL(\\hat P, P_B) is equivalent to minimising the KL-divergence between the empirical distribution and the Gibbs distribution \\Phi.\n\n> ANSWER: This is because \\hat P is the empirical distribution and P_B coincides with the Gibbs distribution \\Phi.\n\n> REVIEWER 1: explaining in which sense the formula on page 4 is equivalent to “the learning equation of Boltzmann machines”.\n\n> ANSWER: This is because \\hat{\\eta}(x) and \\eta_B(x) coincide with the expectation for the outcome x with respect to the empirical distribution obtained from data and the model distribution represented by the Boltzmann Machine B, respectively. We have revised the text to clarify this point.\n\n> REVIEWER 1: explaining what is the MLE of the true distribution (I assume the closest distribution in the set of distributions that can be modelled by the BM).\n\n> ANSWER: You are right. The MLE of the true distribution is the closest distribution in the set of distributions that can be modelled by the BM in terms of the KL divergence. We have revised the text to clarify this point.\n\n> REVIEWER 1: page 1: and DBMs….(Hinton et al., 2006) : The paper describes deep belief networks (DBNs) not DBMs \n\n> ANSWER: We have removed this citation and replaced with [Goodfellow et al. (2016, Chapter 20)].\n\n> REVIEWER 1: \\theta is used to describe the function in eq. (2) as well as the BM parameters in Section 2.2 \n\n> ANSWER: We have changed the symbol in Eq.(2) for consistency.\n\n> REVIEWER 1: page 5: “nodes H is” -> “nodes H are” \n\n> ANSWER: We have corrected this.\n", "Thank you for your valuable comments. We have revised our manuscript according to reviewers’ comments and corrected mistakes. In particular, the bound provided in Theorem 1 has been revised and the example provided after Theorem 1 has been updated. Moreover, we have newly added empirical evaluation of our theoretical result. In the following, we answer each question.\n\n> REVIEWER 2: The main theorem presents only a lower bound, meaning that it provides no guarantee that the variance can indeed be reduced. 
\n\n> ANSWER: We have additionally conducted empirical evaluation of our theoretical lower bound in the revised version. Please check Section 4 and Figure 2. We confirm that our lower bound is quite tight in practice when the sample size N becomes large and the variance reduction actually happens.\n\n> REVIEWER 2: The paper seems to ignore that a model with hidden variables may be singular.\n\n> ANSWER: Thank you very much for pointing this out. You are right and our theoretical results cannot be directly applied to models with hidden variables. Thus we have removed models with hidden variables from our paper and newly added discussion about this issue in the last paragraph in Section 2.3. Please note that our main theoretical contribution is still fully valid.\n\n> REVIEWER 2: A model with hidden variables can have several global and local optimisers. In particular, it is not further specified how to obtain θˆB(s) in page 6 before (13).\n\n> ANSWER: Thank you very much for pointing this out. You are right and we have revised our text (now in Appendix as we have removed the section of models with hidden variables).\n\n> REVIEWER 2: In page 3, ``S_\\beta is e-flat and S_\\alpha ... '', should this not be the other way around? (See also page 5 last paragraph of Section 2.) Please also indicate the precise location in the provided reference. \n\n> ANSWER: You are right. This should be the other way around. We have corrected this in the revised version and also clarified the location of the reference (Appendix, after Eq.(17)).\n\n> REVIEWER 2: All pages up to page 5 are introduction. Section 2.3. as presented is very vague and does not add much to the discussion. \n\n> ANSWER: Thank you for pointing this out. We have revised and extended Section 2.3. Although Section 2.1 is preliminary, the other parts of Section 2 are not introduction but necessary discussion to formulate the family of Boltzmann machines as the log-linear model.\n\n> REVIEWER 2: In page 7, please explain E ψ(θˆ )^2 −ψ(θ∗ )^2=0 \n\n> ANSWER: Thank you for pointing this out. This was wrong and now corrected. This is indeed irreducible error as the Fisher information vanishes. Please check the revised Theorem 1 and its proof.\n", "Thank you for your valuable comments. We have revised our manuscript according to reviewers’ comments and corrected mistakes. In particular, the bound provided in Theorem 1 has been revised and the example provided after Theorem 1 has been updated. Moreover, we have newly added empirical evaluation of our theoretical result.\nAnalyzing the relationship between our model and such neural network models suggested in you comments, in particular probabilistic models of variational auto-encoders and generative adversarial networks, is not in the scope of this paper but our exciting future topic." ]
[ 5, 7, 5, -1, -1, -1 ]
[ 2, 5, 5, -1, -1, -1 ]
[ "iclr_2018_rkMt1bWAZ", "iclr_2018_rkMt1bWAZ", "iclr_2018_rkMt1bWAZ", "HknVXoHez", "Sy_JLe5lz", "rkv5HU5gG" ]
iclr_2018_SJOl4DlCZ
Classifier-to-Generator Attack: Estimation of Training Data Distribution from Classifier
Suppose a deep classification model is trained with samples that need to be kept private for privacy or confidentiality reasons. In this setting, can an adversary obtain the private samples if the classification model is given to the adversary? We call this reverse engineering against the classification model the Classifier-to-Generator (C2G) Attack. This situation arises when the classification model is embedded into mobile devices for offline prediction (e.g., object recognition for autonomous driving cars and face recognition for mobile phone authentication). For the C2G attack, we introduce a novel GAN, PreImageGAN. In PreImageGAN, the generator is designed to estimate the sample distribution conditioned on the preimage of the classification model f, P(X|f(X)=y), where X is the random variable on the sample space and y is the probability vector representing the target label arbitrarily specified by the adversary. In experiments, we demonstrate that PreImageGAN works successfully with hand-written character recognition and face recognition. In character recognition, we show that, given a recognition model of hand-written digits, PreImageGAN allows the adversary to extract alphabet letter images without knowing that the model is built for alphabet letter images. In face recognition, we show that, when an adversary obtains a face recognition model for a set of individuals, PreImageGAN allows the adversary to extract face images of specific individuals contained in the set, even when the adversary has no knowledge of the faces of the individuals.
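To make the attack setting described in this abstract concrete, here is a heavily hedged sketch of a conditional-GAN-style training loop in the spirit the reviews below describe (one review likens PreImageGAN to a transfer-learning variant of ACGAN). All architectures, sizes, loss terms, and hyperparameters are hypothetical stand-ins; the paper's actual objective and networks are not reproduced here.

```python
# Illustrative sketch only, not the authors' implementation. A frozen classifier f labels
# auxiliary samples; a generator G(z, y) is trained adversarially so that its outputs look
# like auxiliary data while f assigns them the requested label vector y.
import torch
import torch.nn as nn
import torch.nn.functional as F

Z_DIM, Y_DIM, X_DIM = 32, 10, 784                      # made-up sizes

f = nn.Sequential(nn.Linear(X_DIM, Y_DIM), nn.Softmax(dim=1))   # stand-in for the attacked classifier
for p in f.parameters():
    p.requires_grad_(False)                            # the adversary only queries f, never trains it

G = nn.Sequential(nn.Linear(Z_DIM + Y_DIM, 256), nn.ReLU(), nn.Linear(256, X_DIM), nn.Sigmoid())
D = nn.Sequential(nn.Linear(X_DIM + Y_DIM, 256), nn.ReLU(), nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

x_aux = torch.rand(64, X_DIM)                          # placeholder auxiliary batch

for step in range(200):
    y = f(x_aux)                                       # auxiliary labels come from f itself
    z = torch.randn(x_aux.size(0), Z_DIM)
    x_fake = G(torch.cat([z, y], dim=1))

    # Discriminator step: real (x_aux, y) pairs vs. generated (x_fake, y) pairs.
    d_real = D(torch.cat([x_aux, y], dim=1))
    d_fake = D(torch.cat([x_fake.detach(), y], dim=1))
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Generator step: fool D, and (one plausible extra term) keep f(G(z, y)) close to y.
    d_fake = D(torch.cat([x_fake, y], dim=1))
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + F.mse_loss(f(x_fake), y)
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
```

Under this reading, at attack time the adversary would replace y with a chosen (one-hot or soft) label vector for the target class and sample from G to approximate P(X|f(X)=y).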
rejected-papers
This paper addresses the very important problem of ensuring that sensitive training data remain private. It proposes an attack whereby the attacker can reconstruct information about the training data given only the trained classifier and an auxiliary dataset. If done well, such an attack would be a useful contribution that helps make discussion of differential privacy more complete. But as the reviewers pointed out, it's not clear from the paper whether the attack has succeeded. It works only when the auxiliary data is very similar to the training data, and it's not clear if it leaks information about the training set itself, or is just summarizing the auxiliary data. This work doesn't seem quite ready for publication, but could be a strong paper if it's convincingly demonstrated that information about the training set has been leaked.
train
[ "r1-fz-MJG", "r1IcQO8lz", "BkQD60b-f", "HyaMD51Qf", "rJHgvqJ7z", "SJz5U517G", "HkmZ89kXz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "This paper considers a new problem : given a classifier f trained from D_tr and a set of auxillary samples from D_aux, find D_tr conditioned on label t*. Its solution is based on a new GAN: preImageGAN. Three settings of the similarity between auxillary distribution and training distribution is considered: exact same, partly same, mutually exclusive. Experiments show promising results in generating examples from the original training distribution, even in the \"mutually exclusive\" setting.\n\nQuality: \n1. It is unclear to me if the generated distribution in the experiments is similar to the original distribution D_tr given y = t^*, either from inception accuracy or from pictorial illustration. Since we have hold out the training data, perhaps we can measure the distance between the generated distribution and D_tr given y = t^* directly.\n\n2. It would be great if we can provide experiments quantifying the utility of the auxillary examples. For example, when they are completely noise, can we still get sensible generation of images? \n\n3. How does the experimental result of this approach compare with model attack? For example, we can imagine generating labels by e_t^* + epsilon, where epsilon is random noise. If we invert these random labels, do we get a distribution of examples from class t^*?\n\nClarity:\n1. I think the key here is to first generate auxillary labels (as in Figure 2), then solve optimization problem (3) - this causes my confusion at first sight. (My first impression is that all labels, training or auxillary, are one-hot encoding - but this makes no sense since the dimension of f and y_aux does not match.)\n\nOriginality: I am not familiar with relevant literature - and I think the GAN formulation here is original.\n\nSignificance: I see this as a nice step towards inferring training data from trained classifiers. \n\n", "This paper proposed to learn a generative GAN model that generates the training data from the labels, given that only the black-box mapping $f$ from data to label is available, as well as an aux dataset that might and might not overlap with the training set. This approach can be regarded as a transfer learning version of ACGAN that generates data conditioned on its label.\n\nOverall I feel it unclear to judge whether this paper has made substantial contributions. The performance critically relies on the structure of aux dataset and how the supervised model $f$ interacts with it. It would be great if the author could show how the aux dataset is partitioned according to the function $f, and what is the representative sample from aux dataset that maximizes a given class label. In Fig. 4, the face of Leonardo DiCaprio was reconstructed successfully, but is that because in the aux dataset there are other identities who look very similar to him and is classified as Leonardo, or it is because GAN has the magic to stitch characteristics of different face identities together? Given the current version of the paper, it is not clear at all. From the results on EMNIST when the aux set and the training set are disjoint, the proposed model simply picks the most similar shapes as GAN generation, and is not that interesting. In summary, a lot of ablation experiments are needed for readers to understand the proposed method better.\n\nThe writing is ok but a bit redundant. For example, Eqn. 1 (and Eqn. 2) which shows the overall distribution of the training samples (and aux samples) as a linear combinations of the samples at each class, are not involved in the method. 
Do we really need Eqn. 1 and 2?", "The paper proposes the use of a GAN to learn the distribution of image classes from an existing classifier, that is a nice and straightforward idea. From the point of view of forensic analysis of a classifier, it supposes a more principled strategy than a brute force attack based on the classification of a database and some conditional density estimation of some intermediate image features. Unfortunately, the experiments are inconclusive. \n\nQuality: The key question of the proposed scheme is the role of the auxiliary dataset. In the EMNIST experiment, the results for the “exact same” and “partly same” situations are good, but it seems that for the “mutually exclusive” situation the generated samples look like letters, not numbers, and raises the question on the interpolation ability of the generator. In the FaceScrub experiment is even more difficult to interpret the results, basically because we do not even know the full list of person identities. It seems that generated images contain only parts of the auxiliary images related to the most discriminative features of the given classifier. Does this imply that the GAN models a biased probability distribution of the image class? What is the result when the auxiliary dataset comes from a different kind of images? Due to the difficulty of evaluating GAN results, more experiments are needed to determine the quality and significance of this work.\n\nClarity: The paper is well structured and written, but Sections 1-4 could be significantly shorter to leave more space to additional and more conclusive experiments. Some typos on Appendix A should be corrected.\n\nOriginality: the paper is based on a very smart and interesting idea and a straightforward use of GANs. \n\nSignificance: If additional simulations confirm the author’s claims, this work can represent a significant contribution to the forensic analysis of discriminative classifiers.\n", ">In Fig. 4, the face of Leonardo DiCaprio was reconstructed successfully, but is that because in the aux dataset there are other identities who look very similar to him and is classified as Leonardo, or it is because GAN has the magic to stitch characteristics of different face identities together?\n\nTo show that image generation by PreImageGAN is NOT a naive cherry-picking of image pieces in the auxiliary images, we conducted the following two experiments.\nFirst, to show that the auxiliary images do not contain face images that look very similar to the target person (say, Keanu Reeves), we evaluated the probability that each image in the auxiliary dataset is recognized as the target in Figure 6. In Figure 6, only a few images are classified as Keanu Reeves with prob>0.8, while most of the images generated by PreImageGAN are recognized as Keanu Reeves with prob>0.95. This indicates that PreImageGAN can generate images recognized as the target only from auxiliary images that are not recognized as the target.\nSecond, to show that PreImageGAN used the \"magic\" to stitch characteristics of different face identities, we generated images of interpolation between two people in Figure 8. 
As shown in the figures, the faces are smoothly changed from one person to another.\nThis indicates that PreImageGAN is NOT a naive cherry-picking of image pieces in the auxiliary images.\n\n\n>From the results on EMNIST when the aux set and the training set are disjoint, the proposed model simply picks the most similar shapes as GAN generation, and is not that interesting.\n\nWe agree that the C2G attack targeting numeric characters in the mutually exclusive (disjoint) setting (Figure 4) does not successfully work for some characters.\nWe investigated the reason experimentally in detail and found that the PreImageGAN cannot correctly generate images of the target if the classifier recognizes non-target images as a target. For example, in Figure 4, images targeting \"7\" look \"T\". This is because the given classifier recognizes images of \"T\" as \"7\" falsely (Table 8). As long as the given classifier recognizes non-target images as target images falsely, the C2G attack cannot correctly reconstruct target images.\nIn contrast, if the classifier recognizes target images as the target, and at the same time, the classifier recognizes non-target images as non-target (Table 6), the C2G attack can generate images of the target successfully even in the mutually exclusive setting (Figure 3).\nIn the revised version, we added these points in Section 5.2 and Appendix B.", "> 1. It is unclear to me if the generated distribution in the experiments is similar to the original distribution D_tr given y = t^\\*, either from inception accuracy or from pictorial illustration.\n\nWe agree that it is preferable to demonstrate the performance of the proposed method quantitatively. Unfortunately, in our problem setting, the true generative distribution is unknown, and it is impossible to measure the utility of the resulting generative model. Instead, we employed the inception accuracy as employed in ACGAN.\n\n>Since we have hold out the training data, perhaps we can measure the distance between the generated distribution and D_tr given y = t^\\* directly.\n\nI guess the method you suggested is to measure the divergence between the model obtained by the PreImageGAN and the GAN model learned from the training dataset. Even when we have a holdout dataset, it would be difficult to measure the difference between two GANs because GANs often do not give densities or likelihoods. This topic itself is an attractive future direction but is out of the scope of this study.\n\n\n>2. It would be great if we can provide experiments quantifying the utility of the auxiliary examples. For example, when they are completely noise, can we still get sensible generation of images?\n\nTo see how much the quality of the auxiliary dataset affects to the generated images, we tried the C2G attack with using meaningless auxiliary images, that is, uniform noise images.\nAs clearly shown in the results (Figure 7), meaningless auxiliary images cannot give meaningful results.\nFrom these results, we could experimentally confirm that the auxiliary dataset affects the quality of the resulting images significantly.\n\n\n>3. How does the experimental result of this approach compare with model attack? For example, we can imagine generating labels by e_t^\\* + epsilon, where epsilon is random noise. 
If we invert these random labels, do we get a distribution of examples from class t^\\*?\n\nWe guess the reviewer mentioned the model inversion attack [A,B].\nIf the model is shallow as already tried in [A] and [B], model inversion will give the distribution of the target images as suggested by the reviewer.\nHowever, unfortunately, model inversion does not work with deep architecture as already tested by [C].\nThis is a major motivation that we designed the C2G attack using PreImageGAN.\n\n\n[A] Matt Fredrikson, Somesh Jha, and Thomas Ristenpart. Model inversion attacks that exploit confidence information and basic countermeasures. In Proceedings of the 22Nd ACM SIGSAC Conference on Computer and Communications Security, CCS ’15, pp. 1322–1333, New York, NY, USA, 2015. ACM. ISBN 978-1-4503-3832-5. doi: 10.1145/2810103.2813677. URL http://doi.acm.org/10.1145/2810103.2813677.\n[B] Matthew Fredrikson, Eric Lantz, Somesh Jha, Simon Lin, David Page, and Thomas Ristenpart. Privacy in pharmacogenetics: An end-to-end case study of personalized warfarin dosing. In 23rd USENIX Security Symposium (USENIX Security 14), pp. 17–32, San Diego, CA, 2014. USENIX Association. ISBN 978-1-931971-15-7.\n[C] Briland Hitaj, Giuseppe Ateniese, and Fernando Pérez-Cruz. Deep models under the GAN: information leakage from collaborative deep learning. CoRR, abs/1702.07464, 2017. URL http://arxiv.org/abs/1702.07464.", ">In the EMNIST experiment, the results for the “exact same” and “partly same” situations are good, but it seems that for the “mutually exclusive” situation the generated samples look like letters, not numbers, and raises the question on the interpolation ability of the generator.\n\nWe agree that the C2G attack targeting numeric characters in the mutually exclusive setting (Figure 4) does not successfully work for some characters.\nWe investigated the reason experimentally in detail and found that the PreImageGAN cannot correctly generate images of the target if the classifier recognizes non-target images as a target. For example, in Figure 4, images targeting \"7\" look \"T\". This is because the given classifier recognizes images of \"T\" as \"7\" falsely (Table 8). As long as the given classifier recognizes non-target images as target images falsely, the C2G attack cannot correctly reconstruct target images.\nIn contrast, if the classifier recognizes target images as the target, and at the same time, the classifier recognizes non-target images as non-target (Table 6), the C2G attack can generate images of the target successfully even in the mutually exclusive setting (Figure 3).\nIn the revised version, we added these points in Section 5.2 and Appendix B.\n\n>In the FaceScrub experiment is even more difficult to interpret the results, basically because we do not even know the full list of person identities.\n\nWe add the list of person identities used for the training dataset and auxiliary dataset in Appendix E.\n\n>It seems that generated images contain only parts of the auxiliary images related to the most discriminative features of the given classifier.\n\nWe do not argue that PreImageGAN makes use of \"the auxiliary images related to the most discriminative features of the given classifier\" because this is the only clue that the adversary can exploit. 
To show that image generation by PreImageGAN is NOT a naive cherry-picking of image pieces in the auxiliary images, we conducted the following two experiments.\nFirst, to show that the auxiliary images do not contain face images that exactly look like the target person (say, Keanu Reeves), we evaluated the probability that each image in the auxiliary dataset is recognized as the target. In Figure 6, we see that only a few images are classified as Keanu Reeves with prob>0.8, while most of the images generated by PreImageGAN are recognized as Keanu Reeves with prob>0.95. This indicates that PreImageGAN can generate images recognized as the target only from the given classification model and auxiliary images that are not recognized as the target.\nSecond, to demonstrate that generated images are NOT a naive cherry-picking of image pieces in the auxiliary images, we generated images of interpolation between two people in Figure 8. As shown in the figures, the faces are smoothly changed from one person to another.\n\n>What is the result when the auxiliary dataset comes from a different kind of images?\n\nTo see how much the quality of the auxiliary dataset affects to the generated images, we tried the C2G attack with using meaningless auxiliary images, that is, uniform noise images.\nAs clearly shown in the results (Figure 7), meaningless auxiliary images cannot give meaningful results.\nFrom these results, we could experimentally confirm that the auxiliary dataset affects the quality of the resulting images significantly.\n", "We thank all the reviewers for giving important comments and discussions to improve our manuscript.\nAccording to the comments, we revised our manuscript as follows:\n\n- We added a new experiment on EMNIST to investigate the behavior of the C2G attack in the mutually exclusive setting (Figure 3 and Figure 4. Figure 3 is new). Additional experiments reveal the situation that the C2G attack succeeds and fails (Table 5 - Table 8).\n\n- To show that the C2G attack is NOT a simple cherry-picking of images similar to the targets from the auxiliary dataset, we added the following two new results:\n (1) We measured the probability with which each image in the auxiliary data is recognized as the targets. The probabilities with which images in the auxiliary data are recognized as the targets are low while images generated by the C2G attack are recognized as the target with very high probability (Figure 6). This indicates that the C2G attack can generate images recognized as the targets using images not recognized as the targets.\n (2) We performed the C2G attack with using random images as the auxiliary dataset. The results show that the C2G attack generates meaningless images when the auxiliary dataset consists of meaningless images (Figure 7)\n\n- We investigated the interpolation ability of the generative model obtained by the PreImageGAN (Figure 8). The results showed that the PreImageGAN has a good interpolation ability.\n\nBy reflecting comments from reviewers with additional experiments, the paper becomes longer.\nIn order to keep the consistency of the manuscript before and after revision, we did not change the structure of the manuscript.\nIf the manuscript is accepted, we would like to shorten the paper (especially in Section 2 and Section 3) so that the entire paper becomes more compact." ]
[ 7, 4, 4, -1, -1, -1, -1 ]
[ 3, 3, 3, -1, -1, -1, -1 ]
[ "iclr_2018_SJOl4DlCZ", "iclr_2018_SJOl4DlCZ", "iclr_2018_SJOl4DlCZ", "r1IcQO8lz", "r1-fz-MJG", "BkQD60b-f", "iclr_2018_SJOl4DlCZ" ]
iclr_2018_ByuI-mW0W
Towards a Testable Notion of Generalization for Generative Adversarial Networks
We consider the question of how to assess generative adversarial networks, in particular with respect to whether or not they generalise beyond memorising the training data. We propose a simple procedure for assessing generative adversarial network performance based on a principled consideration of what the actual goal of generalisation is. Our approach involves using a test set to estimate the Wasserstein distance between the generative distribution produced by our procedure, and the underlying data distribution. We use this procedure to assess the performance of several modern generative adversarial network architectures. We find that this procedure is sensitive to the choice of ground metric on the underlying data space, and suggest a choice of ground metric that substantially improves performance. We finally suggest that attending to the ground metric used in Wasserstein generative adversarial network training may be fruitful, and outline a concrete pathway towards doing so.
rejected-papers
This paper proposes a method for quantitatively evaluating GANs. Better quantitative metrics for GANs are badly needed, as the field is being held back by excessive focus on generated samples. This paper proposes to estimate the Wasserstein distance to the data distribution. A paper which does this well would be a significant contribution, but unfortunately (as the reviewers point out) the experimental validation in this paper seems insufficient. To be convincing, a paper would first need to demonstrate the ability to accurately estimate Wasserstein distance -- not an easy task, but one which receives little mention in this paper. Then it would need to validate that the method can either quantitatively confirm known results about GANs or uncover previously unknown phenomena. As it stands, I don't think this submission is ready for publication in ICLR, but I'd encourage resubmission after more careful experimental validation along the lines suggested by the reviewers.
train
[ "rJk_SwYxz", "B1P-gBclf", "ry88vO5ez", "rk1McFaQf", "r1qCuET7G", "HkY5m4aQM", "HyY42ZLgf", "BJKBqG7xf", "BJJWvcexG", "HkExdq2JG", "rJOyommkf", "Hka0Iobyf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "author", "public", "public" ]
[ "The paper aims to provide a quality measure/test for GANs. The objective is ambitious and deserves attention. As GANs are minimizing some f-divergence measure, the paper remarks that computing a Wasserstein distance between two distributions made of a sum of Diracs is not a degenerate case and is tractable. So they propose to evaluate the current approximation of a distribution learnt by a GAN by using this distance as a baseline performance (in terms of W distance, computed on a held-out dataset). \n\nA first remark is that the paper does not clearly develop the interest of trying to reach a threshold of performance in W distance rather than just trying to minimize the desired f-divergence. More specifically, as they assess the performance in terms of W distance, I would be tempted to just minimize the given criterion. It would be very interesting to have arguments on why being better than the \"Dirac estimation\" in terms of W2 distance would lead to better performance for other tasks (such as other f-divergences or image generation).\n\nAccording to the authors the core claims are:\n\"1/ We suggest a formalisation of the goal of GAN training (/generative modelling more broadly) in terms of divergence minimisation. This leads to a natural, testable notion of generalisation. \"\nFormalization in terms of divergence minimization is not new (see O. Bousquet et al., https://arxiv.org/pdf/1701.02386.pdf ) and I do not feel like this paper actually performs any \"test\" (in a statistical sense). In my opinion the contribution is more about exhibiting a baseline which has to be beaten by any algorithm interested in learning the distribution in terms of W2 distance.\n\n\"2/ We use this test to evaluate the success of GAN algorithms empirically, with the Wasserstein distance as our divergence.\"\nHere the distance does not seem so good, because the performance in generation does not seem to be related only to the W2 distance. Nevertheless, there are interesting observations in the paper about the sensitivity of this metric to the blurring of pictures. I would have enjoyed more digging in this direction. The authors propose to solve this issue by relying on an embedded space where the L2 distance makes more sense for pictures (DenseNet). This is of course very reasonable, but I would expect anyone working on distributions over pictures to work with such embeddings. Here I'm not sure if this paper opens a new way to improve the embedding by making use of unlabelled data. One could think about allowing the weights of the embeddings to vary while the f-divergence is minimized, but this is not done in the submitted work.\n\n \"3/ We find that whether our proposed test matches our intuitive sense of GAN quality depends heavily on the ground metric used for the Wasserstein distance.\"\nThis claim is highly biased by who is giving the \"intuitive sense\". It would be much better evaluated through a Mechanical Turk test.\n\n \"4/ We discuss how to use these insights to improve the design of WGANs more generally.\"\nAs our understanding of GAN dynamics is very coarse, I feel it is not a good thing to claim that \"doing xxx should improve things\" without actually trying it. \n", "The quality of the paper is good, and clarity is mostly good. 
The proposed metric is interesting, but it is hard to judge the significance without more thorough experiments demonstrating that it works in practice.\n\nPros:\n - clear definitions of terms\n - overall outline of paper is good\n - novel metric\n\nCons\n - text is a bit overly wordy, and flow/meaning sometimes get lost. A strict editor would be helpful, because the underlying content is good\n - odd that your definition of generalization in GANs appears immediately preceding the section titled \"Generalisation in GANs\"\n - the paragraph at the end of the \"Generalisation in GANs\" section is confusing. I think this section and the previous (\"The objective of unsupervised learning\") could be combined, removing some repetition, adding some subtitles to improve clarity. This would cut down the text a bit to make space for more experiments.\n - why is your definition of generalization that the test set distance is strictly less than the training set distance? I would think this should be less-than-or-equal\n - there is a sentence that doesn't end at the top of p.3: \"... the original GAN paper showed that [ends here]\"\n - should state in the abstract what your \"notion of generalization\" for GANs is, instead of being vague about it\n - more experiments showing a comparison of the proposed metric to others (e.g. inception score, Mturk assessments of sample quality, etc.) would be necessary to find the metric convincing\n - what is a \"pushforward measure\"? (p.2)\n - the related work section is well-written and interesting, but it's a bit odd to have it at the end. Earlier in the work (e.g. before experiments and discussion) would allow the comparison with MMD to inform the context of the introduction\n - there are some errors in figures that I think were all mentioned by previous commentators.", "This paper proposed a procedure for assessing the performance of GANs by re-considering the key observation, and using the procedure to test and improve the current version of GANs. It demonstrated some interesting results. \n\nIt is not easy to follow the main idea of the paper. The paper just tells different stories section by section. Based on my understanding, the claims are 1) the new formalization of the goal of GAN training and 2) using this test to evaluate the success of GAN algorithms empirically? I suggest that the authors restructure the paper, remove some unrelated content and make clear claims about the contributions in the introduction. \n\nRegarding the experimental part, it does not provide strong support for all the claims. Figure 2 showed very similar plots for all the variants. Meanwhile, the experiments are performed on specific model configurations (like ResNet) and settings. It is difficult to justify whether the conclusions generalize to other cases. Some of the figures do not have legends for the curves, making them hard to compare. \n\nTherefore, I think the current version is not ready to be published. The authors can make it stronger and consider the next venue. ", "Thanks very much for your comments. We have uploaded a revised version of the paper that hopefully addresses many of the points you raise regarding the writing of the paper. Regarding your other points - we use a strict inequality in our condition because, if a GAN were merely *as good* as the training set, then it seems hard to justify all the effort in implementing it. (However, we would expect equality to hold with probability 0, so this is probably an edge case.) 
We also definitely agree that further experimental investigation is necessary, but we think that the implications of our findings about the Wasserstein GAN (namely, that we do not even get close to generalising - see Figure 5) and the significance of the ground metric (which has largely been overlooked) are still of interest to the community.", "Thanks a lot for your review.\n\nOur reason for not requiring D = D_\\Gamma (and indeed for contradicting this in the case of the DCGAN, where D_\\Gamma is the Jensen-Shannon divergence rather than the Wasserstein distance) is that we believe a GAN may still be useful for minimising D even when D_\\Gamma != D, due to the regularising effect that optimising over \\mathcal{Q} entails. This is why we distinguish between the GAN objective (section 3), and our *overall* objective (section 2), which is what we ultimately care about. We definitely don't claim to be the first to talk about divergence minimisation in the context of GAN training, but we think that this distinction (between D and D_\\Gamma) - as well as our direct treatment of the finiteness of our dataset, and our establishment of an intuitive performance baseline for generalisation - are useful contributions.\n\nWe also believe that the unintuitive behaviour of W_L^2 (e.g. figure 2) is largely fixed by changing the ground metric as we describe. In this case, we obtain the much more plausible figure 5, where the value does appear to correspond to image quality. We agree that this is subjective, but think the result is still compelling. We are also not aware of any related Wasserstein GAN work in which the ground metric is defined in such a way, though we welcome any such references.\n\nFinally, we certainly do not intend to claim that changing the ground metric will (or even should) improve GAN training. However, our results do suggest that this largely overlooked component of the WGAN is indeed significant, and our discussion simply aims to promote further consideration of this issue in a slightly more concrete way.\n\nPlease also note that we have uploaded a revised copy of our paper which may clarify things further.", "Thank you very much for your review. We have uploaded a revised version of the paper that significantly improves upon the issues of clarity you have mentioned. We hope this addresses some of the concerns you have raised. Regarding the experimental results: Figure 2 shows the results of applying our methodology to one specific case - the DCGAN trained on CIFAR-10 - so we would not expect different plots to vary significantly. We provided multiple runs to give an idea of how much variance there is in our method, but only one representative run is necessary to convey the result (and we have switched to the latter in the revised version). We agree that further experimental investigation is necessary, but we think that the implications of our findings about the Wasserstein GAN (namely, that we do not even get close to generalising - see Figure 5) and the significance of the ground metric (which has largely been overlooked) are still of interest to the community.", "Hi, thank you for your questions and comments. We answer these in turn:\n\n* Our abstract and introduction lay out the main points of the paper. To give a quick summary here:\n - We suggest a formalisation of the goal of GAN training (/generative modelling more broadly) in terms of divergence minimisation. 
This leads to a natural, testable notion of generalisation.\n - We use this test to evaluate the success of GAN algorithms empirically, with the Wasserstein distance as our divergence.\n - We find that whether our proposed test matches our intuitive sense of GAN quality depends heavily on the ground metric used for the Wasserstein distance.\n - We discuss how to use these insights to improve the design of WGANs more generally.\n\nRegarding the figure (I assume you meant Fig. 2 rather than Fig. 3?): we think that the problem in Fig. 2 is not in (1) itself, but rather in (4), which is our empirical approximation of (1), and which Fig. 2 depicts. We believe that, if the terms in (1) were plotted instead, we would obtain a curve resembling Fig. 5. (Note that the dashed line on Fig. 5 was left out by mistake, but appears significantly below the line plotted, just like in Fig 4.)\n\nNow, as the number of samples goes to infinity, (4) will converge to (1) almost surely (at least when D is a Wasserstein distance), which we use to justify our empirical approximation. However, we think that in Fig. 2 we simply didn't have enough samples for (4) to approximate (1) accurately yet. We also believe that changing the ground metric to the embedded L2 improved the convergence rate, hence allowing us to achieve the more desirable Fig. 5 for the same experiment.\n\nWe will clarify the distinction between (1) and (4) at greater length.\n\n* Our key reasons for choosing the Wasserstein distance are given in section 4. In particular, this choice makes D sensitive to the underlying topology of our data space (through the choice of ground metric). For example, the Wasserstein distance between two Dirac distributions varies continuously according to the distance between their masses, i.e. \n\nW_d(dirac_x, dirac_y) = d(x, y),\n\nas opposed to (say) KL(dirac_x, dirac_y), which is infinite if x != y and 0 otherwise. The Wasserstein distance also metricises weak convergence, allowing its approximation via the Wasserstein distance between empirical distributions (and it is also useful that this is tractable to compute, as you mention). This also means we make no density assumptions about the distributions involved -- all we need to compute this approximation is the ability to sample. (We will emphasise this last point more in the paper.)\n\n* We believe that the threshold (1) serves as a useful test of whether a GAN has generalised beyond its training data. As we say in section 2, if (1) holds, then “using alpha here actually achieved something: in a sense, it has injected additional information about pi into X_hat (perhaps through some sort of smoothing or regularisation), and brought us closer to pi than we already were a priori”.\n\nOtherwise put: we always have the option of choosing alpha(X) to be the empirical distribution of our dataset X_hat. If our aim is to choose a distribution alpha(X) that is as close (as measured by D) to the true distribution pi as possible, and if alpha(X) != X_hat, then (1) had better hold -- otherwise we could have done better by choosing alpha(X) = X_hat.\n\n* This question is incomplete - would you please clarify?\n\n* The proposal in section 5 essentially consists of training a WGAN in the usual way, but with the discriminator given by h(eta(x)), where h is learned but eta is a fixed embedding. We justify in that section why this corresponds to changing the ground metric from the standard L2 to the eta-embedded L2. 
We will clarify this further.\n\n* Yes, we agree this would be useful (particularly highlighting the proposed algorithm) and will modify the paper accordingly.\n\nThanks again for your input.\n", "\n- I'd like the claims of the paper to be more clearly apparent. As an example, it seems that the paper claims that using the (1) criterion is a good idea for testing the generalization quality of a generative model, but fig 3 clearly shows that it is not always the case. \n\n- In section (4), what is special about using a W distance in (1) to perform the GAN quality assertion? Is it just because it can be estimated easily and is meaningful when the two distributions are a sum of Diracs?\n \n- There is a direct link between the cost optimized by a GAN and the minimization of f-divergences (see §2.1 of https://arxiv.org/pdf/1701.02386.pdf for details). Why would it not be enough to estimate this divergence on the test set? In other words, what is the interest of setting a threshold on the divergence rather than just minimizing it?\n\n- Does the idea proposed in §5 consist in solving the optimal transport problem in an embedding space (a ResNet one, for example) rather than in the image space?\n\n- About the form of the paper, I usually think it is best to follow the convention of ML papers by stating core claims as theorems whenever possible and highlighting proposed algorithms. ", "Hi, thanks a lot for your helpful suggestions and comments.\n\nRegarding your first few points:\n\n* We agree that Fig. 2 seems wrong. However, we think that the problem is not in (1) itself, but rather in (4), which is our empirical approximation of (1), and which Fig. 2 depicts. As the number of samples goes to infinity, (4) will converge to (1) almost surely (at least when D is a Wasserstein distance), but we think that, for Fig. 2, we simply didn't have enough samples for (4) to approximate (1) accurately yet. At present, we don't have results that indicate whether (4) will be an accurate approximation of (1) for a particular D given some budget of samples -- what we can do at present is run experiments and see if the results make sense, and then conjecture that this applies to other similar cases also.\n \nWe also mention that we do not think changing the ground metric to the embedded L2 is particularly hacky, since the resulting D produced is still a valid Wasserstein distance (assuming the embedding is injective). In fact, we see the choice of L2 (which is almost always made implicitly) as somewhat arbitrary anyway -- especially for comparing images as in this context -- and believe that drawing attention to this issue is one of the main contributions of our work.\n\n* It is definitely possible that (1) might hold, and yet the generator might still be far from the true distribution (particularly when the training set is very small). However, we believe that (1) is still a useful condition to assess whether a GAN has at least done *something* -- namely, it shows whether or not we have gotten closer to the data distribution than we were a priori (when all we had was our training set). 
We still might have a long way to go to get to the data distribution, but in this case a good choice of D (such as a Wasserstein distance) would indicate this, since D(alpha(X), pi) would be large.\n\n* We completely agree that it is unclear whether GANs actually do minimise the divergences they claim to, especially given limited network capacity, and the method by which they are usually trained (alternating generator and discriminator steps) -- in fact, this work was largely motivated by a desire to test whether or not they do. To some extent, our model of a GAN in section 3 is unnecessary for our argument -- we could remain completely agnostic as to what exactly a GAN is doing, and simply consider it as a black box corresponding to one particular choice of alpha.\n\nHowever, we do see a divergence as necessary for formulating the overall problem that we are trying to solve with GANs (if not the mechanics of how a GAN will actually solve that problem). If our objective is to “learn the data distribution”, then we believe success or failure is naturally measured in terms of some divergence between the generator and the true distribution. We view the question of whether GANs strictly minimise this divergence at each training step as a separate (but related) question.\n\nFor your remaining points:\n\n* Yes, the inequality on page 5 is the wrong way around.\n\n* The dashed line in Figure 5 was left out by mistake. We will fix this - in this case, the dashed line is a significant distance below the blue line (much like the graphs in Figure 4.)\n\n* Yes, Figure 8 corresponds to a DC-GAN. (A similar error occurred in Figures 2 and 5, which should refer to CIFAR-10 rather than MNIST.)\n\nThank you very much also for those typos, and for your other suggestions.", "Hi, thank you very much for your comment.\n\nIt is certainly true that probability divergences have been previously suggested as a way to evaluate generator quality. However, as far as we are aware, this has not been used to formalise the notion of generalisation that we suggest, wherein the generator generalises if it moves closer to the data distribution than the empirical distribution of the training set.\n\nWe also see several more specific differences between our work and the papers you mention. To our understanding, Lopez-Paz & Oquab do not seek to minimise any divergence directly, but rather propose a two-sample test that aims to accept or reject the hypothesis that the generator and true distributions are identical. In particular, once a significance value has been chosen, the output of their test is simply a binary value indicating whether to accept or reject the hypothesis. Their proposed C2ST statistic does give a numerical value 0 <= t <= 1 that should give some information as to the \"closeness\" of the generator and true distributions, but this is not a statistical divergence as in our work.\n\nThe other two papers you mention do evaluate generator quality using an approximate Wasserstein distance between empirical distributions. However, unlike us, they do so by training a GAN discriminator e.g. with the WGAN or WGAN-GP methods. As acknowledged in the original WGAN paper, the accuracy of this approach depends on significant assumptions about the class of functions attainable via a given discriminator architecture and training procedure. 
In contrast, we compute empirical Wasserstein distances exactly by solving a linear program, which requires no such assumptions.\n\nThe use of a neural network embedding is also very different in these papers than in ours. In particular, Danihelka et al. use an embedding only to speed up discriminator training, and not to evaluate generator quality as we do. On the other hand, Lopez-Paz & Oquab use an embedding to weaken the quality of real and generated samples, since otherwise these are easily distinguished with perfect accuracy by their binary classifier. In contrast, we use the embedding to change the ground metric for our Wasserstein distance, in the hope that this speeds up the convergence of the empirical Wasserstein distance to the true Wasserstein distance (and we suggest empirically that this does take place). In the process, we also reveal the significance of the ground metric, which we believe has largely been neglected in this field so far.", "I believe that the idea of evaluating the quality of a generator using a probability distribution divergence (on either pixels or pre-trained features) was first explored in [https://arxiv.org/abs/1705.05263] (neural network Wasserstein distance), [https://arxiv.org/abs/1610.06545] (neural network Jensen-Shannon), and [https://arxiv.org/abs/1708.04692] (both of them).", "Hi, thanks for this interesting and easy to read paper. I have some questions and comments:\n\n*Given your definition of generalization (eq. 1), one would infer that none of the GANs you tested generalize well, since in all figures the solid curves remain above the dashed line, is this correct? There is only one exception to this, that is Fig. 2 in which the solid curves do go below the dashed line, but then the quality of the samples is not good (Fig. 8). So it seems to me that eq. 1 is not a clean test for the performance of a GAN, and you need some extra, ad hoc procedure (as the one you propose with the trained neural network) to make sure it implies that the GAN is working. Would it be possible to have a general recipe that works for any type of data?\n\n*I find very interesting the idea that focusing on eq.2, as Arora et al. 2017 proposed, might lead to degenerate choices of D. It is worth noting however that Arora and Zhang 2017 claim that \"the Arora et al. scenario could involve the trained distribution having small support, and yet all its samples could be completely disjoint from the training samples\". Thus it seems conceivable that a trained GAN could fulfill eq.1 but still be very far from the original distribution. Is this correct?\n\n*Recently, Fedus et al. have also used the Wasserstein distance to assess the quality of GANs. They also state that only viewing GANs as minimizing a divergence is \"overly restrictive\", which might be worth mentioning in the context of your paper.\n\nI have some other minor comments that I thought might help improve the paper:\n\n* The sentence \"For instance, the original GAN paper showed that\" is not complete.\n\n* Is the equation on page 5 correct? It does not seem to be consistent with its explanation: \"blurring X by any amount brings...\".\n\n* Fig. 5 does not have a dashed line (although the line is mentioned in the caption). This dashed line is pretty relevant, since it should show that the proposed ground metric solves the issues seen in Fig. 2.\n\n* Caption in Fig. 
8 mentions I-WGAN when it corresponds to DC-GAN, right?\n\n* Adding legends to figures could make them easier to understand.\n\n* In some cases, it was difficult for me to follow the reasoning because there are variables defined but never used and some concepts referred to with several letters.\n\n* Typos (?): \n\n -We measure closeness measured in terms.\n - Note that on MNIST we modified first duplicated.\n - and alo for learning.\n - it metricizing weak.\n - sqaure-summable.\n\n\nHope all the above is clear and helps improve the paper. I think it is a well-written paper and I found some of the ideas discussed very interesting.\n\nThanks.\n" ]
[ 5, 6, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 3, 3, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_ByuI-mW0W", "iclr_2018_ByuI-mW0W", "iclr_2018_ByuI-mW0W", "B1P-gBclf", "rJk_SwYxz", "ry88vO5ez", "BJKBqG7xf", "iclr_2018_ByuI-mW0W", "Hka0Iobyf", "rJOyommkf", "iclr_2018_ByuI-mW0W", "iclr_2018_ByuI-mW0W" ]
iclr_2018_S1EwLkW0W
Dissecting Adam: The Sign, Magnitude and Variance of Stochastic Gradients
The ADAM optimizer is exceedingly popular in the deep learning community. Often it works very well, sometimes it doesn’t. Why? We interpret ADAM as a combination of two aspects: for each weight, the update direction is determined by the sign of the stochastic gradient, whereas the update magnitude is solely determined by an estimate of its relative variance. We disentangle these two aspects and analyze them in isolation, shedding light on ADAM’s inner workings. Transferring the “variance adaptation” to momentum-SGD gives rise to a novel method, completing the practitioner’s toolbox for problems where ADAM fails.
rejected-papers
This paper presents a theoretical justification for the Adam optimizer in terms of decoupling the signs and magnitudes of the gradients. The overall analysis seems reasonable, though there's been much back-and-forth with the reviewers about particular claims and assumptions. Overall, the contributions don't feel quite substantial enough for an ICLR publication. The interpretation in terms of signs is interesting, but it's very similar to the motivation for RMSprop, of which Adam is an extension. The performance result on diagonally dominant noisy quadratics is interesting, but it feels unsurprising that a diagonal curvature approximation would work well in this setting. I don't recommend acceptance at this point, though these ideas could potentially be developed further into a strong submission.
val
[ "r1mT8HDgf", "S1urbvOgf", "By-KPs9eM", "SJtm7zOZM", "HJbFyfdWz", "HkqnfeuZf", "HJOTq8P-M", "HJOpoC8Zf", "H1c9cRIbM", "rkV11bcAW" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author" ]
[ "Summary: \nThe paper is trying to improve Adam based on variance adaption with momentum. Two algorithms are proposed, M-SSD (Stochastic Sign Descent with Momentum) and M-SVAG (Stochastic Variance-Adapted Gradient with Momentum) to solve finite sum minimization problem. The convergence analysis is provided for SVAG for strongly convex case. Numerical experiments are provided for some standard neural network structures with three common datasets MNIST, CIFAR10 and CIFAR100 compared the performance of M-SSD and M-SVAG to two existing algorithms: SGD momentum and Adam. \n \nComments:\nPage 4, line 5: You should define \\nu clearly.\n\nTheorem 1: In the strongly convex case, assumption E ||g_t ||^2 \\leq G^2 (if G is a constant) is too strong. In this case, G could be equal to infinity. If G is not infinity, you already assume that your algorithm converges, that is the reason why this assumption is not so good for strongly convex. If G is infinity (this is really possible for strongly convex), your proof would get a trouble as eq. (40) is not valid anymore.\n\nAlso, to compute \\gamma_{t,i}, it requires to compute \\nabla f_{t,i}, which is full gradient. By doing this, the computational cost should add the dependence of M, which is very large as you mentioned in the introduction. According to your rate O(1/t), the complexity is worse than that of gradient descent and SGD as well. \n\nAs I understand, there is no theoretical results for M-SSG and M-SVAG, but only the result for SVAG with exact \\eta_i^2 in the strongly convex case. Also, theoretical results are not strong enough. Hence, the experiments need to make more convincingly, at least for some different complicated architecture of deep neural network. As I see, in some dataset, Adam performs better than M-SSD, some another dataset, Adam performs better than M-SVAG. Same situation for M-SGD. My question is that: When should we use M-SSD or M-SVAG? For a given dataset, why should we not use Adam or M-SGD (or other existing algorithms such as Adagrad, RMSprop), but your algorithms? \n\nYou should do more experiments to various dataset and architectures to be more convincing since theoretical results are not strong enough. Would you think to try to use VGG or ResNet to ImageNet?\n\nI like the idea of the paper but I would love if the author(s) could improve more theoretical results to convince people. Otherwise, the results in this paper could not be considered as good enough. At this moment, I think the paper is still not ready for the publication. \n\nMinor comments:\nPage 2, in eq. (6): You should mention that “1” is a vector.\nPage 4, line 4: Q in R^{d} => Q in R^{d x d}\nPage 6, Theorem 1: You should define the finite sum optimization problem with f since you have not used it before.\nPage 6, Theorem 1: You should use another notation for “\\mu”-strongly convex parameter since you have another “\\mu”-momentum parameter in section 3.4\nPage 4, Page 7: Be careful with the case when c = 0 (page 4) and mu = 1 (page 7-8) with dividing by 0. \n", "The paper presents some analysis of the scale-invariance and the particular shape of the learning rate used in Adam. The paper argues that Adam's update is a combination of a sign-update and a variance-based learning rate. Some analysis is provided on these two aspects.\n\nAfter spending a sizeable amount of time with this paper, I am not sure what are its novel contributions and why it should be published in a scientific conference. 
The paper contains so many approximations, simplifications, and assumptions that make any presented result extremely weak.\n\nMore in details, the analysis of the sign is done in the case of quadratic functions of Gaussian variables. The result is mildly interesting, but I fail to see how this would give us a hint of what is happening during the minimization of the non-convex objectives for training deep networks.\nMoreover, the analysis of sign based updates has been already carried over using the Polyak-Łojasiewicz assumption in Karimi et al. ECML-PKDD 2016, that is strictly more general than any quadratic approximation.\n\nThe similarity between the ``optimal'' variance-based learning rate and the one of Adam hinges again on the fact that the noise is Gaussian. As the authors admit, Schaul et al. (2013) already derived similar updates. Also, Theorem 1 recover the usual rate of convergence for strongly convex function: How is this theorem supposed to support the fact that variance-adapted learning rates are a better idea than the usual updates?\nMoreover, the proof of Theorem 1 hinges on the fact that E[||g_t||^2]\\leq G^2. Clearly, this is not possible in general for a strongly convex function. The proof might still go through, but it needs to be fixed using the fact that the updates always decrease the function.\n\nOverall, if we are considering only the convex case, Adam is clearly sub-optimal from all the points of view and better algorithms with stronger guarantees can be used. Indeed, the fact that non-convexity is never discussed is particularly alarming. It is also indicative that none of the work for minimization of finite sums are cited or discussed, e.g. the variance reduced methods immediately come to mind.\n\nRegarding the experiments, the parameters are chosen to have the best test accuracy, mixing the machine learning problem with the optimization one: it is well-known and easy to prove that a worst optimizer can give rise to better test errors. Hence, the empirical results cannot be used to support any of the proposed interpretations nor the new optimization algorithms.\n\nTo summarize, I do not think the contributions of this paper are enough to be published in ICLR.", "Stochastic Sign Descent (SSD) and Stochastic Variance Adapted Gradient (SVAG) are inspired by ADAM and studied in this paper, together with momentum terms. \n\nAnalysis showed that SSD should work better than usual SGD when the Hessian of training loss is highly diagonal dominant. It is intrigued to observe that for MNIST and CIFAR10, SSD with momentum champions with better efficiency than ADAM, SGD and SVAG, while on the other hand, in CIFAR100, momentum-SVAG and SGD beat SSD and ADAM. Does it suggest the Hessians associated with MNIST and CIFAR10 training loss more diagonally dominant? \n\nThere are other adaptive step-sizes such as Barzilai-Borwein (BB) Step Sizes introduced to machine learning by Tan et al. NIPS 2016. Is there any connections between variance adaptation here and BB step size? ", "Dear author(s), \n\nI just provided the expressions to show that it is not true for all \\theta in R^d. I clearly stated for your case. \n\n\"From here, we have f(\\theta) \\leq G/(2*\\mu) + f(\\theta*) = FIXED CONSTANT for all \\theta \\in R^d since \\theta* is a unique solution of strongly convex f and \\mu and G are also fixed (by your assumption). 
This means that you implicitly assume that YOUR ALGORITHM converges in some FIXED neighborhood\"\n\nI undertand that you are assuming for all t for your algorithm. But this means that you are assuming all f(\\theta_1), ... , f(\\theta_t) are ALWAYS smaller than some \"FIXED particular constant\" since you fix G and mu and of course f(\\theta*) is also fixed. There is no guarantee that your updates is always in this fixed region since you are considering \\theta_1, ... , \\theta_t \\in R^d. What if at some time \\tau, there exists a f(\\theta_\\tau) which is greater than that fixed constant? This is always possible since you are not limited your updates. You cannot just simply assume that those updates are always in that fixed region. This implicity implies that you are assuming your algorithm converges in some fixed region (which is not true since they may go out). \n\nFor your references, they should add something else for the assumption. For example, in [1], they projected their updates into some convex set (this should be bounded convex set). For [2], [3], you can see they provide more supports to that assumption, either considering problems in some convex bounded set or adding something else. For [4], E || \\nabla f_i(\\theta_t) ||^2 \\leq M + N* ||\\nabla f(\\theta_t) ||^2, this is somehow still reasonable since you are not limited the RHS, which means they are still allowing ||\\nabla f(\\theta_t)||^2 become arbitrary large. \n\nAnyway, I know some specific papers assuming that assumption with fixed G in the strongly convex case. But like I said before, this is only true for an empty class function. To correct your proof, you should allow G become arbitrary large, which means allowing G -> \\infty. But like I said, in this case, in your proof, c -> 0 and eq (40) is not true anymore since 1/c -> \\infty. \n\nIn other words, you can assume E || g_t ||^2 < \\infty but not E ||g_t||^2 \\leq G < \\infty for some fixed G. \n", "Dear Reviewer 3,\n\nthank you for your positive review!\n\n\"Does it suggest the Hessians associated with MNIST and CIFAR10 training loss more diagonally dominant?\"\n\nOur analysis in Section 2 suggests this (and its interplay with stochasticity) as a possible explanation. It would be an interesting addition to the paper to investigate the diagonal dominance empirically on these two problems. However, since these are very high-dimensional problems, this would require serious additional effort (computationally and implementation). We think that this is beyond the scope of this conference paper.\n\n\"Is there any connections between variance adaptation here and BB step size?\"\n\nThe BB is a scalar step size, whereas we suggest manipulating the per-element update magnitudes, thereby altering the update direction. Also, the BB step size arises from \"geometric\" considerations (the secant equation) in the noise-free case, whereas we aim to control the effect of stochasticity. So, to answer your question, I don't think that there is a relevant connection. My understanding is that the BB step size is agnostic of the search direction, so it could even be combined with our variance-adapted update direction.", "Dear Reviewer 2,\n\nYou are misunderstanding the assumption. We do _not_ assume that E[ || \\nabla g(\\theta) ||^2 ] \\leq G^2 for all \\theta in R^d. That would of course be utterly wrong. 
We assume that E [ || \\nabla g(\\theta_t) ||^2 ] \\leq G^2 for all t, i.e., the expected squared norm of stochastic gradients __at the iterates of the algorithm__ are bounded uniformly! This assumption is made to bound the variance of stochastic gradients (it sometimes referred to as the finite variance condition). The (non-stochastic) gradient norms || \\nabla f(\\theta_t) ||^2 are bounded since the algorithm only explores a bounded region of the search space, as a direct consequence of its non-divergence. (This follows immediately from Eq. (33), which says that we expect a descent in function value at each step; if you insist, I can write it up and add it to the paper.)\n\nAdmittedly, assuming E [ || \\nabla g(\\theta_t) ||^2 ] \\leq G^2 entangles these two things. However, it is a standard and absolutely valid assumption, which keeps the proof concise and readable. Here are some more papers that use it, either in exactly this form, or in slight variations:\n[4] Assumtption (c) on the first page\n[5] Theorem 2.3\n[6] Assumption 3 (a) in Section 2.1\n[7] Assumption 4.3\nIf you insist that this assumption is invalid, you are questioning a good portion of research on stochastic optimization methods.\n\nIrrespective of our disagreement, thanks for checking the proof and engaging in this conversation!\n\n\n[4] Simon Lacoste-Julien, Mark Schmidt, Francis Bach. A simpler approach to obtaining an O(1/t) convergence rate for the projected stochastic subgradient method. 2012.\n[5] Michael Friedlander, Mark Schmidt. Hybrid Deterministic-Stochasic Methods for Data Fitting. 2011.\n[6] Elad Hazan, Satyen Kale. Beyond the Regret Minimization Barrier: Optimal Algorithms for Stochastic Strongly-Convex Optimization. 2014.\n[7] Leon Bottou, Frank Curtis, Jorge Nocedal. Optimization Methods for Large-Scale Machine Learning. 2016.", "Dear author(s), \n\nThank you for your response! \n\n2) \"Theorem 1: In the strongly convex case, assumption E ||g_t ||^2 \\leq G^2 (if G is a constant) is too strong. In this case, G could be equal to infinity. If G is not infinity, you already assume that your algorithm converges, that is the reason why this assumption is not so good for strongly convex. If G is infinity (this is really possible for strongly convex), your proof would get a trouble as eq. (40) is not valid anymore.\"\n\nAs we already have pointed out in our response to Reviewer 1, this is a standard assumption in convergence proofs of stochastic optimization methods. To name just a few examples: [1] (Theorem 1), [2] (Theorem 4.1), [3] (e.g. Theorem 4). It does *not* assume that the algorithm convergences, only that it does not diverge ($|| \\nabla f_t ||^2 < \\infty$) and that the noise is bounded ($\\sum_i \\sigma(\\theta)_i^2 < \\infty$ for all $\\theta$). We can add a clarifying remark to the paper.\n\nRE: It is true that this is a standard assumption in convergence proofs of many previous papers. However, this is also well-known that for strongly convex case, this assumption is not valid. The reason is as follows. Your f(\\theta) = E[f_i(\\theta)] is \\mu-strongly convex and L-smooth. 
By \\mu-strongly convex property of f, we have: for all \\theta \\in R^d \n\n2*\\mu*[f(\\theta) - f(\\theta*)] \\leq || \\nabla f(\\theta) ||^2 = || E[\\nabla f_i(\\theta)] ||^2 \\leq E || \\nabla f_i(\\theta) ||^2 \\leq G (according to your assumption)\n\nFrom here, we have f(\\theta) \\leq G/(2*\\mu) + f(\\theta*) = FIXED CONSTANT for all \\theta \\in R^d since \\theta* is a unique solution of strongly convex f and \\mu and G are also fixed (by your assumption). This means that you implicitly assume that your algorithm converges in some FIXED neighborhood. Although this neighborhood is large, there is no guarantee that your algorithm never goes out of this region. Please notice that assuming || \\nabla f_t ||^2 < \\infty and || \\nabla f_t ||^2 \\leq G < \\infty are TOTALLY different since you are fixing G. If you allow G becomes arbitrary large (G -> \\infty), then your algorithm would be fine. But like I said before, if G -> infinity, your proof would get a trouble since eq. (40) is not valid anymore. \n\nI know that many previous papers were assuming this assumption (when G is fixed) with strong convexity. But this is only true for an empty class function satisfying both conditions. \n\nIn my opinion, your theoretical result is not rigorous enough. ", "Dear Reviewer 2,\n\nthanks for your constructive review. We want to address multiple of the points you have raised.\n\n1) \"The paper is trying to improve Adam based on variance adaption with momentum. Two algorithms are proposed, M-SSD (Stochastic Sign Descent with Momentum) and M-SVAG (Stochastic Variance-Adapted Gradient with Momentum) to solve finite sum minimization problem.\"\n\nWe do not want to improve Adam, but provide insight into its inner workings. We argue that Adam is a combination of two aspects (sign-based and variance adaptation), which can be separated. M-SSD and M-SVAG are the results of this separation. We do not propose M-SSD as a method to be used in practice, but merely as a baseline for comparison (we comment on this in more detail below).\n\n2) \"Theorem 1: In the strongly convex case, assumption E ||g_t ||^2 \\leq G^2 (if G is a constant) is too strong. In this case, G could be equal to infinity. If G is not infinity, you already assume that your algorithm converges, that is the reason why this assumption is not so good for strongly convex. If G is infinity (this is really possible for strongly convex), your proof would get a trouble as eq. (40) is not valid anymore.\"\n\nAs we already have pointed out in our response to Reviewer 1, this is a standard assumption in convergence proofs of stochastic optimization methods. To name just a few examples: [1] (Theorem 1), [2] (Theorem 4.1), [3] (e.g. Theorem 4). It does *not* assume that the algorithm convergences, only that it does not diverge ($|| \\nabla f_t ||^2 < \\infty$) and that the noise is bounded ($\\sum_i \\sigma(\\theta)_i^2 < \\infty$ for all $\\theta$). We can add a clarifying remark to the paper.\n\n3) \"Also, to compute \\gamma_{t,i}, it requires to compute \\nabla f_{t,i}, which is full gradient. By doing this, the computational cost should add the dependence of M, which is very large as you mentioned in the introduction. According to your rate O(1/t), the complexity is worse than that of gradient descent and SGD as well.\"\n\nWe are providing a theoretical result for *idealized* SVAG, where we assume access to the exact relative variance. As we write in the paper, this is meant as a *motivation* for this form of variance adaptation. 
We do not provide a theoretical result for SVAG (or M-SVAG) with *estimated* variances, but evaluate these methods empirically.\n\n4) \"My question is that: When should we use M-SSD or M-SVAG? For a given dataset, why should we not use Adam or M-SGD (or other existing algorithms such as Adagrad, RMSprop), but your algorithms?\"\n\nWhile our analysis in Section 2 provides some insight into when to use sign-based methods over SGD, we don't have a conclusive answer to your question. But couldn't we ask the same question for any other optimization method used in Deep Learning? Why should we use Adam instead of SGD+momentum? The answer is that it has been shown empirically to work better on some (but by no means all) problems. In this work, we show empirically that M-SVAG consistently improves over standard SGD with momentum. It would thus be a logical choice on problems where SGD+momentum outperforms Adam. We specifically avoided to \"sell\" M-SVAG as \"the new optimizer that everybody should use now\". It is an addition to the toolbox; it uses variance-based element-wise step sizes, but is not based on the sign of the gradient. Regarding M-SSD, we want to point out that we do not see this as a method to be used in practice. We included it in the comparison for completeness, since it adopts the sign aspect of Adam but removes the variance adaptation (see Table 1). We will make this more clear in a revised version of the paper.\n\n5) \"You should do more experiments to various dataset and architectures to be more convincing since theoretical results are not strong enough. Would you think to try to use VGG or ResNet to ImageNet?\"\n\nWe agree that more experimental results are always better. However, this is very computationally demanding. With the individual learning rate tuning for each method and 10 replication runs, adding a new data set / architecture amounts to roughly 100 training runs. (There are enough papers out there that make claims about an optimization method based on a single run with a single learning rate, but we do not want to do that.)\n\nWe hope we were able to alleviate some of your concerns. We kindly ask you to reconsider your evaluation of the paper in light of this response.\n\n[1] Shamir and Zhang. Stochastic Gradient Descent for Non-smooth Optimization: Convergence Results and Optimal Averaging Schemes. 2012.\n[2] Kingma and Ba. Adam: A Method for Stochastic Optimization. 2015.\n[3] Karimi et al. Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak- Lojasiewicz Condition, 2016.", "Dear Reviewer 1,\nthank you for your constructive review. We want to address some of the concerns you have raised.\n\n1) \"the analysis of the sign is done in the case of quadratic functions of Gaussian variables. [...] I fail to see how this would give us a hint of what is happening during the minimization of the non-convex objectives for training deep networks.\"\n\nWe disagree with this point. Studying the behavior of optimization algorithms on simple problems is insightful, helps direct future research and, thus, is an important step to gain a deeper understanding of the method on more complex problems. The chosen problem class is simple but non-trivial, allowing to study the interplay of stochasticty and curvature. Our analysis adds insight about the effects of noise and curvature on the sign-based method and SGD.\n\n2) \"the analysis of sign based updates has been already carried over using the PL assumption in Karimi et al. 
[...], that is strictly more general than any quadratic approximation.\"\n\nKarimi et al. [3] provide a convergence proof for a sign-based method under the PL assumption and derive a general worst-case rate. We are asking a different question: In what specific situations (curvature properties and noise) can the sign direction outperform SGD? Also, [3] only consider sign-based methods in the *noise-free* case, whereas we specifically analyze the interplay of noise and malicious curvature. We will include a pointer to [3] in a revised version of the paper, but this does not affect the significance of this work.\n\n3) \"The similarity between the ``optimal'' variance-based learning rate and the one of Adam hinges on the fact that the noise is Gaussian.\"\n\nSince a mini-batch stochastic gradient is the mean of individual per-training-example gradients (iid random variables), the Gaussian assumption is (asymptotically) supported by the CLT. We have done some qualitative experiments on this; stochastic gradients are not perfectly Gaussian, but it is a reasonable approximation at commonly-used mini-batch sizes. We are happy to include these experiments in the supplements.\n\n4) \"Theorem 1 recovers the usual rate of convergence for strongly convex function: How is this theorem supposed to support the fact that variance-adapted learning rates are a better idea than the usual updates?\"\n\nIt does not improve the rate, but it achieves it without an \"external\" decrease of the learning rate (e.g. a global 1/t schedule). Also, SVAG locally leads to a larger expected decrease than SGD (leading to better constants in the O(1/t) rate). This fact is currently hidden away in the proof, but becomes clear when we see that our choice for \\gamma_t minimizes the rhs of the first line of Eq (31).\n\n5) \"Moreover, the proof of Theorem 1 hinges on the fact that E[||g_t||^2]\\leq G^2. Clearly, this is not possible in general for a strongly convex function.\"\n\nThis is a standard assumption in convergence proofs of stochastic optimization methods. To name just a few examples: [1] (Theorem 1), [2] (Theorem 4.1), [3] (e.g. Theorem 4). It assumes that the algorithm does not diverge and that the noise is bounded. We can add a clarifying remark to the paper. As you point out, the non-divergence of the algorithm could be established in a first step.\n\n6) \"It is also indicative that none of the work for minimization of finite sums are cited or discussed, e.g. the variance reduced methods immediately come to mind.\"\n\nVariance-reduced methods are orthogonal to our work. They aim to construct gradient estimates with a lower variance, whereas we assume a gradient estimate as given and try to \"manage\" the variance by adapting per-element step sizes. These two approaches could be combined in future work. We will point out this connection in a revised version of the related work section.\n\n7) \"Regarding the experiments, the parameters are chosen to have the best test accuracy, mixing the machine learning problem with the optimization one\"\n\nWe agree that this is a problem. Ironically, on a different paper, where we treated NN training purely as an optimization problem, we had reviewers complain, asking for a test-set-based comparison. The community hasn't agreed on standard procedures for this. Do you have any recommendations how to please both sides? 
We can include a comparison based purely on train loss in the supplements.\n\n\nWe think that we have addressed your main concerns, especially about the assumption in Theorem 1 and the relationship to other works (variance-reduced methods, Karimi et al. [3]). We would thus like to ask you to reconsider your evaluation in light of this response.\n\n[1] Shamir and Zhang. Stochastic Gradient Descent for Non-smooth Optimization: Convergence Results and Optimal Averaging Schemes. 2012.\n[2] Kingma and Ba. Adam: A Method for Stochastic Optimization. 2015.\n[3] Karimi et al. Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak- Lojasiewicz Condition, 2016.", "Dear Reviewers, we want to point out that there is an unfortunate typo in the line after Eq. (9). The covariance matrix of $g(\\theta)$ should obviously be $Q (\\nu^2 I) Q^T = \\nu^2 QQ$ instead of $\\nu^2 I$. It's purely a typo; the subsequent considerations use the correct covariance matrix (see e.g., Section B.1)." ]
[ 4, 4, 6, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, 3, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_S1EwLkW0W", "iclr_2018_S1EwLkW0W", "iclr_2018_S1EwLkW0W", "HkqnfeuZf", "By-KPs9eM", "HJOTq8P-M", "HJOpoC8Zf", "r1mT8HDgf", "S1urbvOgf", "iclr_2018_S1EwLkW0W" ]
iclr_2018_HJYoqzbC-
A comparison of second-order methods for deep convolutional neural networks
Although many second-order methods have been proposed to train neural networks, most of the reported results were obtained on smaller single-layer fully connected networks, so we still cannot conclude whether they are useful for training deep convolutional networks. In this study, we conduct extensive experiments to answer the question "are second-order methods useful for deep learning?". In our analysis, we find that although second-order methods are currently too slow to be applied in practice, they can reduce the training loss in fewer iterations than SGD. In addition, we have the following interesting findings: (1) When using a large batch size, inexact-Newton methods converge much faster than SGD. Therefore, inexact-Newton methods could be a better choice for distributed training of deep networks. (2) Quasi-Newton methods are competitive with SGD even when using the ReLU activation function (which has no curvature) on residual networks. However, current methods are too sensitive to their parameters and not easy to tune for different settings. Therefore, quasi-Newton methods with more self-adjusting mechanisms might be more useful than SGD in training deeper networks.
rejected-papers
This paper investigates the performance of various second-order optimization methods for training neural networks. Comparing different optimizers is worthwhile, but as this is an empirical paper which doesn't present novel techniques, the bar is very high for the experimental methodology. Unfortunately, I don't think this paper clears the bar: as pointed out by the reviewers, the comparisons miss several important methods, and the experiments miss out on important aspects of the comparison (e.g. wall clock time, generalization). I don't think there is enough of a contribution here to merit publication at ICLR, though it could become a strong submission if the reviewers' points were adequately addressed.
train
[ "rk_w_Rrlz", "Bkc6tWIgf", "rkOSPgqef", "SyLhb3IXz", "SJMkznLQf", "BJbwWnIQz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "A good experimentation of second order methods for training large DNNs in comparison with the popular SGD method has been lacking in the literature. This paper tries to fill that gap. Though there are some good experiments, I feel it could have been much better and more complete.\n\nSeveral candidates for second order methods are considered. However, their discussion and the final choice of the three methods is too rapid. It would have been useful to include an appendix with more details about them.\n\nThe results are mostly negative. The second order methods are much slower (in time) than SGD. The quasi-Newton methods are way too sensitive to hyperparameters. SHG is better in that sense, but it is far too slow. Distributed training is mentioned as an alternative, but that is just a casual statement - communication bottleck can still be a huge issue with large DNN models.\n\nI wish the paper had been bolder in terms of making improvements to one or more of the second order methods in order to make them better. For example, is it possible to come up with ways of choosing hyperparameters associated with the quasi-Newton implementations so as to make them robust with respect to batch size? Second order methods are almost dismissed off for RelU - could things be better with the use of a smooth version of RelU? Also, what about non-differentiability brought in my max pooling?\n\nOne disappointing thing about the paper is the lack of any analysis of the generalization performance associated with the methods, especially with the authors being aware of the works of Keskar et al and Kawaguchi et al. Clearly, the training method is having an effect on generlaization performance, with noise associated with stochastic methods being a great player for leading solutions to flat regions where generalization is better. One obvious question I have is: could it be that, methods such as SHG which have much less noise in them, have poor generalization properties? If so, how do we correct that?\n\nOverall, I like the attempt of exploring second order methods, but it could have come out a lot better.", "The paper conducts an empirical study on 2nd-order algorithms for deep learning, in particular on CNNs to answer the question whether 2nd-order methods are useful for deep learning. More modestly and realistically, the authors compared stochastic Newton method (SHG) and stochastic Quasi- Newton method (SR1, SQN) with stochastic gradient method (SGD). The activation function ReLu is known to be singular at 0, which may lead to poor curvature information, but the authors gave a good numerical comparison between the performances of 2nd-order methods with ReLu and the smooth function, Tanh. The paper presented a reasonably good overview of existing 2nd-order methods, with clear numerical examples and reasonably well written.\n\nThe paper presents several interesting empirical findings, which will no doubt lead to follow up work. However, there are also a few critical issues that may undermine their claims, and that need to be addressed before we can really answer the original question of whether 2nd-order methods are useful for deep learning. \n\n1. There is no complexity comparison, e.g. what is the complexity for a single step of different method.\n\n2. Relatedly, the paper reports the performance over epochs, but it is not clear what \"per epoch\" means for 2nd-order methods. 
In particular, it seems to me that they did not count the inner CG iterations, and it is known that this is crucial for running time and important for quality. If so, then the comparison between 1st-order and 2nd-order methods is not fair, or at least incomplete.\n\n3. The results show that the 2nd-order methods behave similarly to the 1st-order methods, which makes me wonder how many CG iterations they used for the 2nd-order methods in their experiments, and also about the details of the data. In particular, are they looking at parameter/hyperparameter settings for which 2nd-order methods aren't really necessary?\n\n4. In the deep learning setting, the training objective is non-convex, which means the Hessian can be non-PSD. It is not clear how the stochastic inexact-Newton method mentioned in Section 2.1 could work. Details on the implementations of 2nd-order methods are important here.\n\n5. For the 2nd-order methods, the authors used line search to tune the step size. It is not clear whether, in the line search, the authors used the whole training objective or the batch loss. Assuming the batch loss is used, I suspect the training curve will be very noisy (depending on how large the batch size is). But the paper only shows the average training curves, which might be misleading.\n\nHere are other points.\n\n1. There is no figure showing training/test accuracy. Aside from being interested in test error, it is also of interest to see how 2nd-order methods are similar to or different from 1st-order methods on training versus test.\n\n2. Since it is a comparison paper, it only compares three 2nd-order methods with SGD. The choices made were reasonable, but 2nd-order methods are not as trivial to implement as SGD, and it isn't clear whether they have really \"spanned the space\" of second-order methods.\n\n3. In the paper, the settings of LeNet and AlexNet are different from those in the original papers. The authors did not give a reason.\n\n4. The quality of the figures is not good.\n\n5. The optimization settings are not clear, e.g. the learning rate of SGD and the parameters of the backtracking line search. It's hard to reproduce results when these are not described.\n\n", "This paper presents a comparative study of second-order optimization methods for CNNs. Overall, the topic is interesting and would be useful for the community.\n\nHowever, I think there are important issues with the paper:\n\n1) The paper is not very well written. The language is sometimes very informal, and there are many grammatical mistakes and typos. The paper should be carefully proofread.\n\n2) For such a comparative study, the number of algorithms and the number of datasets are quite small. The authors do not mention several important methods such as (not exhaustive):\n\nSchraudolph, N. N., Yu, J., and Günter, S. A stochastic quasi-Newton method for online convex optimization.\nGurbuzbalaban et al. A globally convergent incremental Newton method (and other papers by the same authors).\nMoritz et al. A linearly-convergent stochastic L-BFGS algorithm.\n\n3) The experiment details are not provided. It is not clear what parameters are used and how. \n\n4) There are some vague statements such as \"this scheme does not work\" or \"fixed learning rates are not applicable\". 
For instance, for the latter I cannot see a reason, and the paper does not provide any convincing results.\n\nEven though the paper attempts to address an important point in deep learning, I do not believe that the presented results form evidence for such rather bold statements.", "Reviewer 3 mentioned that certain descriptions of existing methods, such as “this scheme does not work” or “fixed learning rates are not applicable”, are not clear. This relates to the point we explained above, that implementation is not trivial. When implementing some algorithms, we found that our implementation simply could not converge during training, or that the fixed learning rate used in the original paper again led to divergence during training. This signals that most existing second-order methods are not stable on more complicated problems, but this observation has hardly been discussed before. This makes extensive study of 2nd-order methods infeasible.\n", "\nQ: Relatedly, the paper reports the performance over epochs, but it is not clear what \"per epoch\" means for 2nd-order methods.\n\nA: We run 10 CG steps per iteration, but in fact even 1 CG step performs roughly the same as 10 steps. Each Hessian-vector product is about 2 times more expensive than a gradient computation. The key bottleneck in SHG is computing the full gradient (or the 20% gradient in our paper) rather than CG (which uses much smaller subsamples). The huge time difference basically comes from the gradient aggregation step, not the CG or line search stages.\n\n\nQ: In the paper, the settings of LeNet and AlexNet are different from those in the original papers. The authors did not give a reason. \n\nA: We basically follow the same architectures for LeNet and AlexNet, and the same 20-layer residual network implementation. The only difference is that, since we noticed SHG cannot work on networks with ReLU units, we replaced it with tanh. Also, we didn’t use data augmentation, to accelerate the experiment.\n\nQ: The results show that the 2nd-order methods behave similarly to the 1st-order methods, which makes me wonder how many CG iterations they used for the 2nd-order methods in their experiments, and also about the details of the data. In particular, are they looking at parameter/hyperparameter settings for which 2nd-order methods aren't really necessary?\n\n\nA: The CG part is explained above. SHG basically doesn’t have any hyperparameters to tune, as we adopt a fixed number of CG steps and a line search scheme. For the other methods, there might be some parameters. For example, SQN needs the update frequency and the memory length to be chosen. The default values provided in the original paper do not converge on deep neural networks.\n\nQ: In the deep learning setting, the training objective is non-convex, which means the Hessian can be non-PSD. It is not clear how the stochastic inexact-Newton method mentioned in Section 2.1 could work. Details on the implementations of 2nd-order methods are important here. \n\nA: Indeed, the Hessian might not be PSD. That’s why line search is important for inexact-Newton to work. As we decrease the step size, eventually it will either find a descent step, or the step size becomes too small for the update to affect the performance.\n\n\n", "We thank all reviewers for their valuable comments. 
We will reply to common questions first and then reply specifically to reviewers 2 and 3.\n\nAs mentioned by reviewer 2, our choice of second-order methods is reasonable and is based on general categories of second-order methods. However, the implementation of all these second-order methods is not trivial, so it's pretty much impossible for us to reimplement them all unless the authors release their code, which is unfortunately not the case in almost all instances. In addition, most methods require delicate implementation for different models, and this inhibits us from experimenting with all the different methods on various CNN models. So we mainly focus on the methods which claim to be useful for non-convex problems or especially for deep neural networks. Thus the works mentioned by reviewer 3 are not considered, since those works focus on “strongly convex” problems. From the remaining methods, we chose exemplar methods which we could implement and whose correctness we could validate by comparing with the experimental results in their original work. This coverage might not be exhaustive, but the real situation is that even the characteristics of vanilla-version 2nd-order methods on convolutional neural networks are not well understood. We believe the results in our paper provide some new findings which can benefit the later development of 2nd-order methods.\n\n\nNext we explain more details of the implementation. Reviewers raised concerns about the details of our SGD setup. For SGD, we tried learning rates starting from 0.001 and scaled up by an order of magnitude until the training curve no longer converged. The optimal learning rate can be different for different models/datasets. Essentially we did this for every setup. We didn’t repeat this for the second-order methods, as they adopt a line search scheme which does not require a predetermined fixed learning rate.\n\n" ]
[ 5, 6, 3, -1, -1, -1 ]
[ 5, 3, 4, -1, -1, -1 ]
[ "iclr_2018_HJYoqzbC-", "iclr_2018_HJYoqzbC-", "iclr_2018_HJYoqzbC-", "rkOSPgqef", "Bkc6tWIgf", "iclr_2018_HJYoqzbC-" ]
iclr_2018_ByJDAIe0b
Integrating Episodic Memory into a Reinforcement Learning Agent Using Reservoir Sampling
Episodic memory is a psychology term which refers to the ability to recall specific events from the past. We suggest one advantage of this particular type of memory is the ability to easily assign credit to a specific state when remembered information is found to be useful. Inspired by this idea, and the increasing popularity of external memory mechanisms to handle long-term dependencies in deep learning systems, we propose a novel algorithm which uses a reservoir sampling procedure to maintain an external memory consisting of a fixed number of past states. The algorithm allows a deep reinforcement learning agent to learn online to preferentially remember those states which are found to be useful to recall later on. Critically this method allows for efficient online computation of gradient estimates with respect to the write process of the external memory. Thus unlike most prior mechanisms for external memory it is feasible to use in an online reinforcement learning setting.
rejected-papers
This paper presents a memory architecture for RL based on reservoir sampling, and is meant to be an alternative to RNNs. The reviewers consider the idea to be potentially interesting and useful, but have concerns about the mathematical justification. They also point out limitations in the experiments: in particular, use of artificial toy problems, and a lack of strong baselines. I don't think the paper is ready for ICLR publication in its current form.
train
[ "HkoFNQpeG", "SkCS_v6ez", "SyZdbUQZz", "SkOS7RHWG", "Byo6WRS-z", "SJl_lCrZz", "BkAxeRHWG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "This paper considers a new way to incorporate episodic memory with shallow-neural-nets RL using reservoir sampling. The authors propose a reservoir sampling algorithm for drawing samples from the memory. Some theoretical guarantees for the efficiency of reservoir sampling are provided. The whole algorithm is tested on a toy problem with 3 repeats. The comparisons between this episodic approach and recurrent neural net with basic GRU memory show the advantage of proposed algorithm.\n\nThe paper is well written and easy to understand. Typos didn't influence reading. It is a novel setup to consider reservoir sampling for episodic memory. The theory part focuses on effectiveness of drawing samples from the reservoir. Physical meanings of Theorem 1 are not well represented. What are the theoretical advantages of using reservoir sampling? \n\nFour simple, shallow neural nets are built as query, write, value, and policy networks. The proposed architecture is only compared with a recurrent baseline with 10-unit GRU network. It is not clear the better performance comes from reservoir sampling or other differences. Moreover, the hyperparameters are not optimized on different architectures. It is hard to justify the empirically better performance without hyperparameter tuning. The authors mentioned that the experiments are done on a toy problem, only three repeats for each experiment. The technically soundness of this work is weakened by the experiments.\n", "The paper proposes a modified approach to RL, where an additional \"episodic memory\" is kept by the agent. What this means is that the agent has a reservoir of n \"states\" in which states encountered in the past can be stored. There are then of course two main questions to address (i) which states should be stored and how (ii) how to make use of the episodic memory when deciding what action to take. \n\nFor the latter question, the authors propose using a \"query network\" that based on the current state, pulls out one state from the memory according to certain probability distribution. This network has many tunable parameters, but the main point is that the policy then can condition on this state drawn from the memory. Intuitively, one can see why this may be advantageous as one gets some information from the past. (As an aside, the authors of course acknowledge that recurrent neural networks have been used for this purpose with varying degrees of success.)\n\nThe first question, had a quite an interesting and cute answer. There is a (non-negative) importance weight associated with each state and a collection of states has weight that is simply the product of the weights. The authors claim (with some degree of mathematical backing) that sampling a memory of n states where the distribution over the subsets of past states of size n is proportional to the product of the weights is desired. And they give a cute online algorithm for this purpose. However, the weights themselves are given by a network and so weights may change (even for states that have been observed in the past). There is no easy way to fix this and for the purpose of sampling the paper simply treats the weights as immutable. 
\n\nThere is also a toy example created to show that this approach works well compared to the RNN-based approaches.\n\nPositives:\n\n- An interesting new idea that has the potential to be useful in RL\n- An elegant algorithm to solve at least part of the problem properly (the rest of course relies on standard SGD methods to train the various networks)\n\nNegatives:\n- The math is fudged around quite a bit with approximations that are not always justified\n- While overall the writing is clear, in some places I feel it could be improved. I had a very hard time understanding the set-up of the problem in Figure 2. [In general, I also recommend against using figure captions to describe the setup.]\n- The experiments only demonstrate the superiority of this method on an example chosen artificially to work well with this approach.", "This paper proposes an RL architecture using an external memory of previous states, with the purpose of solving non-Markov tasks. The essential problems here are how to identify which states should be stored and how to retrieve memories during action prediction. The proposed architecture can identify the ‘key’ states by assigning higher weights to important states, and applies reservoir sampling to control writes to and reads from memory. The weight-assigning (write) network is optimized to maximize the expected reward. This article focuses on the calculation of the gradient for the write network, and provides some mathematical clues for that.\n\nThis article compares the proposed architecture with an RNN (GRU with 10 hidden units) on a few toy tasks. They demonstrate that the proposed model can work better and that the rationale of the write network can be observed. However, it seems that the hyper-parameters for the RNN haven’t been tuned enough. This matters because the toy task the authors demonstrate is actually quite similar to a copy task, in which a previous state should be remembered. To my knowledge, the copy task can be solved easily for very long sequences by an RNN model. Therefore, empirically, it is really hard to justify whether the proposed method works better. Also, intuitively, this episodic memory method should work better on tasks with long-term dependencies, while this article only shows a task with 10 timesteps. \n\nAccordingly, the experiments demonstrated in this article are not well designed, so the conclusions drawn in this article are not robust enough. ", "Thank you for your feedback; points about the experimental design are discussed in the general comments.\n\nRe:\nThe physical meaning of Theorem 1 is not well represented. What are the theoretical advantages of using reservoir sampling?\n\nThis is partially addressed in the general comment at the top. The main advantage of using reservoir sampling is computational. Naively drawing a weighted sample of n states from the entire history of visited states would require us to store this entire history to draw from. Reservoir sampling allows us to update a sample online (in O(n) time in the size of the memory) while avoiding storing any states not included in the current sample. Because we are able to do this, we can then use a policy-gradient-like approach to train the memory to preferentially store states that lead to greater advantage when recalled.", "Thank you for your feedback. We agree that more work is needed to evaluate the empirical usefulness of our approach; our aim in this work was simply to demonstrably produce an external memory mechanism for which we could estimate gradients for training without back-propagation through time. We have explained our methodology in more detail in the general comment above.\n\nRe:\nThis matters because the toy task the authors demonstrate is actually quite similar to a copy task, in which a previous state should be remembered.\n\nIn the prior work you mention, which trains RNNs on a copy task, it is likely they were working in a sequence prediction framework rather than reinforcement learning. In sequence prediction the agent would process characters one at a time while outputting a probability distribution for each next character. The training would then proceed by increasing the probability of the correct characters as they are revealed, providing a dense signal for improvement. The reinforcement learning framework assumes much less structure in both the problem and the provided feedback than this. The agent is not provided with the correct action at each step as a learning signal; it must instead learn this through trial and error, and rewards for the correct action are not necessarily given immediately but may be delayed, as is the case in our problem (particularly in the case of more than one decision state). To train on this task in a manner similar to sequence prediction we would have to assume significantly more structure than the conventional reinforcement learning framework. These factors make the problem significantly more difficult, although it may be superficially similar.\n\nRe:\nAlso, intuitively, this episodic memory method should work better on tasks with long-term dependencies, while this article only shows a task with 10 time-steps. \n\nAppendix E shows a version of the task with 20 time-steps, which gives an early indication that the method scales quite well with length; we agree that scaling to much longer tasks would be informative to explore in the future.", "Thank you for the detailed feedback; these are some very helpful points.\n\nRe: \nHowever, the weights themselves are given by a network and so the weights may change (even for states that have been observed in the past).\n\nThis is correct; however, it should be noted that this problem does not exist if we were to use batch training, i.e. running full episodes and training on the total resulting loss at the end. The issue is then similar to issues that arise when training recurrent models in an online setting (e.g. the present recurrent state is the result of now-stale past parameters). Though we do not prove it in this work, we conjecture that this will not be an issue in the limit of small step sizes.\n\nRe:\n-The math is fudged around quite a bit with approximations that are not always justified\n\nWhile we tried to be as explicit as possible about where approximations were used, there are many aspects we are working to improve in the future. 
To give one example: how to take advantage of the interaction between the query and write network to better justify training only the write weight of the queried item and not every item presently in memory seems like a fruitful area for further theoretical investigation and improvement.\n\nRe:\n-While overall the writing is clear, in some places I feel it could be improved. I had a very hard time understanding the set-up of the problem in Figure 2. [In general, I also recommend against using figure captions to describe the setup.]\n\nThank you for pointing this out, we agree the task could be explained more clearly and will work on revising this.", "The authors thank the reviewers for their helpful feedback.\n\nOne thing we would like to clarify is that the primary aim of this paper is not to propose a method to compete with recurrent neural networks (or their many variants) for reinforcement learning. Rather we build on the body of work which makes use of neural networks acting on external memories which has been shown to be useful in many cases. However, existing work involving external memory in a reinforcement learning or sequence prediction setting usually uses back-propagation through time for training. This requires computation proportional to the history length which is not acceptable in online reinforcement learning, particularly in the continuing learning setting where the history length grows arbitrarily long. If a model uses only recurrent computation this can be remedied to an extent by truncated back-propagation through time, at the cost of biasing the gradients and limiting the effective horizon of the model. \n\nWhen using models which read from and write to external memories however, much of the apparent benefit comes from the ability to easily store information on much longer time scales. In this case it is less clear whether truncating gradients is a viable option and it makes sense to look into other gradient based methods which could work online with an external memory. The main contribution of this work is to present a particular framework for external memory in RL which, through the use of reservoir sampling, can be trained online (in O(n) time in the memory size) with approximate gradients without the need to perform back-propagation through time. For this reason we did not see it as meaningful or worthwhile at this point to compare with the many works which utilize external memories, or similarly temporal attention mechanisms, trained with back-propagation through time. We will make revisions to our introduction along these lines to make the motivation and intended use-case of our algorithm more clear.\n\nThe environment used in the experiments is intentionally simple to allow straightforward analysis and intuitive understanding of what the agent is doing. This is exemplified by the included plots of write weights and queried values which demonstrate that the individual components of our agent are performing as expected. Similar plots would likely not be possible to obtain in a more complex environment. In general we agree with the reviewers that more robust experiments are needed to demonstrate the practical utility of this approach. We see this work as building a conceptual foundation for future work on more realistic environments.\n\nThe RNN baseline provided was in no way meant to be representative of state of the art on problems like the one we explore here, it was only intended to provide context for our main experimental result. 
The main result is that our external memory mechanism, trained online with no back-propagation through time, can in fact learn to remember the important states (which for the purpose of illustration are known to us ahead of time in this case) in a reinforcement learning problem. With that said, we acknowledge that performing a more thorough hyper-parameter sweep would make for a more meaningful baseline; thus we plan to do this and add it in a revision once complete.\n\nWe will address the remaining concerns in replies to the individual reviews.\n\nWe thank the reviewers again for constructive feedback, and hope that our responses address your concerns." ]
[ 4, 4, 4, -1, -1, -1, -1 ]
[ 4, 3, 3, -1, -1, -1, -1 ]
[ "iclr_2018_ByJDAIe0b", "iclr_2018_ByJDAIe0b", "iclr_2018_ByJDAIe0b", "HkoFNQpeG", "SyZdbUQZz", "SkCS_v6ez", "iclr_2018_ByJDAIe0b" ]
iclr_2018_H13WofbAb
Faster Distributed Synchronous SGD with Weak Synchronization
Distributed training of deep learning is widely conducted with large neural networks and large datasets. Besides asynchronous stochastic gradient descent (SGD), synchronous SGD is a reasonable alternative with better convergence guarantees. However, synchronous SGD suffers from stragglers. To make things worse, although there are some strategies dealing with slow workers, the issue of slow servers is commonly ignored. In this paper, we propose a new parameter server (PS) framework dealing with not only slow workers, but also slow servers by weakening the synchronization criterion. The empirical results show good performance when there are stragglers.
rejected-papers
This paper introduces a method for making synchronous SGD more resistant to failed or slow workers. The idea seems plausible, but as the reviewers point out, the novelty and the experimental validation are somewhat limited. For a contribution such as this, it would be good to see some experiments on a wider range of tasks, and experiments with real rather than simulated workloads. I don't think this work is ready for publication at ICLR.
train
[ "H1SAHAdlf", "Sy8xDgYxG", "HkVrKHYlG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper introduces a parameter server architecture to improve distributed training of CNNs in the presence of stragglers. Specifically, the paper proposes partial pulling where a worker only waits for first b blocks rather than all the blocks of the parameters. This technique is combined with existing methods such as partial pushing (Pan et. al. 2017) for a partial synchronous SGD method. The method is evaluated with Resnet -50 using synthetic delays.\n\nComments for the author:\n\nThe paper is well-written and easy to follow. The problem of synchronization costs being addressed is important but it is unclear how much of this is arising due to large blocks.\n\n1) The partial pushing method (Pan et. al. 2017, section 3.1) shows a clear evidence for the problem using a real workload with a large number of workers. Unfortunately, in your Figure 2, this is not as obvious and not real since it is using simulated delays. More specifically, it is not clear how the workers behave in a real environment and whether you get a clear benefit from using a partial number of blocks as opposed to sending all of them. \n\n2) Did you modify your code to support block-wise sending of gradients (some description of how the framework was modified will be helpful)? The idea is to send partial parameter blocks and when 'b' blocks are received, compute the gradients. I feel that, with such a design, you may actually end up hurting the performance by sending a large number of small packets in the no failure case. For real, large data centers, this may cause a packet storm and subsequent throughput collapse (e.g. the incast problem). You need to show the evidence that you do not hurt the failure-free case for a large number of workers.\n\n3) The evaluation is on fairly small workloads (CIFAR-10). Again, evaluating over Imagenet and demonstrating a clear speedup over existing sync methods will be helpful. Furthermore, a clear description of your “pull” configuration (such as in Figure 1) i.e. how many actual bytes or blocks are sent and what is the threshold will be helpful (beyond a vague 90%).\n\n4) Another concern with partial synchronization methods that I have is that how do you pick these configurations (pull 0.75 etc). These appear to be dataset specific and finding the optimal configuration here requires significant experimentation that takes significantly more time than just running the baseline.\n\nOverall, I feel there is not enough evidence for the problem specifically generating large blocks of gradients and this needs to be clearly shown. To propose a solution for stragglers, evaluation should be done in a datacenter environment with the presence of stragglers (and not small workloads with synthetic delays). Furthermore, the proposed technique despite the simplicity appears as a rather incremental contribution.", "This paper considers distributed synchronous SGD, and proposes to use \"partial pulling\" to alleviate the problem with slow servers.\n\nThe motivation is that the server may be a straggler. The authors suggested one possibility, namely that the server and some workers are located on the same machine and the workers take most of the computational resource. However, if this is the case, a simple solution would be to move the server to a different node. 
A more convincing argument for a slow server should be provided.\n\nThough the authors claimed that they used 3 techniques to accelerate synchronous SGD, only partial pulling is proposed by them (the other 2 are borrowed straightforwardly from existing papers). The mechanism of partial pulling is very simple (just let SGD proceed after pulling a partial parameter block instead of the whole block). As mentioned by the authors in section 1, any relaxation in synchrony brings more noise and higher variance to the updates, and also may cause slow convergence or convergence to a poor solution. However, the authors provided no theoretical study on any of these aspects.\n\nExperimental results are not convincing. Only one relatively small dataset (CIFAR-10) is used. Moreover, the slow server problem is only simulated by artificially adding delays to the server.", "The paper proposes a weak synchronization approach to synchronous SGD with the goal of performing well even with slow parameter servers. This is an improvement on earlier proposals (e.g. Revisiting Synchronous SGD) that allow for slow workers. Empirical results on ResNet50 on CIFAR show promising results for simulations with slow workers and servers, with the proposed approach.\n\nIssues with the paper:\n- Since the paper is focused on empirical results, having results only for ResNet50 on CIFAR is very limiting\n- Empirical results are based on simulations and not real workloads. The choice of simulation constants (% delayed, and delay time) seems somewhat arbitrary as well.\n- For the simulated results, the comparisons seem unfair since the validation error is different. It will be useful to also provide the time to a certain accuracy that all of them reach, e.g. the validation error of 0.1609 (reached by the 3 important cases).\n\nOverall, the paper proposes an interesting improvement to this area of synchronous training; however, it is unable to validate the impact of this proposal." ]
[ 4, 3, 4 ]
[ 5, 4, 5 ]
[ "iclr_2018_H13WofbAb", "iclr_2018_H13WofbAb", "iclr_2018_H13WofbAb" ]
iclr_2018_BJLmN8xRW
Character Level Based Detection of DGA Domain Names
Recently several different deep learning architectures have been proposed that take a string of characters as the raw input signal and automatically derive features for text classification. Little studies are available that compare the effectiveness of these approaches for character based text classification with each other. In this paper we perform such an empirical comparison for the important cybersecurity problem of DGA detection: classifying domain names as either benign vs. produced by malware (i.e., by a Domain Generation Algorithm). Training and evaluating on a dataset with 2M domain names shows that there is surprisingly little difference between various convolutional neural network (CNN) and recurrent neural network (RNN) based architectures in terms of accuracy, prompting a preference for the simpler architectures, since they are faster to train and less prone to overfitting.
rejected-papers
meta score: 4 This is basically an application paper in which several different deep learning approaches are compared on the task of identifying domain names automatically generated by malware. The experiments are well-constructed and reported. However, the work does not have novelty beyond the application domain, and thus is not really suitable for ICLR. Pros - good set of experiments carried out on an important task - clearly written Cons - lacks technical novelty
train
[ "H18dy2IEM", "HJukYvYxf", "SJ7ulqYxz", "S1SG_l5gz", "S1ZaxLkfz", "H1RFyLkMf", "r1xf1LJGz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "I appreciate the effort to include the additional experiments.\n\nThe positive points of this paper remain the correct technical evaluation and the multiple models being evaluated.\nTechnically the work appears to be solid and improved in the revision thanks to the additional experiments.\n\nUnfortunately, given the limited novelty, the paper remains borderline to me.\n\n\n", "\nSUMMARY\n\nThis paper addresses the cybersecurity problem of domain generation algorithm (DGA) detection. A class of malware uses algorithms to automatically generate artificial domain names for various purposes, e.g. to generate large numbers of rendezvous points. DGA detection concerns the (automatic) distinction of actual and artificially generated domain names. In this paper, a basic problem formulation and general solution approach is investigated, namely that of treating the detection as a text classification task and to let domain names arrive to the classifier as strings of characters. A set of five deep learning architectures (both CNNs and RNNs) are compared empirical on the text classification task. A domain name data set with two million instances is used for the experiments. The main conclusion is that the different architectures are almost equally accurate and that this prompts a preference of simpler architectures over more complex architectures, since training time and the likelihood for overfitting can potentially be reduced.\n\nCOMMENTS\n\nThe introduction is well-written, clear, and concise. It describes the studied real-world problem and clarifies the relevance and challenge involved in solving the problem. The introduction provides a clear overview of deep learning architectures that have already been proposed for solving the problem as well as some architectures that could potentially be used. One suggestion for the introduction is that the authors take some of the description of the domain problem and put it into a separate background section to reduce the text the reader has to consume before arriving at the research problem and proposed solution.\n\nThe methods section (Section 2) provides a clear description of each of the five architectures along with brief code listings and details about whether any changes or parameter choices were made for the experiment. In the beginning of the section, it is not clarified why, if a 75 character string is encoded as a 128 byte ASCII sequence, the content has to be stored in a 75 x 128 matrix instead of a vector of size 128. This is clarified later but should perhaps be discussed earlier to allow readers from outside the subarea to grasp the approach.\n\nSection 3 describes the experiment settings, the results, and discusses the learned representations and the possible implications of using either the deep architectures or the “baseline” Random Forest classifier. Perhaps, the authors could elaborate a little bit more on why Random Forests were trained on a completely different set of features than the deep architectures? The data is stated to be randomly divided into training (80%), validation (10%), and testing (10%). How many times is this procedure repeated? (That is, how many experimental runs were averaged or was the experiment run once?).\n\nIn summary, this is an interesting and well-written paper on a timely topic. The main conclusion is intuitive. 
Perhaps the conclusion is even regarded as obvious by some but, in my opinion, the result is important since it was obtained from new, rather extensive experiments on a large data set and through the comparison of several existing (earlier proposed) architectures. Since the main conclusion is that simple models should be prioritised over complex ones (due to that their accuracy is very similar), it would have been interesting to get some brief comments on a simplicity comparison of the candidates at the conclusion.\n\nMINOR COMMENTS\n\nAbstract: “Little studies” -> “Few studies”\n\nTable 1: “approach” -> “approaches”\n\nFigure 1: Use the same y-axis scale for all subplots (if possible) to simplify comparison. Also, try to move Figure 1 so that it appears closer to its inline reference in the text.\n\nSection 3: “based their on popularity” -> “based on their popularity”\n\n", "This paper applies several NN architectures to classify url’s between benign and malware related URLs.\nThe baseline is random forests and feature engineering.\n\nThis is clearly an application paper. \nNo new method is being proposed, only existing methods are applied directly to the task.\n\nI am not familiar with the task at hand so I cannot properly judge the quality/accuracy of the results obtained but it seems ok.\nFor evaluation data was split randomly in 80% train, 10% test and 10% validation. Given the amount of data 2*10**6 samples, this seems sufficient.\nI think the evaluation could be improved by using malware URLs that were obtained during a larger time window.\nSpecifically, it would be nice if train, test and validation URLs would be operated chronologically. I.e. all train url precede the validation and test urls.\nIdeally, the train and test urls would also be different in time. This would enable a better test of the generalization capabilities in what is essentially a continuously changing environment. \n\nThis paper is a very difficult for me to assign a final rating.\nThere is no obvious technical mistake and the paper is written reasonably well.\nThere is however a lack of technical novelty or insight in the models themselves. \nI think that the paper should be submitted to a journal or conference in the application domain where it would be a better fit.\n\nFor this reason, I will give the score marginally below the acceptance threshold now.\nBut if the other reviewers argue that the paper should be accepted I will change my score.\n\n", "This paper proposes to automatically recognize domain names as malicious or benign by deep networks (convnets and RNNs) trained to directly classify the character sequence as such.\n\n\nPros\n\nThe paper addresses an important application of deep networks, comparing the performance of a variety of different types of model architectures.\n\nThe tested networks seem to perform reasonably well on the task.\n\n\nCons\n\nThere is little novelty in the proposed method/models -- the paper is primarily focused on comparing existing models on a new task.\n\nThe descriptions of the different architectures compared are overly verbose -- they are all simple standard convnet / RNN architectures. 
The code specifying the models is also excessive for the main text -- it should be moved to an appendix or even left for a code release.\n\nThe comparisons between various architectures are not very enlightening as they aren’t done in a controlled way -- there are a large number of differences between any pair of models so it’s hard to tell where the performance differences come from. It’s also difficult to compare the learning curves among the different models (Fig 1) as they are in separate plots with differently scaled axes.\n\nThe proposed problem is an explicitly adversarial setting and adversarial examples are a well-known issue with deep networks and other models, but this issue is not addressed or analyzed in the paper. (In fact, the intro claims this is an advantage of not using hand-engineered features for malicious domain detection, seemingly ignoring the literature on adversarial examples for deep nets.) For example, in this case an attacker could start with a legitimate domain name and use black box adversarial attacks (or white box attacks, given access to the model weights) to derive a similar domain name that the models proposed here would classify as benign.\n\n\nWhile this paper addresses an important problem, in its current form the novelty and analysis are limited and the paper has some presentation issues.", "We would like to thank reviewer 3 for the careful review provided to us, and respond to the comments below. \n\nThe random forest classifier used in the experiments was trained on a set of expert defined features that is commonly used in machine learning models for DGA detection. In the revised version of the paper, we have also added results for a multilayer perceptron with 1 hidden layer that is trained on the same expert defined features as the random forest. These features are extracted from the domain names in a preprocessing step, i.e. each domain name is converted into a list of features (numerical values). The drawback of this is that it makes machine learning models for DGA detection vulnerable to being outdated, when hackers learn about the expert defined features and come up with new DGA algorithms to circumvent them. A recent, successful approach in machine learning for DGA detection is therefore to not extract any features at all a-priori and instead give the entire domain name string as a raw input signal to a deep network that learns by itself which features to extract. The features extracted by a deep network are in no way predefined by a human, and it is even famously difficult to interpret them. Furthermore, such deep networks can even outperform systems that incorporate human knowledge, as is clear from our experiments: all deep networks significantly outperform the random forest as well as a multilayer perceptron trained with human defined features. Note that there is no straightforward way at all to train a random forest directly on a raw domain name string. The ability to learn what features to extract from raw input is a defining characteristic of deep learning.\n \nFor the results reported in the paper, we split the dataset once into 80% for training, 10% for validation, and 10% for testing, and kept this exact same split throughout all the experiments. In other words, the numbers reported in the paper result from running each experiment once. They are not averages. We are aware that for small to medium-sized datasets, k-fold cross-validation gives a more reliable estimate of the predictive performance of models. 
In our initial experiments (not included in the paper), we trained and evaluated some of the deep networks using 5-fold cross-validation. We found that the difference across folds was small, which is a known phenomenon when working with large datasets. Given the high computational cost for training an individual deep network (up to 10 hours), we therefore decided to go forward with a single 80-10-10 split and keep this consistent across all experiments, i.e. each model reported on in the paper is trained on exactly the same set of domain names, and tested on exactly the same set of domain names (fully disjoint from the training dataset). For the revised version of the paper, we have created an additional prospective test dataset with DGAs that were observed in real traffic in December 2017. The additional results included in the paper (Table 4) show how the deep neural networks trained on data from July 2017 hold up against such a prospective dataset with “future” data.\n\nOur results were obtained, as pointed out by reviewer 3, by computationally expensive experiments, carefully designed and implemented. They required computational power that can be prohibitive for many other researchers who are interested in using deep learning models for DGA detection. As we previously stated, we believe our results are of practical value for people designing such kind of classifiers in industry and academia. In the revised version of the paper, we have extended our comparison of all the studied models regarding computational performance for training and scoring (as suggested by reviewer 3). \n\nFinally, we thank reviewer 3 for raising the presentation issues. We have addressed them in the revised version (including the creation of a separate “Background” section).", "We would like to thank reviewer 2 for the careful review provided to us, and respond to the comments below. \n \nDeep Neural Networks have recently appeared in the literature on DGA detection. They significantly outperform traditional machine learning methods in accuracy, at the price of increasing the complexity of training the model and requiring larger datasets. While it is exciting to see yet another task where deep learning comes to the scene as a leading technique, the proposed methods were entirely arbitrary. There was no justification for the proposed architectures or the size of the networks and no clue on how much better these models would perform given some extra fine-tuning. \n \nWe aimed at filling this gap in the literature. We collected several models previously used for text classification problems and DGA detection, optimized these models and compared them systematically and rigorously. We ended up showing that all these networks (after several optimizations and fine-tuning) performed equally well despite their vast differences. So, one should pick up the model that can be trained in the least amount of time and requires less data (has fewer parameters to be trained).\n \nWe believe our results are robust and will be of practical value for people designing such kind of classifiers in industry and academia. Given that, in practice, these models would be continuously re-trained to add new families (online learning), optimizing the training time is an important question.\n\nThe remark by reviewer 2 that a chronological split between training, validation, and test data would be “a better test of the generalization capabilities in a continuously changing environment” is a valid one. 
For the revised version of the paper, we have created an additional prospective test dataset with DGAs that were observed in real traffic in December 2017. The additional results included in the paper (Table 4) show how the deep neural networks trained on data from July 2017 hold up against such a prospective dataset with “future” data.", "We would like to thank reviewer 1 for the careful review provided to us, and respond to the comments below. \n\nDeep Neural Networks have recently appeared in the literature on DGA detection. They significantly outperform traditional machine learning methods in accuracy, at the price of increasing the complexity of training the model and requiring larger datasets. While it is exciting to see yet another task where deep learning comes to the scene as a leading technique, the proposed methods were entirely arbitrary. There was no justification for the proposed architectures or the size of the networks and no clue on how much better these models would perform given some extra fine-tuning. \n \nWe aimed at filling this gap in the literature. We collected several models previously used for text classification problems and DGA detection, optimized these models and compared them systematically and rigorously. We ended up showing that all these networks (after several optimizations and fine-tuning) performed equally well despite their vast differences. So, one should pick up the model that can be trained in the least amount of time and requires less data (has fewer parameters to be trained).\n \nWe believe our results are robust and will be of practical value for people designing such kind of classifiers in industry and academia. Given that, in practice, these models would be continuously re-trained to add new families (online learning), optimizing the training time is an important question.\n\nThe issue raised by reviewer 1 about generating adversarial examples is an important one. In the revised version of the paper, we cited the following recent work, and put it in context:\n\nAnderson, Hyrum S., Jonathan Woodbridge, and Bobby Filar. \"DeepDGA: Adversarially-Tuned Domain Generation and Detection.\" In Proceedings of the 2016 ACM Workshop on Artificial Intelligence and Security, pp. 13-21. ACM, 2016.\n\nIn this paper, a character-based generative adversarial network (GAN) is used to augment training sets in order to harden other machine learning models (like a random forest) against yet-to-be-observed DGAs. It is highly unlikely for attackers to use GANs themselves, because DGA algorithms must be light enough to be embedded inside malware code. Furthermore, generating domain names that look like a benign domain is not enough for an effective DGA. Ideally, every domain produced by a DGA must not have been registered yet or must have a low likelihood of being registered already – if a domain produced by a DGA has already been taken, it is useless for the botmaster. Combining all these requirements is essential for a serious study of adversarial generated domains and requires a paper of itself.\n\nThe controlling factor in our experiments is, in a sense, the accuracy (TPR). The Endgame and Invincea models resulted, after very small adaptations, in a very similar accuracy. We adjusted the other architectures (CMU, NYU, MIT) to achieve a similar performance. This was relatively easy to do, with a good amount of trial and error. For all models, at 97%-98% TPR we hit a plateau through which we have not been able to break yet. 
\n\nWe addressed the presentation issues raised by reviewer 1 in the revised version. The scale of the vertical axes in Figure 1 has been made consistent, and the Keras code snippets have been moved to an appendix." ]
[ -1, 7, 5, 4, -1, -1, -1 ]
[ -1, 4, 3, 4, -1, -1, -1 ]
[ "H1RFyLkMf", "iclr_2018_BJLmN8xRW", "iclr_2018_BJLmN8xRW", "iclr_2018_BJLmN8xRW", "HJukYvYxf", "SJ7ulqYxz", "S1SG_l5gz" ]
iclr_2018_rkrWCJWAW
Unbiasing Truncated Backpropagation Through Time
\emph{Truncated Backpropagation Through Time} (truncated BPTT, \cite{jaeger2002tutorial}) is a widespread method for learning recurrent computational graphs. Truncated BPTT keeps the computational benefits of \emph{Backpropagation Through Time} (BPTT \cite{werbos:bptt}) while relieving the need for a complete backtrack through the whole data sequence at every step. However, truncation favors short-term dependencies: the gradient estimate of truncated BPTT is biased, so that it does not benefit from the convergence guarantees from stochastic gradient theory. We introduce \emph{Anticipated Reweighted Truncated Backpropagation} (ARTBP), an algorithm that keeps the computational benefits of truncated BPTT, while providing unbiasedness. ARTBP works by using variable truncation lengths together with carefully chosen compensation factors in the backpropagation equation. We check the viability of ARTBP on two tasks. First, a simple synthetic task where careful balancing of temporal dependencies at different scales is needed: truncated BPTT displays unreliable performance, and in worst case scenarios, divergence, while ARTBP converges reliably. Second, on Penn Treebank character-level language modelling \cite{ptb_proc}, ARTBP slightly outperforms truncated BPTT.
rejected-papers
Meta score: 5 The paper explores an interesting idea, addressing a known bias in truncated BPTT by sampling across different truncated history lengths. Limited theoretical analysis is presented along with PTB language modelling experimentation. The experimental part could be stronger (e.g., trying to improve over the baseline) and could cover more than just PTB. Pros: - interesting idea Cons: - limited analysis - limited experimentation
train
[ "rkvD7QulM", "rJRZM0txz", "r11tZl5xG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes stochastic determination methods for truncation points in backpropagation through time. The previous truncation methods naively determine truncation points with fixed intervals, however, these methods cannot ensure the unbiasedness of gradients. The proposed methods stochastically determine truncation points with importance sampling. This framework ensures the unbiasedness of gradients, which contribute to the reliable convergence. Moreover, this paper investigates how the proposed methods work effectively by carefully tuning the sampling probability. This paper shows two experimental results, in which one is a simple synthetic task and the other is a real-data task. These results validate the effectiveness of the proposed methods.\n\nOverall, I think the constitution and the novelty of this paper are above the bar. The proposed methods are simple extensions of the Truncated BPTT to ensure the unbiasedness. In particular, the investigation on the choice of the sampling probability is very helpful to consider how to enhance benefits of the proposed truncated BPTT methods. However, the written quality of this paper is not good at some points. I think the authors should re-check the manuscript and modify mistakes before the publication.", "This paper introduces a new approximation to backpropagation through time (BPTT) to overcome the computational and memory load that arise when having to learn from long sequences. \nRather than chopping the sequence into subsequences of equal length as in truncated BPTT, the authors suggest to segment the sequence into subsequences of differing lengths according to an a priori specified distribution for the segment length. The gradient estimator is made unbiased through a weighting procedure.\n\nWhilst the proposed method is interesting and relevant, I find the analysis quite superficial and limited.\n\n1) The distribution for the segment length is fully specified a priori. Depending on the problem at hand, different specifications could give rise to very different results. It would be good to suggest an approach for more automatically determine the (parameters of the) distribution.\n\n2) Whilst unbiased, the proposed estimator could have high variance. This point is not investigated in the experimental section.\n\n3) For an experimental paper as this one, it would be good to have many more problems analysed and a deeper analysis than the one given for the language problem. \n\n\n", "This is an interesting paper.\n\nIt is well known that TBPTT is biased because of a fixed truncation length. The authors propose to make it unbiased by sampling different truncation lengths and hence changing the optimization procedure which corresponds to adding noise in the gradient estimates which leads to unbiased gradients. \n\nPros:\n\n- Its a well written and easy to follow paper.\n- If I understand correctly, they are changing the optimization procedure so that the proposed approach is able to find a local minima, which was not possible by using truncated backpropagation through time. \n- Its interesting to see in there PTB results that they get better validation score as compared to truncated BPTT.\n\nCons: \n\n- Though the approach is interesting, the results are quite preliminary. And given the fact there results are worse than the LSTM baseline (1.40 v/s 1.38). The authors note that it might be because of they are applying without sub-sequence shuffling. \n\n- I'm not convinced of the approach yet. 
The authors could do some large scale experiments on datasets like Text8 or speech modelling. \n\n\nFew points\n\n- If I'm correct that the proposed approach indeed changes the optimization procedure, than I'd like to know what the authors think about exposure bias issue. Its a well known[1, 2] that we can't sample from RNN's for more number of steps, than what we used for trained (difference b/w teacher forcing and free running RNN). I'd like to know how does there method perform in such a regime (where you sample for more number of steps than you have trained for)\n\n- Another thing, I'd like to see is the results of this model as compared to truncated backpropagation when you increase the sequence length. For example, Lets say you are doing language modelling on PTB, how the result varies when you change the length of the input sequence. I'd like to see a graph where on X axis is the length of the input sequence and on the Y axis is the bpc score (for PTB) and how does it compare to truncated backpropagation through time. \n\n- PTB dataset has still not very long term dependencies, so I'm curious what the authors think about using there method for something like speech modelling or some large scale experiments.\n\n- I'd expect the proposed approach to be more computationally expensive as compared to Truncated Back-propagation through time. I dont think the authors mentioned this somewhere in the paper. How much time does a single update takes as compared to Truncated Back-propagation through time ?\n\n- Does the proposed approach help in flow of gradients? \n\n- In practice for training RNN's people use gradient clipping which also makes the gradient biased. Can the proposed method be used for training longer sequences? \n\n[1] Scheduled Sampling For Sequence Prediction with RNN's https://arxiv.org/abs/1506.03099\n[2] Professor Forcing https://arxiv.org/abs/1610.09038\n\n\nOverall, Its an interesting paper which requires some more analysis to be published in this conference. I'd be very happy to increase my score if the authors can provide me results what I have asked for. " ]
[ 6, 5, 5 ]
[ 3, 4, 4 ]
[ "iclr_2018_rkrWCJWAW", "iclr_2018_rkrWCJWAW", "iclr_2018_rkrWCJWAW" ]
iclr_2018_rJJzTyWCZ
Large-scale Cloze Test Dataset Designed by Teachers
Cloze test is widely adopted in language exams to evaluate students' language proficiency. In this paper, we propose the first large-scale human-designed cloze test dataset CLOTH in which the questions were used in middle-school and high-school language exams. With the missing blanks carefully created by teachers and candidate choices purposely designed to be confusing, CLOTH requires a deeper language understanding and a wider attention span than previous automatically generated cloze datasets. We show humans outperform dedicated designed baseline models by a significant margin, even when the model is trained on sufficiently large external data. We investigate the source of the performance gap, trace model deficiencies to some distinct properties of CLOTH, and identify the limited ability of comprehending a long-term context to be the key bottleneck. In addition, we find that human-designed data leads to a larger gap between the model's performance and human performance when compared to automatically generated data.
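As a rough illustration of the LM-based baselines discussed in the reviews below, the following hedged Python sketch scores each candidate by the log-probability a language model assigns to the filled-in passage and picks the best one. The add-one-smoothed unigram stand-in model and the `_____` blank marker are assumptions made only to keep the example self-contained; any stronger language model could be substituted.

```python
import math
from collections import Counter

def make_unigram_lm(corpus_tokens):
    """Stand-in scorer: an add-one-smoothed unigram LM. A stronger model
    (e.g. the LSTM language models discussed in the reviews) could replace it."""
    counts = Counter(corpus_tokens)
    total = sum(counts.values())
    vocab = len(counts) + 1
    def log_prob(tokens):
        return sum(math.log((counts[t] + 1) / (total + vocab)) for t in tokens)
    return log_prob

def answer_cloze(passage, options, log_prob, blank="_____"):
    """Pick the option whose filled-in passage the language model scores highest."""
    scored = [(log_prob(passage.replace(blank, opt).lower().split()), opt)
              for opt in options]
    return max(scored)[1]

if __name__ == "__main__":
    lm = make_unigram_lm("the cat sat on the mat the dog sat on the rug".split())
    print(answer_cloze("the cat sat on the _____", ["mat", "quickly"], lm))
```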
rejected-papers
Meta score: 4 The paper presents a manually-constructed cloze-style fill-in-the-missing-word dataset, with baseline language modelling experiments that aim to show that this dataset is difficult for machines relative to human performance. The dataset is interesting, but the experiments are confined to baseline language models, which limits the evaluation. Pros: - interesting dataset - clear and well-written - attempt to move the field forward in an important area Cons: - limited experimentation - language modelling approaches are not an appropriate baseline
train
[ "BynNGX9eG", "SJU2A_Yyf", "BJAUOGclz", "Sye2r5h7z", "HJMLr93Xz", "SJ3RG937f", "H1zBE92QM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "This paper collects a cloze-style fill-in-the-missing-word dataset constructed manually by English teachers to test English proficiency. Experiments are given which are claimed to show that this dataset is difficult for machines relative to human performance. The dataset seems interesting but I find the empirical evaluations unconvincing. The models used to evaluate machine difficulty are basic language models. The problems are multiple choice with at most four choices per question. This allows multiple choice reading comprehension architectures to be used. A window of words around the blank could be used as the \"question\". A simple reading comprehension baseline is to encode the question (a window around the blank) and use the question vector to compute an attention over the passage. One can then compute a question-specific representation of the passage and score each candidate answer by the inner product of the question-specific sentence representation and the vector representation of the candidate answer. See \"A thorough examination of the CNN/Daily Mail reading comprehension task\" by Chen, Bolton and Manning.\n\n", "This paper presents a new dataset for cloze style question-answering. The paper starts with a very valid premise that many of the automatically generated cloze datasets for testing reading comprehension suffer from many shortcomings. The paper collects data from a novel source: reading comprehension data for English exams in China. The authors collect data for middle school and high school exams and clean it to obtain passages and corresponding questions and candidate answers for each question.\n\nThe rest of the paper is about analyzing this data and performance of various models on this dataset. \n\n1) The authors divide the questions into various types based on the type of reasoning needed to answer the question, noticeably short-term reasoning and long-term reasoning. \n2) The authors then show that human performance on this dataset is much higher than the performance of LSTM-based and language model-based baselines; this is in contrast to existing cloze style datasets where neural models achieve close to human performance. \n3) The authors hypothesize that this is partially explained by the fact that neural models do not make use of long-distance information. The authors verify their claim by running human eval where they show annotators only 1 sentence near the empty slot and find that the human performance is basically matched by a language model trained on 1 billion words. This part is very cool.\n4) The authors then hypothesize that human-generated data provides more information. They even train an informativeness prediction network to (re-)weight randomly generated examples which can then be used to train a reading comprehension model.\n\nPros of this work:\n1) This work contributes a nice dataset that addresses a real problem faced by automatically generated datasets.\n2) The breakdown of characteristics of questions is quite nice as well.\n3) The paper is clear, well-written, and is easy to read.\n\nCons:\n1) Overall, some of the claims made by the paper are not fully supported by the experiments. E.g., the paper claims that neural approaches are much worse than humans on CLOTH data -- however, they do not use state-of-the-art neural reading comprehension techniques but only a standard LSTM baseline. It might be the case that the best available neural techniques are still much worse than humans on CLOTH data, but that remains to be seen. 
\n2) Informativeness prediction: The authors claim that the human-generated data provides more information than automatically/randomly generated data by showing that the models trained on the former achieve better performance than the latter on test data generated by humans. The claim here is problematic for two reasons:\n a) The notion of \"informativeness\" is not clearly defined. What does it mean here exactly?\n b) The claim does not seem fully justified by the experiments -- the results could just as well be explained by distributional mismatch without appealing to the amount of information per se. The authors should show comparisons when evaluating on randomly generated data.\n\nOverall, this paper contributes a useful dataset; the analysis can be improved in some places.", "1) this paper introduces a new cloze dataset, \"CLOTH\", which is designed by teachers. The authors claim that this cloze dataset is a more challenging dataset since CLOTH requires a deeper language understanding and wider attention span. I think this dataset is useful for demonstrating the robustness of current RC models. However, I still have the following questions which lead me to reject this paper.\n\n2) I have the questions as follows:\ni) The major flaw of this paper is about the baselines in experiments. I don't think the language model is a robust baseline for this paper. When a wider span is used for selecting answers, the attention-based model should be a reasonable baseline instead of pure LM. \nii) the author also should provide the error rates for each kind of questions (grammar questions or long-term reasoning). \niii) the author claim that this CLOTH dataset requires wider span for getting the correct answer, however, there are only 22.4 of the entire data need long-term reasoning. More importantly, there are 26.5% questions are about grammar. These problems can be easily solved by LM. \niv) I would not consider 16% percent of accuracy is a \"significant margin\" between human and pure LM-based methods. LM-based methods should not be considered as RC model.\nv) what kind accuracy is improved if you use 1-billion corpus trained LM? Are these improvements mostly in grammar? I did not see why larger training corpus for LM could help a lot about reasoning since reasoning is only related to question document.\n", "Thank you for your valuable review!\n1. Please see our comment about the attention baseline in the top thread. \n2. Indeed, the statement about informativeness is not rigorous. With further experiments, we find that the results should be explained by a distributional mismatch instead of informativeness. Specifically, when the training set contains both the human-designed data and automatically generated data, the accuracy on automatically generated data increases if we have a higher proportion of automatically generated data in the training set. Please see Table 7 for more details. We restructured Section 4 and removed the informativeness section. \n3. However, we believe human-designed data is a much better test bed for general cloze test with the following reasons: Human-designed data is different from automatically generated data since it leads to a larger gap between the model’s performance and the human performance. The model's performance and human's performance on the human-designed data are 0.484 and 0.860 respectively, leading to a gap of 0.376. The performance gap on the automatically-generated data is at most 0.185 since the model's performance reaches 0.815. 
Similarly, on Children’s Book Test where the questions are generated, the human performance is between 0.708 to 0.828 on four categories and the language model can nearly achieve human performance on the preposition and verb categories. Hence human-designed data is a good test base because of the larger gap between performances of the model and the human, although the distributional mismatch problem makes it difficult to be the best training source for out-of-domain cloze test such as automatically generated cloze test.\n", "Thank you for your valuable review!\ni) Please see our comment about the attention baseline in the top thread. \nii) The error rates for each kind of questions are added in Figure 1. \niii) The questions in CLOTH dataset require a wider span when compared to automatically generated questions. We added more comparisons about human-designed data and automatically generated data in Section 4.1. \niv) The margin 15.3% results from training on a large external dataset. Specifically, the 1-billion-word dataset is more than 40 times larger than our dataset. However, in practice, it requires too many computational resources to train models on such a large dataset. Hence, it is valuable to compare models that do not use external data. When we do not use external data, the margin between the best model and the human performance is 27.7%, which is still a large margin.\nv) Accuracies on all categories are improved if we train the LM on the 1-billion-word corpus. It shows that a large amount of data is necessary to learn complex language regularities. Please see Figure 1 for more details. \n", "Since all three reviewers suggested employing stronger baselines, specifically attention models, we will first clarify here:\n\n1. We tested machine comprehension models (with attention) when we started working on the task but found that they do not significantly outperform the LSTM baseline. Specifically, the Stanford Attentive Reader achieves an accuracy of 0.487 on CLOTH while an LSTM based method has an accuracy of 0.484. We also implemented position-aware attention model [Zhang et al. 2017] to enable the model to use the distance information. It achieves an accuracy of 0.485. We have updated these results in the paper. \n2. In fact, LSTM based language model is capable of modeling statistical regularities of language. Hill et al. 2015 show language models outperform memory networks and nearly achieves human performance on the verbs or prepositions questions of Children’s Book Test. A concurrent work also shows that language model is very good at modeling complex language regularities when trained on a large amount of data, although they use the LM to extract features instead of directly using it for prediction (Please see ICLR submission “Deep contextualized word representations” ). Specifically, by replacing word vectors with hidden representations of LM, they achieve state-of-the-art results on six language tasks including textual entailment, question answering, semantic role labeling, coreference resolution, named entity extraction, sentiment analysis. Reasoning also benefits from LM features, e.g., the F1 on reading comprehension (SQuAD) improves from 81.1 to 85.3.\n3. We hypothesize the attention models’ unexpected performance is due to the difficulty to learn to comprehend longer contexts when the majority of the training data only requires understanding short-term information. Specifically, there are 23.2% of questions that require a long-term context. 
Note that although the cloze test was previously introduced for evaluating reasoning abilities in the machine comprehension task, CLOTH does NOT focus on reasoning. We mentioned the difference in the related work section: “Our dataset focuses on evaluating language proficiency including knowledge in vocabulary, reasoning and grammar while the focus of reading comprehension is reasoning.” We have updated the paper to emphasize this point in the introduction. \n\nReference:\nZhang, Y., Zhong, V., Chen, D., Angeli, G., & Manning, C. D. (2017). Position-aware Attention and Supervised Data Improve Slot Filling. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (pp. 35-45).\nHill, F., Bordes, A., Chopra, S., & Weston, J. (2015). The Goldilocks Principle: Reading Children's Books with Explicit Memory Representations. arXiv preprint arXiv:1511.02301.\n", "Thank you for your valuable review! Please see our comment about the attention baseline in the top thread. " ]
[ 4, 7, 4, -1, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, -1 ]
[ "iclr_2018_rJJzTyWCZ", "iclr_2018_rJJzTyWCZ", "iclr_2018_rJJzTyWCZ", "SJU2A_Yyf", "BJAUOGclz", "iclr_2018_rJJzTyWCZ", "BynNGX9eG" ]
iclr_2018_Bk346Ok0W
Sensor Transformation Attention Networks
Recent work on encoder-decoder models for sequence-to-sequence mapping has shown that integrating both temporal and spatial attentional mechanisms into neural networks increases the performance of the system substantially. We report on a new modular network architecture that applies an attentional mechanism not on temporal and spatial regions of the input, but on sensor selection for multi-sensor setups. This network called the sensor transformation attention network (STAN) is evaluated in scenarios which include the presence of natural noise or synthetic dynamic noise. We demonstrate how the attentional signal responds dynamically to changing noise levels and sensor-specific noise, leading to reduced word error rates (WERs) on both audio and visual tasks using TIDIGITS and GRID; and also on CHiME-3, a multi-microphone real-world noisy dataset. The improvement grows as more channels are corrupted as demonstrated on the CHiME-3 dataset. Moreover, the proposed STAN architecture naturally introduces a number of advantages including ease of removing sensors from existing architectures, attentional interpretability, and increased robustness to a variety of noise environments.
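A minimal NumPy sketch of the sensor-attention idea follows: each sensor's feature sequence is scored, the scores are softmax-normalized across sensors per frame, and the merged input is the attention-weighted combination. The single linear scorer per sensor is an assumption made for brevity; the paper's attention modules are small networks whose parameters may or may not be shared across sensors (STAN-shared vs. STAN-default).

```python
import numpy as np

def stan_merge(sensor_feats, score_w):
    """Attention-based sensor merging (illustrative sketch).

    sensor_feats: array of shape (num_sensors, time, feat_dim)
    score_w:      one linear scoring vector per sensor, shape (num_sensors, feat_dim)
    Returns the merged sequence (time, feat_dim) and attention weights (num_sensors, time).
    """
    scores = np.einsum("stf,sf->st", sensor_feats, score_w)   # per-sensor, per-frame scores
    scores -= scores.max(axis=0, keepdims=True)               # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=0, keepdims=True)                   # softmax over sensors
    merged = np.einsum("st,stf->tf", attn, sensor_feats)      # weighted combination
    return merged, attn

if __name__ == "__main__":
    feats = np.random.randn(2, 5, 4)        # 2 sensors, 5 frames, 4 features
    w = np.random.randn(2, 4)
    merged, attn = stan_merge(feats, w)
    print(merged.shape, attn.shape)         # (5, 4) (2, 5)
```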
rejected-papers
Meta-score: 4 This paper presents an approach which uses attention across multiple speech or video channels. After some synthetic experiments, the paper presents experiments on CHiME-3, but relies on a rather weak baseline system. Pros: - addresses an interesting task Cons: - does not take account of other recent papers in the area - experimental results are weak - very high error rates in the baseline system - limited novelty
train
[ "BJWWXpKeM", "SyBQzAteM", "BJKqHlqgM", "B1o7pzaQM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author" ]
[ "This paper proposes sensor transformation attention network (STAN), which dynamically select appropriate sequential sensor inputs based on an attention mechanism. \n\nPros:\nOne of the main focuses of this paper is to apply this method to a real task, multichannel speech recognition based on CHiME-3, by providing its reasonable sensor selection function in real data especially to avoid audio data corruptions. This analysis is quite intuitive, and also shows the effectiveness of the proposed method in this practical setup. \n\nCons:\nThe idea seems to be simple and does not have significant originality. Also, the paper does not clearly mention the attention mechanism part, and needs some improvement. \n\nComments:\n-\tThe paper mainly focuses on the soft sensor selection. However, in an array signal processing context (and its application to multichannel speech recognition), it would be better to mention beamforming techniques, where the compensation of the delays of sensors is quite important.\n-\tIn addition, there is a related study of using multichannel speech recognition based on sequence-to-sequence modeling and attention mechanism by Ochiai et al, \"A Unified Architecture for Multichannel End-to-End Speech Recognition with Neural Beamforming,\" IEEE Journal of Selected Topics in Signal Processing. This paper uses the same CHiME-3 database, and also showing a similar analysis of channel selection. It’s better to discuss about this paper as well as a reference.\n-\tSection 2: better to explain about how to obtain attention scores z in more details.\n-\tFigure 3, experiments of Double audio/video clean conditions: I cannot understand why they are improved from single audio/video clean conditions. Need some explanations.\n-\tSection 3.1: 39-dimensional Mel-frequency cepstral coefficients (MFCCs) -> 13 -dimensional Mel-frequency cepstral coefficients (MFCCs) with 1st and 2nd order delta features.\n-\tSection 3.2 Dataset “As for TIDIGIT”: “As for GRID”(?)\n-\tSection 4 Models “The parameters of the attention modules are either shared across sensors (STAN-shared) or not shared across sensors (STAN- default).”: It’s better to explain this part in more details, possibly with some equations. It is hard to understand the difference.\n\n", "The manuscript introduces the sensor transformation attention networks, a generic neural architecture able to learn the attention that must be payed to different input channels (sensors) depending on the relative quality of each sensor with respect to the others. Speech recognition experiments on synthetic noise on audio and video, as well as real data are shown.\n\nFirst of all, I was surprised on the short length of the discussion on the state-of-the-art. Attention models are well known and methods to merge information from multiple sensors also (very easily, Multiple Kernel Learning, but many others).\n\nSecond, from a purely methodological point of view, STANs boil down to learn the optimal linear combination of the input sensors. There is nothing wrong about this, but perhaps other more complex (non-linear) models to combine data could lead to more robust learning.\n\nThird, the experiments with synthetic noise are significant to a reduced extend. Indeed, adding Gaussian noise to a replicated input is too artificial to be meaningful. The network is basically learning to discard the sensor when the local standard deviation is high. 
But this is not the kind of noise found in many applications, and this is clearly shown in the performances on real data (not always improving w.r.t state of the art). The interesting part of these experiments is that the noise is not stationary, and this is quite characteristic of real-world applications. Also, to be fair when discussion the results, the authors should say that simple concatenation outperforms the single sensor paradigm.\n\nI am also surprised about the baseline choice. The authors propose a way to merge/discard sensors, and there is no comparison with other ways of doing it (apart from the trivial sensor concatenation). It is difficult to understand the benefit of this technique if no other baseline is benchmarked. This mitigates the impact of the manuscript.\n\nI am not sure that the discussion in page corresponds to the actual number on Table 3, I did not understand what the authors wrote.", "Summary: \n\nThe authors consider the use of attention for sensor, or channel, selection. The idea is tested on several speech recognition datasets, including TIDIGITS and CHiME3, where the attention is over audio channels, and GRID, where the attention is over video channels. Results on TIDIGITS and GRID show a clear benefit of attention (called STAN here) over concatenation of features. The results on CHiME3 show gain over the CHiME3 baseline in channel-corrupted data.\n\nReview:\n\nThe paper reads well, but as a standard application of attention lacks novelty. The authors mention that related work is generalized but fail to differentiate their work relative to even the cited references (Kim & Lane, 2016; Hori et al., 2017). Furthermore, while their approach is sold as a general sensor fusion technique, most of their experimentation is on microphone arrays with attention directly over magnitude-based input features, which cannot utilize the most important feature for signal separation using microphone arrays---signal phase. Their results on CHiME3 are terrible: the baseline CHiME3 system is very weak, and their system is only slightly better! The winning system has a WER of only 5.8%(vs. 33.4% for the baseline system), while more than half of the submissions to the challenge were able to cut the WER of the baseline system in half or better! http://spandh.dcs.shef.ac.uk/chime_challenge/chime2015/results.html. Their results wrt channel corruption on CHiME3, on the other hand, are reasonable, because the model matches the problem being addressed…\n\nOverall Assessment: \n\nIn summary, the paper lacks novelty wrt technique, and as an “application-of-attention” paper fails to be even close to competitive with the state-of-the-art approaches on the problems being addressed. As such, I recommend that the paper be rejected.\n\n\nAdditional comments: \n\n-\tThe experiments in general lack sufficient detail: Were the attention masks trained supervised or unsupervised? Were the baselines with concatenated features optimized independently? Why is there no multi-channel baseline for the GRID results? \n-\tIssue with noise bursts plot (Input 1+2 attention does not sum to 1)\n-\tA concatenation based model can handle a variable #inputs: it just needs to be trained/normalized properly during test (i.e. like dropout)…\n", "We thank the reviewers for their time in reviewing the submission. Our omission on the specific comparisons of our work to other systems such as Kim et al, and Hori et al, was unintended and will be corrected in the updated manuscript. 
\nWe realize that we are unable to change the document significantly at this time and will take the reviewers comments into consideration when we write our next revision.\n" ]
[ 7, 4, 3, -1 ]
[ 4, 4, 4, -1 ]
[ "iclr_2018_Bk346Ok0W", "iclr_2018_Bk346Ok0W", "iclr_2018_Bk346Ok0W", "iclr_2018_Bk346Ok0W" ]
iclr_2018_SkNQeiRpb
Training Deep AutoEncoders for Recommender Systems
This paper proposes a new model for the rating prediction task in recommender systems which significantly outperforms previous state-of-the-art models on a time-split Netflix data set. Our model is based on a deep autoencoder with 6 layers and is trained end-to-end without any layer-wise pre-training. We empirically demonstrate that: a) deep autoencoder models generalize much better than the shallow ones, b) non-linear activation functions with negative parts are crucial for training deep models, and c) heavy use of regularization techniques such as dropout is necessary to prevent over-fitting. We also propose a new training algorithm based on iterative output re-feeding to overcome natural sparseness of collaborative filtering. The new algorithm significantly speeds up training and improves model performance. Our code is publicly available.
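The dense re-feeding step can be sketched in a few lines of PyTorch. This is a hedged illustration assuming a masked-MSE objective and a generic `model`/`optimizer` pair; it is not the authors' released code.

```python
import torch

def dense_refeeding_step(model, optimizer, x, mask, refeed_steps=1):
    """One training step with iterative output re-feeding (illustrative sketch).

    x:    batch of user rating vectors, zeros at unrated items
    mask: 1.0 where a rating is observed, 0.0 elsewhere
    After the usual masked update, the dense reconstruction f(x) is treated
    as a new, fully observed example and fed through the model again.
    """
    def masked_mse(pred, target, m):
        return ((pred - target) ** 2 * m).sum() / m.sum().clamp(min=1.0)

    # 1) ordinary sparse update on observed ratings
    pred = model(x)
    loss = masked_mse(pred, x, mask)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # 2) re-feed the dense output as a fully observed example
    dense = pred.detach()
    for _ in range(refeed_steps):
        pred2 = model(dense)
        loss2 = ((pred2 - dense) ** 2).mean()
        optimizer.zero_grad()
        loss2.backward()
        optimizer.step()
        dense = pred2.detach()
    return loss.item()

# usage sketch: dense_refeeding_step(autoencoder, opt, ratings, (ratings > 0).float())
```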
rejected-papers
meta score: 4 The paper applies a deep autoencoder to rating prediction, with experiments on Netflix. Pros - Proposed dense re-feeding approach appears novel - Good experimental results Cons - limited experimentation - main novelty (dense re-feeding) is not well linked to existing data imputation approaches - novel contribution is otherwise quite limited
train
[ "S17oyqIgM", "r1eVIIqgM", "rk5q9Vixz", "BJoaxnnzz", "Sk3ikh3fG", "BkfXAonGG", "HkcDh0A1M" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "public" ]
[ "This paper presents a deep autoencoder model for rating prediction. The autoencoder takes the user’s rating over all the items as input and tries to predict the observed ratings in the output with mean squared error. A few techniques are applied to make the training feasible without layer-wise pre-training: 1) SELU activation. 2) dropout with high probability. 3) dense output re-feeding. On the Netflix prize dataset, the proposed deep autoencoder outperforms other state-of-the-art approaches. \n\nOverall, the paper is easy to follow. However, I have three major concerns regarding the paper that makes me decide to reject it.\n\n1. Lack of novelty. The paper is essentially a deeper version of the U-AutoRec (Sedhain et al. 2015) with a few recently emerged innovations in deep learning. The dense output re-feeding is not something particularly novel, it is more or less a data-imputation procedure with expectation-maximization — in fact if the authors intend to seek explanation for this output re-feeding technique, EM might be one of the interpretations. And similar technique (more theoretically grounded) has been applied in image imputation for variational autoencoder (Rezende et al. 2014, Stochastic Backpropagation and Approximate Inference in Deep Generative Models). \n\n2. The experimental setup is also worth questioning. Using a time-split dataset is of course more challenging. However, the underlying assumption of autoencoders (or more generally, latent factor models like matrix factorization) is the all the ratings are exchangeable (conditionally independent given the latent representations), i.e., autoencoders/MF are not capable of inferring the temporal information from the data, Thus it is not even a head-to-head comparison with a temporal model (e.g., RNN in Wu et al. 2017). Of course you can still apply a static autoencoder to time-split data, but what ends up happening is the model will use its capacity to try to explain the temporal signal in the data — a deeper model certainly has more extra capacity to do so. I would suggest the authors comparing on a non-time-split dataset with other static models, like I(U)-AutoRec/MF/CF-NADE (Zheng et al. 2016)/etc. \n\n3. Training deep models on recommender systems data is impressive. However, I would like to suggest we, as a community, start to step away from the task of rating predictions as much as we can, especially in more machine-learning-oriented venues (NIPS, ICML, ICLR, etc.) where the reviewers might be less aware of the shift in recommender systems research. (The task of rating predictions was made popular mostly due to the Netflix prize, yet even Netflix itself has already moved on from ratings.) Training (and evaluating) with RMSE on the observed ratings assumes all the missing ratings are missing at random, which is clearly far from realistic for recommender systems (see Marlin et al. 2007, Collaborative Filtering and the Missing at Random Assumption). In fact, understanding why some of the ratings are missing presents a unique challenge for the recommender systems. See, e.g., Steck 2010, Training and testing of recommender systems on data missing not at random, Liang et al. 2016, Modeling user exposure in recommendation, Schnabel et al. 2016, Recommendations as Treatments: Debiasing Learning and Evaluation. A model with good RMSE in a lot of cases does not translate to good recommendations (Cremonesi et al. 2010, Performance of recommender algorithms on top-n recommendation tasks\n). 
As a first step, at least start to use all the 0’s in the form of implicit feedback and focus on ranking-based metrics other than RMSE. ", "This paper proposed to use deep AE to do rating prediction tasks in recommender systems.\nSome of the conclusions of the paper, e.g. deep models perform bettern than shallow ones, the non-linear activation\nfunction is important, dropout is necessary to prevent overfitting, are well known, and hence is of less novelty.\nThe proposed re-feeding algorithm to overcome natural sparseness of CF is interesting, however, I don't think it is enough to support being accepted by ICLR. \nSome reference about rating prediction are missing, such as \"A neural autoregressive approach to collaborative filtering, ICML2016\". And it would be better to show the performance of the model on implicit rating data, since it is more desirable in practice, since many industry applications have only implicit rating (e.g. whether the user watches the movie or not.).", "In this paper the authors present a model for more accurate Netflix recommendations (rating predictions, RMSE). In particular, the authors demonstrate that a deep autoencoder, carefully tuned, can out-perform more complex RNN-based models that have temporal information. The authors examine how different non-linear activations, model size, dropout, and a novel technique called \"dense re-feeding\" can together improve DNN-based collaborative filtering.\n\nPros:\n- The accuracy results are impressive and a useful datapoint in how to build a DNN-based recommender. \n- The dense re-feeding technique seems to be novel with incremental (but meaningful) benefits. \n\nCons:\n- Experimental results on only one dataset. \n- Difficult to know if the results are generalizable.\n\n", "Please see below our responses to your concerns.\n\nConcern 1. Yes, the model is a deeper version of U-AutoRec. But simply stacking more layers does not always work and we show that in this case the following changes were needed: a) new activation functions and dropouts, and b) new optimization scheme (dense re-feeding). Most importantly, we show how *each* change impacts performance and enables training of deeper and deeper models - which we think is of interest to the ICLR audience.\nThe EM-based point of view on dense re-feeding is an interesting angle, thanks for pointing this out.\n\nConcern 2. We strongly disagree that experimental setup is questionable. \n\nAs you commented yourself, \"Using a time-split dataset is of course more challenging\" - this is true. However, in practice, we are interested in predicting *future* ratings/interests, given the past ones. Therefore, time-based benchmark makes much more sense then random-based one.\n\nWe also made sure (by corresponding with Wu et al. 2017 authors) that our dataset splits match theirs exactly.\nYes, we agree that our model does not model explicitly temporal signal (unlike RRN from Wu et al.) which makes it even more interesting that our model beats RRN which is RNN-based and explicitly takes time as a signal. \nPerhaps, you are right that \"model will use its capacity to try to explain the temporal signal in the data\", however we do not see how this makes experimental setup questionable.\n\nConcern 3. \n\nWe agree that, in practice, for the *production* recommender system, ratings prediction task is not particularly valuable due to many reasons including the ones that you've cited above. 
In fact, from our experience, production recommender systems are more similar to search engines in the sense that they take myriads of signals into the account with ratings data being just one of them.\n\nNevertheless, often a simple test (Netflix) and metric (RMSE) which could tell whether algo 1 models rating data potentially better than algo 2 is desirable. \n\nIn particular, we think that for ICLR audience it would be interesting to see how classical well known machine-learning techniques such as matrix factorization can be replaced by a deep learning based model without going too deeply into the specifics of the domain of application area (that would be more appropriate for a RecSys paper).", "Stacking many layers together does not always work and often well-known techniques (more recent activation functions, dropouts, etc.) need to be thoughtfully combined to successfully train deeper models. This is why the paper reads as a technical report - we evaluated the effect of every \"trick\" we used. We think this, as well as new optimization scheme (dense re-feeding) may be of interest to the ICLR audience.\n\nThe reference to “A neural autoregressive approach to collaborative filtering, ICML2016\" is indeed relevant to our work and we added it into the “Related work” section.\n", "Similarly to recent approaches, we performed several different time-splits of Neflix data to see if the results will generalize. \nNetflix data was chosen, because it is the largest publicly available ratings data which would also help to show how scalable our approach is (takes few hours to train on single GPU).\n\nWe do agree that more experiments would make model and results stronger and we are currently working on getting access to a bigger datasets (which is not public unfortunately).\n", "It can be easily verified by using open source toolkit, such as librec, MyMediaLite, etc." ]
[ 4, 3, 6, -1, -1, -1, -1 ]
[ 5, 4, 4, -1, -1, -1, -1 ]
[ "iclr_2018_SkNQeiRpb", "iclr_2018_SkNQeiRpb", "iclr_2018_SkNQeiRpb", "S17oyqIgM", "r1eVIIqgM", "rk5q9Vixz", "iclr_2018_SkNQeiRpb" ]
iclr_2018_H1DGha1CZ
Enhancing Batch Normalized Convolutional Networks using Displaced Rectifier Linear Units: A Systematic Comparative Study
In this paper, we turn our attention to the interworking between the activation functions and the batch normalization, which is a virtually mandatory technique to train deep networks currently. We propose the activation function Displaced Rectifier Linear Unit (DReLU) by conjecturing that extending the identity function of ReLU to the third quadrant enhances compatibility with batch normalization. Moreover, we used statistical tests to compare the impact of using distinct activation functions (ReLU, LReLU, PReLU, ELU, and DReLU) on the learning speed and test accuracy performance of standardized VGG and Residual Networks state-of-the-art models. These convolutional neural networks were trained on CIFAR-100 and CIFAR-10, the most commonly used deep learning computer vision datasets. The results showed DReLU sped up learning in all models and datasets. Besides, statistically significant performance assessments (p<0.05) showed DReLU enhanced the test accuracy presented by ReLU in all scenarios. Furthermore, DReLU showed better test accuracy than any other tested activation function in all experiments with one exception, in which case it presented the second best performance. Therefore, this work demonstrates that it is possible to increase performance by replacing ReLU with an enhanced activation function.
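Since the reviews give the closed form DReLU(x) = max{-δ, x}, a small NumPy sketch can show both the function and the batch-normalization argument. The Monte Carlo comparison below (standard-normal pre-activations, δ = 0.05 as the reported CIFAR-10 default, plus a larger δ to make the effect visible) is only an illustration of the claimed effect on post-activation statistics.

```python
import numpy as np

def drelu(x, delta=0.05):
    """Displaced ReLU: identity extended into the third quadrant, clipped at
    -delta (0.05 is the default reported for CIFAR-10)."""
    return np.maximum(x, -delta)

if __name__ == "__main__":
    # Batch-normalized pre-activations are roughly standard normal; comparing
    # output statistics illustrates the claim that DReLU keeps the
    # post-activation mean closer to 0 and variance closer to 1 than ReLU.
    z = np.random.randn(1_000_000)
    for name, out in [("ReLU", np.maximum(z, 0.0)),
                      ("DReLU d=0.05", drelu(z, 0.05)),
                      ("DReLU d=0.50", drelu(z, 0.50))]:
        print(f"{name:13s} mean={out.mean():+.3f}  std={out.std():.3f}")
```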
rejected-papers
meta score: 4 This paper proposes an activation function, called displaced ReLU (DReLU), to improve the performance of CNNs that use batch normalization. Pros - good set of experiments using CIFAR, with good results - attempt to explain the approach using expectations Cons - theoretical explanations are not so convincing - limited novelty - CIFAR is a relatively limited set of experiments - does not compare with using BN after ReLU, which is now well studied and seems to address the motivation of this paper (and thus calls the conclusions into question)
train
[ "ByyhLzKgM", "S1WbYz5gM", "SJwceNsxG", "ryV4TxjzM", "HJJL5Lz-z", "BkOQHGmWf", "B1juMp1WM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "The key argument authors present against ReLU+BN is the fact that using ReLU after BN skews the values resulting in non-normalized activations. Although the BN paper suggests using BN before non-linearity many articles have been using BN after non-linearity which then gives normalized activations (https://github.com/ducha-aiki/caffenet-benchmark/blob/master/batchnorm.md) and also better overall performance. The approach of using BN after non-linearity is termed \"standardization layer\" (https://arxiv.org/pdf/1301.4083.pdf). I encourage the authors to validate their claims against simple approach of using BN after non-linearity. ", "This paper proposes an activation function, called displaced ReLU (DReLU), to improve the performance of CNNs that use batch normalization. Compared to ReLU, DReLU cut the identity function at a negative value rather than the zero. As a result, the activations outputted by DReLU can have a mean closer to 0 and a variance closer to 1 than the standard ReLU. The DReLU is supposed to remedy the problem of covariate shift better. \n\nThe presentation of the paper is clear. The proposed method shows encouraging results in a controlled setting (i.e., all other units, like dropout, are removed). Statistical tests are performed for many of the experimental results, which is solid.\n\nHowever, I have some concerns. \n1) As DReLU(x) = max{-\\delta, x}, what is the optimal strategy to determine \\delta? If it is done by hyperparameter tuning with cross-validation, the training cost may be too high.\n2) I believe the control experiments are encouraging, but I do not agree that other techniques like Dropouts are not useful. Using DReLU to improve the state-of-art neural network in an uncontrolled setting is important. The arguments for skipping this experiments are respectful, but not convincing enough. \n3) Batch normalization is popular, especially for the convolutional neural networks. However, its application is not universal, which can limit the use of the proposed DReLU. It is a minor concern, anyway. \n\n\n", "This paper describes DReLU, a shift version of ReLU. DReLU shifts ReLU from (0, 0) to (-\\sigma, -\\sigma). The author runs a few CIFAR-10/100 experiments with DReLU.\n\nComments:\n\n1. Using expectation to explain why DReLU works well is not sufficient and convincing. Although DReLU’s expectation is smaller than expectation of ReLU, but it doesn’t explain why DReLU is better than very leaky ReLU, ELU etc.\n2. CIFAR-10/100 is a saturated dataset and it is not convincing DReLU will perform will on complex task, such as ImageNet, object detection, etc.\n3. In all experiments, ELU/LReLU are worse than ReLU, which is suspicious. I personally have tried ELU/LReLU/RReLU on Inception V3 with Batch Norm, and all are better than ReLU. \n\nOverall, I don’t think this paper meet ICLR’s novelty standard, although the authors present some good numbers, but they are not convincing. \n\n\n", "Thank you for the response.\n\nOne additional point: If the improvement can be more significant and on a larger dataset, the work can be much more exciting. The \"theory\" is straightforward, so the experimental results are what finally matters. I like the idea of statistic test, but still, the improvement of the mean value is limited. The conclusion can be stronger if every model is trained for a sufficiently long time (i.e., converged no matter how many epochs) and the proposed model can beat the state-of-the-art by a significant margin. 
(This is what I referred by an uncontrolled setting)", "The primary aim of this paper is to propose an activation function to improve the performance of mainstream state-of-the-art convolutional neural networks.\n\nTherefore, the experiments were designed to use the batch normalization followed by ReLU (BN+ReLU) since we believe this is currently clearly the mainstream approach used by most recent proposed state-of-the-art models.\n\nFirstly, the original batch normalization paper, as the reviewer acknowledges, proposed BN+ReLU instead of ReLU+BN. The authors made their arguments why not to use BN+ReLU.\n\nFurther, the mentioned approach was followed by the ILSVRC winner ResNet in both the original as much as in the pre-activation variant. To the best of our knowledge, all Generative Adversarial Networks (GANs) use normalization before non-linearity in either Generator and Discriminator convolutional networks. The same is true for (Variational or not) Autoencoder designs.\n\nBesides, the same pattern was observed by the so-called Wide Residual Networks, which showed improved results in some situations compared with original Residual Networks. Furthermore, all the Google's Inception Networks variants from version two to version four followed the same design, placing the non-linearity after the batch normalization.\n\nFinally, this year, DenseNets, which won the CVPR 2017 Best Paper Awards, also insisted on using BN+ReLU, not otherwise. Hence, it is still to be seen if relevant peer-review papers conclude ReLU+BN provides any improvement. Even if this hypothesis proves to be right in future, this will not invalidate the conclusions of the present work, rather it will be a novel achievement not directly related to the present research.\n\nNaturally, it is possible that shortly using ReLU before BN shows better results than otherwise, but it is yet to be demonstrated as none of the most recent and distinguished models did not adopt the mentioned approach. Consequently, we believe much more evidence is still needed to conclude otherwise.\n\nFinally, we believe that the theoretical and mathematical arguments we made still holds. Since DReLU extends the linearity into the third quadrant, we think DReLU+BN is likely to work better than ReLU+BN.", "Regarding the second and third comments, we emphasize that the work proposes to design a non-linearity that can be used to improve the performance (training speed and test accuracy) of mainstream state-of-the-art (SOTA) convolutional models.\n\nWe are not claiming dropout is useless in general. It is undoubtedly important and frequently used in, for example, Recurrent Neural Networks. However, we firmly believe that its usage in mainstream state-of-the-art convolutional models has been in visible decline recently. \n\nTherefore, designing nonlinearities that can overcome ReLU using dropout (as we believe it may be the case of previously proposed activation functions) would be of no practical significance (for the scope we are considering) if the overall best performance is still achieved by a strictly batch normalized network using ReLU. \n\nIn this sense, we have not concentrated our experiments on not using dropout to perform a controlled setting. We have done this because we believe this is currently the relevant scenario regarding mainstream SOTA convolutional networks. 
We think that strictly batch normalized setting is the one that is most relevant from this point of view because this is the approach followed the SOTA ConvNet models recently.\n\nFirstly, the Inception models are designed without using dropout after convolutional layers. Instead, after those layers, only batch normalization is applied. In those models, dropout was only used before the last fully connected layer. \n\nThe original ResNet, an ILSVRC winner, avoids dropout not only after convolutional layers but also before the last fully connected one. The same holds true for the more recent pre-activation ResNet variant. No dropout layers were used. Not even before the densely connected classification layer.\n\nThe Wide Residual Network used undoubtedly the same approach. No dropout whatsoever. This network was shown to improve the performance when compared Residual Networks.\n\nThe generators and discriminators networks of Generative Adversarial Networks (GANs) are typically convolutional neural networks. Once again, we see no dropout being used in those architectures.\n\nFinally, the DenseNets paper, which won the CVPR 2017 Best Paper Awards, shows in its Table 2 that their strictly batch normalized variants undoubtedly outperforms the options using dropout.\n\nActually, once established our scope with the above justifications, we indeed performed extremely uncontrolled settings. Different from previously proposed activation functions, we have used standard (no hand-designed) widely used ConvNet models: VGG and ResNets and covered a significant range of depths.\n\nBesides, different from previous works, we execute as many repetitions (runs) of each experiment as needed to achieve statistical significance (p<0.05). We consider this is a significant innovation regarding deep learning papers. Another relevant improvement in methodology was to compare the proposed activation function with many others known non-linearities, not only ReLU. \n\nRegarding the first comment, we showed in Appendix C.6 how the delta was defined. Similarly to LReLU, PReLU, and ELU, in practical usage, the network designer may choose to use 0.05 (DReLU default hyperparameter value) that was established for CIFAR10 if he wants to avoid training costs. Optionally, the designer may perform cross-validation to the specifically used dataset.", "Regarding the first comment, we emphasize that the reduction of the mean activations produced by DReLU was not the only argument used to explain why DReLU works better than the other evaluated non-linearities. In fact, theoretical reasons were also provided. Furthermore, it was mentioned and mathematically expressed that DReLU probably implicates less damage to the normalization process as it extends the linear function into the third quadrant, which is a characteristic provided by neither LReLU nor ELU. Moreover, rigorous statistical tests and the vast amount of repetitions of the experiments also contributed from an experimental point of view to ensure DReLU improves the deep models presented.\n\nIn respect of the second comment, we argue that CIFAR10/100 are the standard and most frequently used datasets in deep learning computer vision research papers. Moreover, for each dataset, the paper presented consistent results for a range of standard and relevant models with a substantially different number of layers. We did not perform few experiments, rather we executed hundreds of repetitions on the mentioned datasets and performed statistical tests (p<0.05). 
Naturally, it is never possible to be sure the results presented on some datasets will repeat in other bases. Unfortunately, we will probably not have enough time to execute more 150 experiments needed to include the ImageNet in this study in the next few weeks.\n\nRegarding the third comment, we believe that one of the primary results of our work is precisely showing that without dropout, ReLU is likely to outperform all previously proposed activation functions. It is entirely consistent with the fact that all recently introduced models (VGG, ResNet, WideResNet, DenseNets, etc.) still use ReLU as default activation function. It is no surprise considering the results of our work. Hence, we are not contesting that ELU/LReLU may outperform ReLU if dropout is used. However, our work shows this is unlikely to happen in convolutional networks optimized to strictly use batch normalization without dropout, which is the mainstream state-of-the-art (SOTA) approach to design convolutional networks. These SOTA designs still rely on ReLU as the standard activation function.\n\nIndeed, It is relevant to observe that the mentioned studies usually completely avoid dropout (ResNet, WideResNet, and GANs) or show that the variant without dropout clearly outperforms the one using it (DenseNets). Therefore, we emphasize that no dropout was used since we believe that adding it implies worst results than just use batch normalization, at least for convolutional neural networks. Since 2014 we have seen dropout increasingly less relevant in the design of SOTA ConvNets.\n\nThe above mention arguments are indeed in agreement with the more than three hundred experiments and statistical tests we performed which shows that ReLU is a compelling option in strictly batch normalized ConvNets, which is, in our opinion, the best possible design from a regularization point of view to achieve higher performances. Indeed, the test accuracies presented by the paper are essentially state-of-the-art for the models and datasets considered. Besides, dropout slows the training.\n\nWe remember that, as mentioned in the paper, the vast majority of the previously proposed activation functions used experiments with dropout and almost always without batch normalization as many of them were designed before the advent of it. We believe that if the experiments with Inception V3 that you mentioned used dropout, it could explain the reason why ELU/LReLU/RReLU outperformed ReLU. If not enough executions were performed or statistical tests were not used, it could also be a statistical error. Finally, different from our study, we emphasize that the previous proposed nonlinearity works did not use standard models (but rather very hand-designed ones), perform statistical tests or at least execute many times the same experiment.\n\nIn fact, the use of the statistical tests has been shown to be of fundamental importance as the experiments showed a substantial overlap of the test accuracy performance of the compared activation functions. Therefore, this work showed that make conclusions with few repetitions is inappropriate. Moreover, we made a comprehensive systematic study, testing simultaneously the main activation functions currently in use." ]
[ 3, 4, 5, -1, -1, -1, -1 ]
[ 5, 4, 5, -1, -1, -1, -1 ]
[ "iclr_2018_H1DGha1CZ", "iclr_2018_H1DGha1CZ", "iclr_2018_H1DGha1CZ", "BkOQHGmWf", "ByyhLzKgM", "S1WbYz5gM", "SJwceNsxG" ]
iclr_2018_ryY4RhkCZ
DEEP DENSITY NETWORKS AND UNCERTAINTY IN RECOMMENDER SYSTEMS
Building robust online content recommendation systems requires learning complex interactions between user preferences and content features. The field has evolved rapidly in recent years from traditional multi-arm bandit and collaborative filtering techniques, with new methods integrating Deep Learning models that enable to capture non-linear feature interactions. Despite progress, the dynamic nature of online recommendations still poses great challenges, such as finding the delicate balance between exploration and exploitation. In this paper we provide a novel method, Deep Density Networks (DDN) which deconvolves measurement and data uncertainty and predicts probability densities of CTR, enabling us to perform more efficient exploration of the feature space. We show the usefulness of using DDN online in a real world content recommendation system that serves billions of recommendations per day, and present online and offline results to evaluate the benefit of using DDN.
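To make the exploration mechanism in the abstract more tangible, here is a hedged Python sketch of an optimistic (UCB-like) selection score over a predicted CTR density, together with one plausible way to separate binomial measurement noise from data uncertainty. The specific variance split and the example numbers are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def ucb_score(mu, sigma, c=1.0):
    """Optimistic selection score discussed in the reviews: mean + c * std."""
    return mu + c * sigma

def split_uncertainty(total_var, ctr_hat, n_impressions):
    """Illustrative deconvolution of measurement noise from total variance.

    The binomial variance of an empirical CTR measured over n impressions,
    ctr*(1-ctr)/n, is taken as the measurement-noise term; the remainder is
    attributed to data uncertainty. This split is an assumption, not the
    paper's exact model.
    """
    meas_var = ctr_hat * (1.0 - ctr_hat) / max(n_impressions, 1)
    data_var = max(total_var - meas_var, 0.0)
    return data_var, meas_var

if __name__ == "__main__":
    mu, sigma = 0.012, 0.004                      # hypothetical predicted CTR density
    data_var, meas_var = split_uncertainty(sigma**2, ctr_hat=0.012, n_impressions=5000)
    print("score:", ucb_score(mu, np.sqrt(data_var), c=2.0))
    print("data var:", data_var, "measurement var:", meas_var)
```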
rejected-papers
Meta score: 4 The paper concerns the development of a density network for estimating uncertainty in recommender systems. The submitted paper is not very clear and it is hard to completely understand the proposed method from the way it is presented. This makes assessing the contribution of the paper difficult. Pros: - addresses an interesting and important problem - possible novel contribution Cons: - poorly written, hard to understand precisely what is done - difficult to compare with the state-of-the-art, not helped by disorganised literature review - experimentation could be improved The paper needs more work before being ready for publication.
train
[ "SyCyVT-ez", "rJtu_bqlf", "HyJutQ6gz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper adresses a very interesting question about the handling of the dynamics of a recommender systems at scale (here for linking to some articles).\nThe defended idea is to use the context to fit a mixture of Gaussian with a NN and to assume that the noise could be additively split into two terms. One depend only on the number of observations of the given context and the average reward in this situation and the second term begin the noise. This is equivalent to separate a local estimation error from the noise. \n\nThe idea is interesting but maybe not pushed far enough in the paper:\n*At fixed context x, assuming that the error is a function of the average reward u and of the number of displays r of the context could be a constant could be a little bit more supported (this is a variance explanation that could be tested statistically, or the shape of this 2D function f(u,r) could be plot to exhibit its regularity). \n* None of the experiments is done on public data which lead to an impossible to reproduce paper\n* The proposed baselines are not really the state of the art (Factorization Machines, GBDT features,...) and the used loss is MSE which is strange in the context of CTR prediction (logistic loss would be a more natural choice)\n* I'm not confident with the proposed surrogate metrics. In the paper, the work of Lihong Li &al on offline evaluation on contextual bandits is mentioned and considered as infeasible here because of the renewal of the set of recommendation. Actually this work can be adapted to handle theses situations (possibly requiring to bootstrap if the set is actually regenerating too fast). Also note that Yahoo Research R6a - R6b datasets where used in ICML'12 Exploration and Exploitation 3 challenge where about pushing some news in a given context and could be reused to support the proposed approach. An other option would be to use some counterfactual estimates (See Leon Bottou &all and Thorsten Joachims &all)\n* If the claim is about a better exploration, I'd like to have an idea of the influence of the tuning parameters and possibly a discussion/comparison over alternatives strategies (including an epsilon-n greedy algorithm)\n\nBesides theses core concerns, the papers suffers of some imprecisions on the notations which should be clarified. \n* As an example using O(1000) and O(1M) in the figure one. Everyone understands what is meant but O notation are made to eliminate constant terms and O(1) = O(1000).\n* For eqn (1) it would be better to refer to and \"optimistic strategy\" rather to UCB because the name is already taken by an algorithm which is not this one. Moreover the given strategy would achieve a linear regret if used as described in the paper which is not desirable for bandits algorithms (smallest counter example with two arms following a Bernouilli with different parameters if the best arms generates two zero in a row at the beginning, it is now stuck with a zero mean and zero variance estimate). This is why bandits bounds include a term which increase with the total number of plays. I agree that in practice this effect can be mitigated at that the strategy can be correct in the contextual case (but then I'd like to the dependancies on x to be clear) \n* The papers never mentions whats is a scalar, a vector or a matrix. 
This creates confusion: as an example, eqn (3) can have several different meanings depending on whether the values are scalars, scalars depending on x, or involve a diagonal \\sigma matrix\n* In the paragraph above (2), I am unsure what a \"binomial noise error distribution\" for epsilon is, but a few lines later epsilon becomes a Gaussian; why not just mention that you assume the presence of Gaussian noise on the parameters of a Bernoulli distribution? \n\n ", "In the paper \"DEEP DENSITY NETWORKS AND UNCERTAINTY IN RECOMMENDER SYSTEMS\", the authors propose a novel neural architecture for online recommendation. The proposed model deals with data and measurement uncertainties to define exploitation/exploration strategies. \n\nMy main concern with the paper is that the contribution is unclear, as the authors failed, from my point of view, to establish the novelty w.r.t. the state of the art regarding uncertainty in neural networks. The state of the art section is very confusing, with works given in a random order, without any clear explanation of the limits of the existing works in the context of the task addressed in the paper. The only positioning argument given in that section is the final sentence \"In this paper we model measurement noise using a Gaussian model and combine it with a MDN\". It is clearly not sufficient to me, as it does not give insights about why such a proposal is made. \n\nIn the same spirit, I cannot understand why no classical bandit baseline is given in the experiments. The experiments only concern two slightly different versions of the proposed algorithm in order to show the importance of the deconvolution of both considered noises, but nothing indicates that the model performs fairly well compared to existing approaches. Also, it would have been useful to compare it to other neural models dealing with uncertainty (some of them having been applied to bandit problems, e.g., Blundell et al. (2015)).\n\nAt last, for me the uncertainty considered in the proposal is not sufficient to claim that the approach is a UCB-like one. The confidence bound considered should include the uncertainty on the parameters in the predictive posterior reward distribution (as done for instance in Blundell et al. (2015) in the context of neural networks), not only the distribution of the observed data with regard to the considered probabilistic families. Not enough discussion is given w.r.t. the assumptions made by the model anyway. Section 4 is also particularly hard to follow.\n\nOther remarks:\n - Equation (1) does not fit with the mixtures considered in 4.2.1. So what is the selection score that is used?\n - \"Due to the fact that data noise is small given x\" => what does it mean, since x is a couple? Also I cannot understand the causal relation with the rest of the sentence\n - Figure 4 (and the associated paragraph) is very difficult to understand (I couldn't extract any information from this)\n - Too many abbreviations that complicate the reading\n - The throughput measure is not clear\n - Not enough justification about the architecture. For instance, nothing is said about the title subnet represented in figure 3.\n - What is the \"recommendation age\"?\n - \"We can rewrite eq. 2 using eq. 3 and 6\" => \"and 4\".\n\n", "This paper presents a methodology that allows us to measure the uncertainty of deep neural network predictions, and then apply explore-exploit algorithms such as UCB to obtain better performance in online content recommendation systems. 
The method presented in this paper seems to be novel but unfortunately lacks clarity. My main doubt comes from Section 4.2.1, as I am not sure how exactly the two subnets are fed into the MDN to produce both mean and variance through another Gaussian mixture model. More specifically, I am not able to see how the outputs of the two subnets get used in the Gaussian mixture model, and also how the variance of the prediction is determined here. Some rewriting is needed there to make this paper more understandable, in my opinion. \n\nMy other concerns about this paper include:\n1. It looks like the training data uses the empirical CTR of (t,c) as ground truth. This doesn't look realistic at all, as most of the time a (t,c) pair either has no data or very little data in the real world. Otherwise it is a very simple problem to solve, as you can simply assume an independent binomial model for each (t,c).\n2. In Section 4.2.1, CTR is modeled as a Gaussian mixture, which doesn't look quite right, as CTR lies in (0,1).\n3. A detailed explanation of the difference between MDN and DDN is needed.\n4. What is OOV in Section 5.3?" ]
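To make the first reviewer's bandit counterexample concrete, here is a small illustrative simulation (our sketch, not code from the paper under review): the "optimistic" score is the empirical mean plus the empirical standard deviation, with no term growing with the total number of plays, and an unlucky start on the truly best arm freezes it out forever, giving linear regret.

```python
import numpy as np

rng = np.random.default_rng(0)
p = [0.9, 0.5]                      # arm 0 is the truly best arm
rewards = [[0.0, 0.0], [1.0, 0.0]]  # forced unlucky start: the best arm happened to return two zeros

pulls_of_best = 0
for t in range(10_000):
    # "optimistic" score with no log(t) term: empirical mean + empirical standard deviation
    scores = [np.mean(r) + np.std(r) for r in rewards]
    a = int(np.argmax(scores))
    rewards[a].append(float(rng.random() < p[a]))
    pulls_of_best += int(a == 0)

print(pulls_of_best)  # stays at 0: the best arm is never tried again, i.e. linear regret
```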
[ 4, 3, 4 ]
[ 5, 4, 3 ]
[ "iclr_2018_ryY4RhkCZ", "iclr_2018_ryY4RhkCZ", "iclr_2018_ryY4RhkCZ" ]
iclr_2018_rkfbLilAb
Improving Search Through A3C Reinforcement Learning Based Conversational Agent
We develop a reinforcement learning based search assistant which can assist users through a set of actions and a sequence of interactions to enable them to realize their intent. Our approach caters to subjective search, where the user is seeking digital assets such as images, which is fundamentally different from tasks that have objective and limited search modalities. Labeled conversational data is generally not available in such search tasks, and training the agent through human interactions can be time consuming. We propose a stochastic virtual user which impersonates a real user and can be used to sample user behavior efficiently to train the agent, which accelerates the bootstrapping of the agent. We develop an A3C-algorithm-based context-preserving architecture which enables the agent to provide contextual assistance to the user. We compare the A3C agent with Q-learning and evaluate its performance on the average rewards and state values it obtains with the virtual user in validation episodes. Our experiments show that the agent learns to achieve higher rewards and better states.
rejected-papers
meta score: 4 This paper is primarily an application paper applying known RL techniques to dialogue. Very little reference to the extensive literature in this area. Pros: - interesting application (digital search) - revised version contains subjective evaluation of experiments Cons: - limited technical novelty - very weak links to the state-of-the-art, missing many key aspects of the research domain
train
[ "Hy4tIW5xf", "H1f_jh_ef", "BkL816Ygf", "SkPmHMpXz", "ryJv-MaXM", "Hy3jUZaXf", "SkDTDVamf", "Byxcyzp7M", "H1QVkGpmM", "HkAuVWpmz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author" ]
[ "The paper \"IMPROVING SEARCH THROUGH A3C REINFORCEMENT LEARNING BASED CONVERSATIONAL AGENT\" proposes to define an agent to guide users in information retrieval tasks. By proposing refinements of the query, categorizations of the results or some other bookmarking actions, the agent is supposed to help the user in achieving his search. The proposed agent is learned via reinforcement learning. \n\nMy concern with this paper is about the experiments that are only based on simulated agents, as it is the case for learning. While it can be questionable for learning (but we understand why it is difficult to overcome), it is very problematic for the experiments to not have anything that demonstrates the usability of the approach in a real-world scenario. I have serious doubts about the performances of such an artificially learned approach for achieving real-world search tasks. Also, for me the experimental section is not sufficiently detailed, which lead to not reproducible results. Moreover, authors should have considered baselines (only the two proposed agents are compared which is clearly not sufficient). \n\nAlso, both models have some issues from my point of view. First, the Q-learning methods looks very complex: how could we expect to get an accurate model with 10^7 states ? No generalization about the situations is done here, examples of trajectories have to be collected for each individual considered state, which looks very huge (especially if we think about the number of possible trajectories in such an MDP). The second model is able to generalize from similar situations thanks to the neural architecture that is proposed. However, I have some concerns about it: why keeping the history of actions in the inputs since it is captured by the LSTM cell ? It is a redondant information that might disturb the process. Secondly, the proposed loss looks very heuristic for me, it is difficult to understand what is really optimized here. Particularly, the loss entropy function looks strange to me. Is it classical ? Are there some references of such a method to maintain some exploration ability. I understand the need of exploration, but including it in the loss function reduces the interpretability of the objective (wouldn't it be preferable to use a more classical loss but with an epsilon greedy policy?).\n\n\nOther remarks: \n - In the begining of \"varying memory capacity\" section, what is \"100, 150 and 250\" ? Time steps ? What is the unit ? Seconds ? \n - I did not understand the \"Capturing seach context at local and global level\" at all\n - In the loss entropy formula, the two negation signs could be removed\n \n", "This paper proposes to use RL (Q-learning and A3C) to optimize the interaction strategy of a search assistant. The method is trained against a simulated user to bootstrap the learning process. The algorithm is tested on some search base of assets such as images or videos. \n\nMy first concern is about the proposed reward function which is composed of different terms. These are very engineered and cannot easily transfer to other tasks. Then the different algorithms are assessed according to their performance w.r.t. to these rewards. They of course improve with training since this is the purpose of RL to optimize these numbers. Assessment of a dialogue system should be done according to metrics obtained through actual interactions with users, not according to auxiliary tasks etc. \n\nBut above all, this paper incredibly lacks of context in both RL and dialogue systems. 
The authors cite a 2014 paper when it comes to referring to Q-learning (Q-learning was first published in 1989 by Watkins). The first time dialogue was cast as an RL problem was in 1997 by E. Levin and R. Pieraccini (although it had been suggested before by M. Walker). User simulation was proposed at the same time and further developed in the early 2000s by Schatzmann, Young, Pietquin etc. Using LSTMs to build user models was proposed in 2016 (Interspeech) by El Asri et al. Building efficient reward functions for RL-based conversational systems has also been studied for more than 20 years, with early work by M. Walker on PARADISE (@ACL 1997) but also via inverse RL by Chandramohan et al (2011). A2C (a synchronous variant of A3C) has been used by Strub et al (@ IJCAI 2017) to optimize visually grounded dialogue systems. RL-based recommender systems have also been studied before (e.g. Shani in JMLR 2005). \n\nI think the authors should first read the state of the art in the domain before they suggest new solutions. ", "The paper describes reinforcement learning techniques for digital asset search. The RL techniques consist of A3C and DQN. This is an application paper since the techniques described already exist. Unfortunately, there is a lack of detail throughout the paper and therefore it is not possible for someone to reproduce the results if desired. Since there is no corpus of message-response pairs to train the model, the paper trains a simulator from logs to emulate user behaviours. Unfortunately, there is no description of the algorithm used to obtain the simulator. The paper explains that the simulator is obtained from log data, but this is not sufficient. The RL problem is described at a very high level in the sense that abstract states and actions are listed, but there is no explanation about how those abstract states are recognized from the raw text and there is no explanation about how the actions are turned into text. There seems to be some confusion in the notion of state. After describing the abstract states, it is explained that actions are selected based on a history of states. This suggests that the abstract states are really abstract observations. In fact, this becomes obvious when the paper introduces the RNN where a hidden belief is computed by combining the observations. The rewards are also described at a high level, but it is not clear how exactly they are computed. The digital search application is interesting; however, a detailed description with comprehensive experiments is needed for the publication of an application paper.", "Thanks for your reviews.\n\nOur state representation comprises the history of actions taken by the user and the agent (along with other variables as described in the state space section 3.3) and not only the most recent action taken by the user. The user action is obtained from the user utterance using a rule-based natural language unit (NLU) which uses dependency-tree-based syntactic parsing, stop words and pre-defined rules (as described in the appendix, section 6.1.2). We capture the search context by including the history of actions taken by the user and the agent in the state representation. The state at a turn in the conversation comprises the agent and user actions in the last ‘k’ turns. 
Since a search episode can extend indefinitely and suitability & dependence of action taken by the agent can go beyond last ‘k’ turns, we include an LSTM in our model which aggregates the local context represented in state (‘local’ in terms of state including only the recent user and agent actions) into a global context to capture such long term dependencies. We analyse the trend in reward and state values obtained by comparing it with the case when we do not include the history of actions is state and let the LSTM learn the context alone (section 4.1.3).\n\nOur system does not generate utterances, it instead selects an utterance based on the action taken by the agent from a corpus of possible utterances. This is because we train our agent to assist user in their search through optimising dialogue strategy and not actual dialogue utterances made by the agent. Though we aim to pursue this as future work where we generate agent utterances and train NLU for obtaining user action in addition to optimising dialogue strategy (which we have done in our current work).\n\nSince we aim to optimise dialogue strategy and do not generate dialogue utterances, we assign the rewards corresponding to the appropriateness of the action performed by the agent considering the state and history of the search. We have used some rewards such as task success, extrinsic rewards based on feedback signals from the user and auxiliary rewards based on performance on auxiliary tasks. These rewards have been modelled numerically on a relative scale.\n\nWe have evaluated our model through humans and updated the paper, please refer to section 4.3 for human evaluation results and appendix (section 6.2) for conversations between actual users and trained agent.", "Due to legal issues, we cannot not share the query session logs data. We have tried to provide details of our algorithm which can be used for obtaining user model from any given session logs data. The mapping between interactions in session log data and user actions which the agent can understand has been discussed in table 3. Using these mapping, we obtain a probabilistic user model (algorithm has been described in section 3.5). Figure 1 in the paper demonstrates how interactions in a session can be mapped to user actions. \n\nKindly mention the sections which are lacking details and missing information in the algorithm for user model which will help us in improving our paper.", "We evaluated our system through real humans and added the results in section 4.3. Please refer to appendix (section 6.2) for some conversations between actual users and trained agent. For performing experiments with humans, we developed chat interface where an actual user can interact with the agent during their search. The implementation details of the chat interface have been discussed in the appendix (section 6.1.1). User action is obtained from user utterance using a rule-based Natural language unit (NLU) which uses dependency tree based syntactic parsing, stop words and pre-defined rules (as described in appendix, section 6.1.2). You may refer to supplementary material (footnote-2, page-9) which contains a video demonstrating search on our conversational search interface.\n\nIn order to evaluate our system with the virtual user, we simulate validation episodes between the agent and the virtual user after every training episode. This simulation comprises of sequence of alternate actions between the user and the agent. 
The user action is sampled using the user model while the agent action is sampled using the policy learned till that point. Corresponding to a single validation episode, we determine two performance metrics. First is total reward obtained at the end of the episode. The values of the states observed in the episode is obtained using the model, average of states values observed during the validation episode is determined and used as the second performance metric. Average of these values over different validation episodes is taken and depicted in figures 3,4,5 and 6.", "Thanks for your reviews.\n\nWe have modeled rewards specifically for the domain of digital assets search in order to obtain a bootstrapped agent which performs reasonably well in assisting humans in their search so that it can be fine tuned further based on interaction with humans. As our problem caters to a subjective task of searching digital assets which is different from more common objective tasks such as reservation, it is difficult to determine generic rewards based on whether the agent has been able to provide exact information to the user unlike objective search tasks where rewards are measured based on required information has been provided to the user. This makes rewards transferability between subjective and objective search difficult. Though our modeled rewards are easily transferable to search tasks such as e-commerce sites where search tasks comprises of a subjective component (in addition to objective preferences such as price).\n\nSince we aim to optimise dialogue strategy and do not generate dialogue utterances, we assign the rewards corresponding to the appropriateness of the action performed by the agent considering the state and history of the search. We have used some rewards such as task success (based on implicit and explicit feedback from the user during the search) which is also used in PARADISE framework [1]. At the same time several metrics used by PARADISE cannot be used for modelling rewards. For instance, time required (number of turns) for user to search desired results cannot be penalised since it can be possible that user is finding the system engaging and helpful in refining the results better which may increase number of turns in the search.\n\nWe evaluated our system through humans and added the results to the paper, please refer to section 4.3 in the updated paper. You may refer to appendix (section 6.2) for some conversations between actual users and the trained agent.\n\nThanks for suggesting related references, we have updated our paper based on the suggestions. Kindly suggest any other further improvements.\n\n[1] Walker, Marilyn A., et al. \"PARADISE: A framework for evaluating spoken dialogue agents.\" Proceedings of the eighth conference on European chapter of the Association for Computational Linguistics. Association for Computational Linguistics, 1997.", "Thanks for your reviews.\n\nStandard REINFORCE method for policy gradient has high variance in gradient estimates [1]. Moreover while optimising and weighing the likelihood for performing an action in a given state, it does not measure the reward with respect to a baseline reward due to which the agent is not able to compare different actions. This may result in gradient pointing in wrong direction since it does not know how good an action is with respect to other good actions in a given state. 
This may weaken the probability with which the agent takes the best action (or better actions).\n\nIt has been shown that if a baseline value for a state is used to critic the rewards obtained for performing different actions in that state reduces the variance in gradient estimates as well as provides correct appraisal for an action taken in a given state (good actions get a positive appraisal) without requiring to sample other actions [2]. Moreover it has been shown that if baseline value of the state is learned through function approximation, we get an an unbiased or very less biased gradient estimates with reduced variance achieving better bias-variance tradeoff. Due to these advantages we use A3C algorithm since it learns the state value function along with the policy and provides unbiased gradient estimator with reduces variance.\n\nIn standard policy gradient methods, multiple episodes are sampled before updating the parameters using the gradients obtained over these episodes. It has been observed that sampling gradients over multiple episodes which can span over large number of turns results in higher variance in the gradient estimates due to which the model takes more time to learn [3]. The higher variance is the result of stochastic nature of policy since taking sampling random actions initially (when the agent has not learned much) over multiple episodes before updating the parameters compounds the variance. Due to this reason, we instead use truncated rollouts where we update parameters of the policy and value model after every n-steps in an episode which are proven to be much effective and results in faster learning.\n\n[1] : Sehnke, Frank, et al. \"Parameter-exploring policy gradients.\" Neural Networks 23.4 (2010): 551-559.\n[2] : Sutton, Richard S., et al. \"Policy gradient methods for reinforcement learning with function approximation.\" Advances in neural information processing systems. 2000\n[3] : Tesauro, Gerald, and Gregory R. Galperin. \"On-line policy improvement using Monte-Carlo search.\" Advances in Neural Information Processing Systems. 1997. ; Gabillon, Victor, et al. \"Classification-based policy iteration with a critic.\" (2011).\n\n", "Q-Learning Model:\nWe experimented with Q-learning approach in order to obtain baseline results for the task defined in the paper since RL has not been applied before for providing assistance in searching digital assets. The large size of the state space requires large amount training data for model to learn useful representations since number of parameters is directly proportional to the size of state space which is indicative of the complexity of the model. The number of training episodes is not a problem in our case since we leverage the user model to sample interactions between the learning agent and user. This indeed is reflected in figure 6 (left), which shows that the model converges when trained on sufficient number of episodes.\n\nSince our state space is discrete, we have used table storage method for Q-learning. Kindly elaborate on what does generalisation of state means in this context so that we may elaborate more and improve our paper.\n\n\nA3C Model: \n\nWe capture the search context by including history of actions taken by the user and the agent in last ‘k’ turns explicitly in the state representation. 
Since a search episode can extend indefinitely and suitability & dependence of action taken by the agent can go beyond last ‘k’ turns, we include an LSTM in our model which aggregates the local context represented in state (‘local’ in terms of including only the recent user and agent actions) to capture such long term dependencies and analyse the trend in reward and state values obtained by comparing it with the case when we do not include the history of actions in the state and let the LSTM learn the context alone (section 4.1.3).\n\nIn varying memory capacity, by LSTM size (100,150,250), we mean dimension of the hidden state h of the LSTM. With more number of units, the LSTM can capture much richer latent representations and long term dependencies. We have explored the impact of varying the hidden state size in the experiments (section 4.1.2).\n\n\nEntropy loss function has been studied to provide exploration ability to the agent while optimising its action strategy in the Actor-Critic Model [1]. While epsilon-greedy policy has been successfully used in many RL algorithms for achieving exploration vs exploitation balance, it is commonly used in off-policy algorithms like Q-learning where the policy is not represented explicitly. The model is trained on observations which are sampled following epsilon-greedy policy which is different from the actual policy learned in terms of state-action value function. \n\nThis is in contrast to A3C where we apply an on-policy algorithm such that the agent take actions according to the learned policy and is trained on observations which are obtained using the same policy. This policy is optimized to both maximise the expected reward in an episode as well as to incorporate the exploration behavior (which is enabled by using the exploration loss). Using epsilon-greedy policy will disturb the on-policy behavior of the learned agent since it will then learn on observations and actions sampled according to epsilon-greedy policy which will be different from the actual policy learnt which we represent as explicit output of our A3C model.\n\nThe loss described in the paper optimise the policy to maximise the expected reward obtained in an episode where the expectation is taken with respect to different possible trajectories that can be sampled in an episode. In A3C algorithm, the standard policy gradient methods is modified by replacing the reward term by an advantage term which is difference between reward obtained by taking an action and value of the state which is used as a baseline (complete derivation in [2]). The learned baseline enforces that parameters are updated in a way that likelihood of actions that results in rewards better than value of the state is increased while it is decreased for those which provide rewards lower than the average action in that state.\n\n\n\n[1] : Mnih, Volodymyr, et al. \"Asynchronous methods for deep reinforcement learning.\" International Conference on Machine Learning. 2016.\n[2] : Sutton, R. et al., Policy Gradient Methods for Reinforcement Learning with Function Approximation, NIPS, 1999)\n\n", "We evaluated our system trained using A3C algorithm through professional designers who regularly use image search site for their design tasks and asked them to compare our system with conventional search interface in terms of engagement, time required and ease of performing the search. In addition to this we asked them to rate our system on the basis of information flow, appropriateness and repetitiveness. 
The evaluation shows that although we trained the bootstrapped agent through the user model, it performs decently well with actual users, driving their search forward with appropriate actions without being too repetitive. The comparison with conventional search shows that conversational search is more engaging. In terms of search time, it resulted in more search time for some designers while it reduced the time required to find the desired results in other cases; in the majority of cases it required about the same time. The designers are regular users of the conventional search interface and well versed with it; even then, the majority of them did not face any cognitive load while using our system, with one-third of them believing that it is easier than conventional search." ]
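The rebuttals above argue for an advantage baseline, truncated (n-step) rollouts, and an entropy term in the loss. As a companion, the following PyTorch-style sketch shows how those pieces typically fit together in an A3C-style objective; it is our illustration under assumed names, shapes, and coefficients, not the authors' code.

```python
import torch
import torch.nn.functional as F

def a3c_loss(logits, values, actions, rewards, bootstrap_value,
             gamma=0.99, entropy_coef=0.01, value_coef=0.5):
    # logits: (T, A) policy outputs; values: (T,) critic outputs;
    # actions: (T,) long tensor of actions taken; rewards: (T,) rewards of one truncated rollout.
    T = len(rewards)
    returns = torch.empty(T)
    R = bootstrap_value
    for t in reversed(range(T)):              # n-step discounted returns, bootstrapped at the cut
        R = rewards[t] + gamma * R
        returns[t] = R

    log_probs = F.log_softmax(logits, dim=-1)
    entropy = -(log_probs.exp() * log_probs).sum(dim=-1).mean()   # exploration bonus
    chosen = log_probs[torch.arange(T), actions]
    advantage = returns - values                                  # critic used as a baseline
    policy_loss = -(chosen * advantage.detach()).mean()           # detach: no policy gradient into the critic
    value_loss = F.mse_loss(values, returns.detach())
    return policy_loss + value_coef * value_loss - entropy_coef * entropy
```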
[ 5, 2, 3, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 5, 5, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_rkfbLilAb", "iclr_2018_rkfbLilAb", "iclr_2018_rkfbLilAb", "BkL816Ygf", "BkL816Ygf", "Hy4tIW5xf", "H1f_jh_ef", "Hy4tIW5xf", "Hy4tIW5xf", "iclr_2018_rkfbLilAb" ]
iclr_2018_rJVruWZRW
Dense Recurrent Neural Network with Attention Gate
We propose the dense RNN, which has full connections from each hidden state directly to multiple preceding hidden states of all layers. As the density of the connections increases, the number of paths through which the gradient flows can be increased. This increases the magnitude of the gradients, which helps to prevent the vanishing gradient problem in time. Larger gradients, however, can also cause the exploding gradient problem. To balance the trade-off between the two problems, we propose an attention gate, which controls the amount of gradient flow. We describe the relation between the attention gate and the gradient flows by approximation. The experiment on language modeling using the Penn Treebank corpus shows that dense connections with the attention gate improve the model’s performance.
rejected-papers
meta score: 4 This paper concerns a variant of previous RNN architectures using temporal skip connections, with experimentation on the PTB language modelling task. The reviewers all recommend that the paper is not ready for publication and thus should be rejected from ICLR. The novelty of the paper and its relation to the state-of-the-art is not clear. The experimental validation is weak. Pros: - possibly interesting idea Cons: - weak experimental validation - weak connection to the state of the art - precise original contribution w.r.t. state-of-the-art is not clear
train
[ "SkLFMZ9gG", "Hk37PGqlz", "SJLNIX9lG", "BJ_ph527G", "BylCs9n7z" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author" ]
[ " This paper proposes a new type of RNN architectures called Dense RNNs. The authors combine several different RNN architectures and claim that their RNN can model long-term dependencies better, can learn multiscale representation of the sequential data, and can sidestep the exploding or vanishing gradients problem by using parametrized gating units.\n\nUnfortunately, this paper is hard to read, it is difficult to understand the intention of the authors. The authors make several claims without any supportive reference or experimental evidence. Both intuitive and theoretical justifications of the proposed architecture are not so convincing. The experiment is only done on PTB dataset, and the reported numbers are not that promising either. \n\nThis paper tries to combine three different features from previous works, and unfortunately, it is not so well conducted.\n", "Summary: \n\nThis paper proposes a fully connected dense RNN architecture that has connections to every layer and the preceding connections of each layer. The connections are also gated by using a simple gating mechanism. The authors very briefly discusses about the effect of these on the dynamics of the learning. They report results on PTB character-level language modelling task.\n\n\nQuestions:\nWhat is the computational complexity of this approach compared to a vanilla RNN architecture?\nWhat is the implications of these skip connections in terms of memory consumption during BPTT?\nDid you use gradient clipping and have you used any specific type of initialization for the parameters?\nHow would this approach would compare against the Clockwork RNNs which has a block-diagonal weight matrices? [1]\nHow would dense-RNNs compare against to the MANNs [2]?\nHow would you implement this model efficiently?\n\nPros:\nInteresting idea.\nCons:\nLack of experiments and empirical results supporting the arguments.\nHand-wavy theory.\nLack of references to the relevant literature. \n\nGeneral Comments:\nIn general the paper is relatively well written despite having some minor typos. The idea is interesting, however the experiments in this paper is seriously lacking. The only results presented in this paper is on PTB. The results are quite behind the SOTA and PTB is a really tiny, toyish language modeling task. The theory is very hand-wavy, the connections to the previous attempts to come up with related properties of the recurrent models should be cited. The Figure 2 is very related to the Gersgorin circle theorem in [3]. The discussion about the skip-connections is very related to the results in [2]. \n\nOverall, I think this paper is rushed and not ready for the publication.\n\n[1] Koutnik, J., Greff, K., Gomez, F., & Schmidhuber, J. (2014, January). A clockwork rnn. In International Conference on Machine Learning (pp. 1863-1871).\n[2] Gulcehre, Caglar, Sarath Chandar, and Yoshua Bengio. \"Memory Augmented Neural Networks with Wormhole Connections.\" arXiv preprint arXiv:1701.08718 (2017).\n[3] Zilly, Julian Georg, Rupesh Kumar Srivastava, Jan Koutník, and Jürgen Schmidhuber. \"Recurrent highway networks.\" arXiv preprint arXiv:1607.03474 (2016).\n", "The authors propose an RNN that combines temporal shortcut connections from [Soltani & Jang, 2016] and Gated Recurrent Attention [Chung, 2014]. However, their justification about the novelty and efficacy of the model is not well demonstrated in the paper. The experiment part is modest with only one small dataset Penn Tree Bank is used. 
The results are not significant enough, and no comparisons with the models in [Soltani & Jang, 2016] and [Chung, 2014] are provided in the paper to show the effectiveness of the proposed combination. To conclude, this paper is an incremental work with limited contributions.\n\nSome writing issues:\n1. Lack of support in arguments,\n2. Lack of referencing to previous works. For example, the sentence “By selecting the same dropout mask for feedforward, recurrent connections, respectively, the dropout can apply to the RNN, which is called a variational dropout” mentions “variational dropout” with no citation. Or “NARX-RNN and HO-RNN increase the complexity by increasing recurrent depth. Gated feedback RNN has the fully connection between two consecutive timesteps” also mentions a lot of models without any references at all.\n3. Some related papers are not cited, e.g., Hierarchical Multiscale Recurrent Neural Networks [Chung, 2016]\n", " We thank the reviewers for their work. Your review was very helpful to me. \nI answered your questions and, based on the answers, I updated the paper. \n\nQ. What is the computational complexity of this approach compared to a vanilla RNN architecture? \n\nA. The number of recurrent parameters in the dense RNN is feedforward depth^2 * recurrent depth * hidden size^2, while the number of recurrent parameters in a vanilla RNN is hidden size^2 (a small worked example follows below). \n\nDoubling the hidden size and doubling the feedforward depth have the same effect in terms of the number of parameters, and doubling the recurrent depth is more efficient than doubling the hidden size by the same factor. \n\nQ. What are the implications of these skip connections in terms of memory consumption during BPTT? \n\nA. If there are no skip connections, the gradients have to flow through every hidden state, which makes them vanish or explode. The skip connections let the gradients pass through fewer hidden states, which alleviates the vanishing and exploding gradient problems. \n\nQ. Did you use gradient clipping and have you used any specific type of initialization for the parameters? \n\nA. We used gradient clipping with the value 5. We used a stochastic gradient optimizer with learning rate scheduling.\n\nQ. How would this approach compare against the Clockwork RNNs which have block-diagonal weight matrices?\n\nA. In the clockwork RNN, the hidden states are divided into multiple sub-modules, which act with different periods to capture multiple timescales. In the dense RNN, all previous states within the recurrent depth affect the current hidden state at every time step. The periods underlying the sequences are automatically selected using the attention gate in the dense RNN. In summary, the clockwork RNN pre-defines the frequencies to capture from the sequence, and the dense RNN learns the frequencies using the attention gate. \n\nQ. [1] How would dense RNNs compare to MANNs [2]? \n\nA. All previous states within the recurrent depth do not always affect the next state. Thus, the MANN uses the memory to remember previous states and retrieve some of them if necessary. This is a similar concept. However, the MANN has only connections between the same layers. \n\nQ. How would you implement this model efficiently? \n\nA. In equation (12), there are many weight multiplications. As the number of weight multiplications increases, the calculation becomes slower. \n\nIn the theoretical analysis, we analyzed the model using the Gersgorin circle theorem, similar to the paper \"Recurrent Highway Network\". \n", " We thank the reviewers for their work. 
Your review was very helpful to us. \n\nLack of support in arguments\n\nWe added the reference papers below.\n- Learning long-term dependencies in NARX recurrent neural networks (NARX-RNN)\n- Higher order recurrent neural networks (HO-RNN) \n- Hierarchical multiscale recurrent neural networks\n- Memory augmented neural networks with wormhole connections (MANN)\n- A clockwork RNN\n\n" ]
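A small worked example of the parameter-count formula stated in the rebuttal above (recurrent weights only; biases and input/output projections are ignored, and the formula itself is taken directly from the authors' answer, so its interpretation is presumed):

```python
def dense_rnn_recurrent_params(hidden, ff_depth, rec_depth):
    # count claimed in the rebuttal: feedforward depth^2 * recurrent depth * hidden size^2
    # (presumably one hidden-by-hidden matrix per pair of layers and per time lag)
    return ff_depth ** 2 * rec_depth * hidden ** 2

def vanilla_rnn_recurrent_params(hidden):
    return hidden ** 2

print(vanilla_rnn_recurrent_params(200))       # 40000
print(vanilla_rnn_recurrent_params(400))       # 160000 -> doubling the hidden size gives 4x parameters
print(dense_rnn_recurrent_params(200, 2, 1))   # 160000 -> doubling the feedforward depth has the same effect
print(dense_rnn_recurrent_params(200, 1, 2))   # 80000  -> doubling the recurrent depth only doubles the count
```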
[ 2, 4, 4, -1, -1 ]
[ 4, 4, 4, -1, -1 ]
[ "iclr_2018_rJVruWZRW", "iclr_2018_rJVruWZRW", "iclr_2018_rJVruWZRW", "Hk37PGqlz", "SJLNIX9lG" ]
iclr_2018_SkYXvCR6W
Compact Encoding of Words for Efficient Character-level Convolutional Neural Networks Text Classification
This paper puts forward a new text-to-tensor representation that relies on information compression techniques to assign shorter codes to the most frequently used characters. This representation is language-independent, needs no pretraining, and produces an encoding with no information loss. It provides an adequate description of the morphology of text, as it is able to represent prefixes, declensions, and inflections with similar vectors and is able to represent even words unseen in the training dataset. Similarly, as it is compact yet sparse, it is ideal for speeding up training times using tensor processing libraries. As part of this paper, we show that this technique is especially effective when coupled with convolutional neural networks (CNNs) for text classification at the character level. We apply two variants of CNN coupled with it. Experimental results show that it drastically reduces the number of parameters to be optimized, resulting in competitive classification accuracy values in only a fraction of the time spent by one-hot encoding representations, thus enabling training on commodity hardware.
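As a rough illustration of the frequency-based coding idea described in this abstract (our own sketch under assumed details; the paper's actual code assignment, alphabet, and matrix layout may differ), the snippet below gives shorter prefix-free binary codes to more frequent characters and packs each word into a fixed-width, sparse 0/1 row:

```python
from collections import Counter

def build_codes(corpus):
    # more frequent characters get shorter prefix-free binary codes: "1", "01", "001", ...
    freq = Counter(ch for text in corpus for ch in text if ch != " ")
    ranked = [ch for ch, _ in freq.most_common()]
    return {ch: "0" * i + "1" for i, ch in enumerate(ranked)}

def encode_word(word, codes, width=32):
    # concatenate the character codes, truncate/pad to a fixed width, return a 0/1 row
    bits = "".join(codes.get(ch, "") for ch in word)[:width]
    return [int(b) for b in bits.ljust(width, "0")]

corpus = ["the cat sat on the mat", "the dog ate the cat"]
codes = build_codes(corpus)
print(codes["t"], codes["e"])      # frequent characters -> short codes
print(encode_word("the", codes))   # one fixed-length row per word; a document becomes a matrix of such rows
```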
rejected-papers
meta score: 4 The paper has been extensively edited during the review process - the edits are so extensive that I think the paper requires a re-review, which is not possible for ICLR 2018 Pros: - potentially interesting and novel approach to prefix encoding for character level CNN text classification - some experimental comparisons Cons: - lacks good comparison with the state-of-the-art, which makes it difficult to determine conclusions - writing style lacks clarity. I would recommend that the authors continue to improve the paper and submit it to a later conference.
train
[ "HJPT3oIVM", "rkst5E4Jz", "BJ1fw5Flf", "rkB5-TcgG", "SyLE3TzXz", "Hk2cEo9QG", "HkRp2TG7z", "BkP93TM7M", "SyLL2pfQM", "rygb3pfQf", "BJWqQmMGG", "BJ9KbpZMz", "ryxk9FaZM", "HJFJeMf-G", "rkHO_abWG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "public", "public", "public", "author", "public" ]
[ "After looking at the revision, the manuscript looks in a much better shape at this point.\nHowever, due to the amount of changes, \nI believe it has to go trough a full review process again as I mentioned in the original review.\n\nTherefore I stand by my original opinion the paper cannot be accepted now. \n\nIf the authors are thinking to re-submit this manuscript, I think they should focus on the following:\n\n- Check the literature on the datasets and compare to more recent approaches than Zhang & LeCun such that the baselines are the current state of the art. I am not so familiar with these datasets so I do not know the current best approaches for these datasets. \n\n- Polish the language. After quickly reading through the manuscript, I found several more strange formulations. \n\n- The times per experiment from Zhang & LeCun should be replaced by current (efficient) re-implementations on the same hardware. Since 2015, we have made advances in hardware and software libraries. Therefore the measurements from 2015 and 2017/2018 are not directly comparable.", "The paper proposed to encode text into a binary matrix by using a compressing code for each word in each matrix row. The idea is interesting, and overall introduction is clear.\n\nHowever, the work lacks justification for this particular way of encoding, and no comparison for any other encoding mechanism is provided except for the one-hot encoding used in Zhang & LeCun 2015. The results using this particular encoding are not better than any previous work.\n\nThe network architecture seems to be arbitrary and unusual. It was designed with 4 convolutional layers stacked together for the first layer, while a common choice is to just make it one convolutional layer with 4 times the output channels. The depth of the network is only 5, even with many layers listed in table 5.\n\nIt uses 1-D convolution across the word dimension (inferred from the feature size in table 5), which means the convolutional layers learn intra-word features for the entire text but not any character-level features. This does not seem to be reasonable.\n\nOverall, the lack of comparisons and the questionable choices for the networks render this work lacking significance to be published in ICLR 2018.", "The manuscript proposed to use prefix codes to compress the input to a neural network for text classification. It builds upon the work by Zhang & LeCun (2015) where the same tasks are used.\n\n\nThere are several issues with the paper and I cannot recommend acceptance of the paper in the current state. \n- It looks like it is not finished.\n- the datasets are not described properly. \n- It is not clear to me where the baseline results come from.\n They do not match up to the Zhang paper (I have tried to find the matching accuracies there).\n- It is not clear to me what the baselines actually are or how I can found more info on those.\n- the results are not remarkable. \n\nBecause of this, the paper needs to be updated and cleaned up before it can be properly reviewed. \n\nOn top of this, I do not enjoy the style the paper is written in, the language is convoluted. \nFor example: “The effort to use Neural Convolution Networks for text classification tasks is justified by the possibility of appropriating tools from the recent developments of techniques, libraries and hardware used especially in the image classification “\nI do not know which message the paper tries to get across here. 
\nAs a reviewer my impression (which is subjective) is that the authors used difficult language to make the manuscript look more impressive.\nThe acknowledgements should not be included here either. \n\n", "This paper proposes a new character encoding scheme for use with character-convolutional language models. This is a poor quality paper, is unclear in the results (what metric is even reported in Table 6), and has little significance (though this may highlight the opportunity to revisit the encoding scheme for characters).", " Thank you very much for your time and constructive comments. We have addressed the issues pointed out in your remarks and tried to make more evident the contributions of our work. We made important improvements on the quality of the text, the description of our proposal and presentation of the experimental results.\n \n In this new version, we provide a better description and justification of our proposal. In particular, we discuss why we took some decisions regarding the encoding. Similarly, we now include an additional neural network architecture in the comparative experiments. For completeness, we included a more detailed description of datasets and base models involved in the experiments.\n \n Our main concern with this paper is show that a more compact encoding procedure could reduce the computational footprint of using words codified character-by-character in text classification. Character-based text classification allows to handle less curated texts or texts in languages that have a lot of declensions, and with poor or none a priori pre-trained language modules (ala word2vec). \n \n In terms of classification performance, we have matched the results of Zhang & LeCun (2015) which represent the state of the art in this area and the departing point of this work. It should be noted that like them, we also match traditional methods. In this regard, in order to improve the readability of the paper we changed the comparison metric to accuracy, different of Zhang & LeCun (2015) that report error loss ($(1-accuracy)\\times100$).\n \n On the other hand, in terms of computational footprint, our approach is much faster than Zhang & LeCun (2015). We find that is is a relevant result as it makes the approach suitable for extended use. We are providing along with our paper the supporting code. We expect that with the collaboration of the community a streamlined implementation can be obtained and even better times will be reached.\n\n To our knowledge, our approach is the first who try to codify words into vectors using only characters composition in a sparse way. We are aware that there is room for improvement. In the process of updating the paper we included another neural network with positive results. Nevertheless, this direction should be properly explored in the future. In this paper we focused mainly on the comparison with the previous results by Zhang & Lecun (2015) but we are confident that other architectures will yield better results. \n \n Our main line of research is educational data mining where it is crucial to be able to handle texts produced by students, with equations, orthographic errors and not-so-formal language. 
In this scenario, we have a lot of interest in building better and faster solutions build upon the character-based approach originally put forward by Zhang & Lecun (2015).\n \n We appreciate that you take a moment an revise again that paper under the light of these comments and modifications.\n \n Regarding the writing style, we really apologize. Sometimes is difficult to express yourself in a foreign language. We did your best in this updated version to not give you these impressions. ", "we have not seen your question in time to help you on the reproducibility challenge, but you guess right, we used RELU\n\nWe are glad that you could replicate the main findings using just our instructions, but in this new version, we tried to give more information on how and why we took some decisions, in a way that could be easier to reproduce the same findings.\n \n We have prepared a Jupyter/IPython notebook that we publish online along with the paper. We did not publish it yet to not infringe the double-blind review policy. \n \nThank you ", "Thank you for your interest in this approach\n\n We did a updated version addressing your key recommendations. \n \n We are glad that you could replicate the main findings using just our instructions, but in this new version, We tried to give more information on how and why we took some decisions, in a way that could be easier to reproduce the same findings.\n \n We have prepared a Jupyter/IPython notebook that we publish online along with the paper. We did not published yet to not infringe the double-blind review policy. \n \n We invite you to read it again. We are open to any suggestion to better presents these findings.\n \n Thank you.", "Thank you for your interest in this approach\n\n We did a updated version addressing your key recommendations. \n \n In this new version, we have tried to give more information on how and why we took some decisions, in a way that could be easier to reproduce the same results.\n \n We have a Jupyter/IPython notebook ready with all the experiments that we intend to publish online along with the paper. We did not published yet to not infringe the double-blind review policy. \n \n We kindly invite you to read the paper again. We are open to any suggestion to better presents these findings.\n \n Thank you.", "Thank you for your comments.\n\n We created an updated version where we did our best to improve the quality of our presentation and express it in a clear way. \n \n In this new version, we provide a better description and justification of our proposal. In particular, we discuss why we took some decisions regarding the encoding. Similarly, we now include an additional neural network architecture in the comparative experiments. For completeness, we included a more detailed description of datasets and base models involved in the experiments.\n \n Regarding encodings comparison, in the initial stage of our research, we investigated others encodings, mainly Huffman and End Tag Dense Codes- ETDC. We discarded these because we did not found a way to represent words in a distinct way once we concatenate each char code. We made more clear this step in the manuscript and still working in a way to modify ETDC to make it distinct for each word.\n \n Our main concern with this paper is show that a more compact encoding procedure could reduce the computational footprint of using words codified character-by-character in text classification. 
Character-based text classification allows to handle less curated texts or texts in languages that have a lot of declensions and with poor or none a priori pre-trained language modules (ala word2vec). \n \n In terms of classification performance, we have matched the results of Zhang & LeCun (2015) which represent the state of the art in this area and the departing point of this work. It should be noted that like them, we also beat traditional methods. In this regard, in order to improve the readability of the paper, we changed the comparison metric to accuracy, different of Zhang & LeCun (2015) that report error loss ($(1-accuracy)\\times100$).\n \n On the other hand, in terms of computational footprint, our approach is much faster than Zhang & LeCun (2015). We find that is is a relevant result as it makes the approach suitable for extended use. We are providing along with our paper the supporting code. We expect that with the collaboration of the community a streamlined implementation can be obtained and even better times will be reached.\n\n To our knowledge, our approach is the first who try to codify words into vectors using only characters composition in a sparse way. We are aware that there is room for improvement. In the process of updating the paper, we included another neural network with positive results. Nevertheless, this direction should be properly explored in the future. In this paper, we focused mainly on the comparison with the previous results by Zhang & Lecun (2015) but we are confident that other architectures will yield better results. \n \n Our main line of research is educational data mining where it is crucial to be able to handle texts produced by students, with equations, orthographic errors and not-so-formal language. In this scenario, we have a lot of interest in building better and faster solutions build upon the character-based approach originally put forward by Zhang & Lecun (2015).\n \n We appreciate that you take a moment an revise again that paper under the light of these comments and modifications.", "Thank you for your time for helping us to better express our findings.\n\n We made important improvements in the quality of the text, the description of our proposal and presentation of the experimental results. \n \n In this new version, we provide a better description and justification of our proposal. In particular, we discuss why we took some decisions regarding the encoding. Similarly, we now include an additional neural network architecture in the comparative experiments. For completeness, we included a more detailed description of datasets and base models involved in the experiments.\n \n Our main concern with this paper is show that a more compact encoding procedure could reduce the computational footprint of using words codified character-by-character in text classification. Character-based text classification allows to handle less curated texts or texts in languages that have a lot of declensions and with poor or none a priori pre-trained language modules (ala word2vec). \n \n In terms of classification performance, we have matched the results of Zhang & LeCun (2015) which represent the state of the art in this area and the departing point of this work. It should be noted that like them, we also beat traditional methods. 
In this regard, in order to improve the readability of the paper, we changed the comparison metric to accuracy, different of Zhang & LeCun (2015) that report error loss ($(1-accuracy)\\times100$).\n \n On the other hand, in terms of computational footprint, our approach is much faster than Zhang & LeCun (2015). We find that is is a relevant result as it makes the approach suitable for extended use. We are providing along with our paper the supporting code. We expect that with the collaboration of the community a streamlined implementation can be obtained and even better times will be reached.\n\n To our knowledge, our approach is the first who try to codify words into vectors using only characters composition in a sparse way. We are aware that there is room for improvement. In the process of updating the paper, we included another neural network with positive results. Nevertheless, this direction should be properly explored in the future. In this paper, we focused mainly on the comparison with the previous results by Zhang & Lecun (2015) but we are confident that other architectures will yield better results. \n \n Our main line of research is educational data mining where it is crucial to be able to handle texts produced by students, with equations, orthographic errors and not-so-formal language. In this scenario, we have a lot of interest in building better and faster solutions build upon the character-based approach originally put forward by Zhang & Lecun (2015).\n \n We appreciate that you take a moment an revise again that paper under the light of these comments and modifications.", "Our reproducibility experiment was carried out in the context of the ICLR 2018 Reproducibility Challenge, where various groups are encouraged to reproduce the findings papers submitted to the ICLR 2018 conference. The intended outcome of the initiative is to emphasize the need for reproducibility in the fast-growing field of machine learning.\n\nAs contenders in the reproducibility challenge, we chose this paper, which describes a simple scheme for encoding text-data into matrix form, which we dubbed the CME. The encoding is applied to 8 datasets, which are then used as input to a convolutional neural network with the assumption that the algorithm will only deal with short text excerpts in the classification setting. The main claim of the paper is that similar performance to existing methods can be achieved using the CME, albeit at a fraction of the runtime.\n\nWe attempted to replicate these findings on 3 of the 8 datasets. Although the datasets were provided in a clear format and did not require further preprocessing, the contents of the datasets were not described nor were the questions of whether and how the particularities of each dataset affects the performance of the proposed algorithm. First, we implemented the encoding procedure using the specifications described in the paper, and encoded the datasets using the CME scheme. Then, we trained a convolutional neural network using the same architecture. Because neither the datasets nor the code were not supplied along with the conference paper, we attempted to implement both these methods ourselves, relying strictly on the specifications provided in the paper.\n\nIt was relatively straightforward for us to use the specifications to reproduce most of the methods described in the work. 
The encoding procedure is presented so punctiliously, and with sufficient examples, that a person with a modest level of expertise in ML could likely implement the encoder from scratch. The authors also provide an in-depth summary of the specifications (i.e. parameters, hyperparameters, computational infrastructure), which greatly facilitates any attempt to reproduce the findings.\nHowever, we started to question our implementation in light of the rather poor performance that we obtained. We were unable to replicate the findings of the paper, achieving accuracies matching those yielded by random multiclass classifiers instead. We reasoned that this poor performance was likely due to our making wrong assumptions about how to apply the encoding scheme to the training and test datasets. Because no substantial information was provided regarding exactly how the encoding scheme was to be applied to the training and test sets, we encoded these datasets in a completely independent manner, which we suspect was not what the authors had intended. As such, we make no claim of inaccurate findings, only that we were not able to properly replicate them given the specifications provided in the paper, the time constraints for the reproducibility experiment, and the lack of publicly available code.\n\nThe main strength of the paper lies in 1) the novel CME technique, which lends itself well to being used by convolutional neural networks, and 2) the detailed specification of both the CME and the neural network architecture. The language used throughout the report is readable and very helpful in aiding understanding of the methods.\nThe major drawback of the paper lies in the lack of information regarding the baselines and how exactly the encoding scheme is to be applied to the training and test sets. No adequate baseline methods were mentioned in the paper against which we could compare the promising CME. Furthermore, the nature of the metrics used to evaluate the neural nets was not explicitly stated, as only one number was reported per dataset. Only through careful investigation was it possible for us to determine that these numbers in fact correspond to test accuracies. In addition, it would have been interesting to use a host of metrics, and not just one measure of fitness, to establish performance.\n\nIn summary, we would like to commend the authors for their contribution to the field of machine learning. The CME is a promising novel take on the use of convolutional neural nets in the text classification setting. The report provides excellent clarity on the underlying methods. However, since neither the code nor details on how to adequately apply the encoding scheme were supplied, reproducing the major findings of the paper proved to be difficult.", "This is an executive summary of a more detailed report that can be found here: \nhttps://de.scribd.com/document/367280305/COMP-551-Project-4-Reproducible-Machine-Learning \nThis project is part of the ICLR2018 Reproducibility Challenge. The goal of the challenge is to improve the quality of submissions by highlighting the importance of reproducibility. 
In this review we present the main challenges we faced while reproducing the ideas presented in this paper as well as highlighting which aspects were easily reproducible.\n\nWe gathered three of the analyzed data sets, implemented the encoding scheme and the convolutional neural network according to the description given in the paper.\n\nOur results (test error as well as running times) are very similar to the results of the original paper, and we were able to reproduce it for the most part. The test errors achieved on the datasets were the following (original results in parenthesis): \nag_news 11.89 (12.33), dbpedia 2.33 (2.07) & yelp_polarity 7.84 (7.96). \nOur training time per epoch (trained on a Nvidia Tesla K80 whereas the authors used a Nvidia 1080ti): \nag_news 4.87 min (3 min), dbpedia 23.11 min (18 min) & yelp_polarity 20.17 min (21 min).\n\nHowever, as can be seen from these values, we could not reproduce exact results. This was mainly due to the fact, that not all of the details of preprocessing, network architecture and hyperparameters were available to us. In the following, we address the most relevant points we faced in our replication:\n\nThe original paper does not contain any direct links or any other information on the data sets used. All the information on the data sets was gathered from papers cited. Therefore, extra work and time had to be spent tracking these data sets down. Once we had found the data sets, we were able to exactly replicate the split into training and test set, because this split was already provided.\n\nThe description of the encoding function is very clear and examples make it easy to understand. We're confident in our replication of the encoding process. However, the paper doesn't go into detail on the preprocessing of the data sets. We were unsure how newline characters in the documents were preprocessed for instance. For replication purposes, a detailed description of the preprocessing employed would have been helpful.\n\nThe network architecture is presented in form of a table. Some important implementation details are missing (activation functions, loss function used), and others have to be deduced by observing the output dimensions of the individual layers of the network. This makes it difficult to exactly replicate the network the authors used. Again, a more detailed description would have been helpful.\n\nThe computing infrastructure (including library versions) used was clearly explained, and even though we did not possess the exact same environment, we believe that one would be able to set up the exact same infrastructure with the information provided.\n\nWe did not have access to the code of the authors and therefore had to implement the full model on our own. In some cases we were missing information on parameters or how exactly things were implemented (see above). Our implementation could therefore be different from the one of the original authors, affecting computation times. However, since the paper was pretty clear for the most part, and our results resemble the ones of the authors, we are relatively certain that this has not been a big issue. \n\nFinally, some interaction with the authors was necessary to clarify a few points that were left ambiguous after reading their paper. Some of the results and tables were not described very extensively by the authors and therefore needed clarification. We contacted them using this platform and received a quick answer. 
We also contacted them about the activation functions used in the network but received no reply before the submission deadline.\n\nThis review was created by Seara Chen, Benjamin Paul-Dubois-Taine and Lino Toran Jenner, students of McGill University, Montreal.", "Hello again,\n\nI am part of the same McGill team working on your paper. Thank you very much for taking the time to answer our question last time.\n\nWe have another question:\nWhat activation functions did you use in the CNN?\n\nThank you in advance.", "Hello,\n\nThank you for your interest in this approach.\n\nTable 6 results are the testing error, i.e., (1 - accuracy)*100. \n\nThese results came from Table 4 in the Zhang, Zhao & LeCun paper: https://arxiv.org/pdf/1509.01626.pdf . We just compare their results with ours. \n\n\n\n", "Hello,\n\nI am part of a team at McGill University participating in the ICLR 2018 reproducibility challenge.\nWe are currently trying to reproduce results from your paper.\n\nWe have a few questions and would be very thankful if you could answer them:\n1) What is the loss function used to compare models in Table 6?\n2) Can you provide more details about your implementation of the traditional models described in Table 6?\n\nThank you in advance." ]
[ -1, 4, 3, 2, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, 5, 5, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "SyLE3TzXz", "iclr_2018_SkYXvCR6W", "iclr_2018_SkYXvCR6W", "iclr_2018_SkYXvCR6W", "BJ1fw5Flf", "ryxk9FaZM", "BJ9KbpZMz", "BJWqQmMGG", "rkst5E4Jz", "rkB5-TcgG", "iclr_2018_SkYXvCR6W", "iclr_2018_SkYXvCR6W", "iclr_2018_SkYXvCR6W", "rkHO_abWG", "iclr_2018_SkYXvCR6W" ]
iclr_2018_Sy5OAyZC-
On the Use of Word Embeddings Alone to Represent Natural Language Sequences
To construct representations for natural language sequences, information from two main sources needs to be captured: (i) semantic meaning of individual words, and (ii) their compositionality. These two types of information are usually represented in the form of word embeddings and compositional functions, respectively. For the latter, Recurrent Neural Networks (RNNs) and Convolutional Neural Networks (CNNs) have been considered. There has not been a rigorous evaluation regarding the relative importance of each component to different text-representation-based tasks; i.e., how important is the modeling capacity of word embeddings alone, relative to the added value of a compositional function? In this paper, we conduct an extensive comparative study between Simple Word Embeddings-based Models (SWEMs), with no compositional parameters, relative to employing word embeddings within RNN/CNN-based models. Surprisingly, SWEMs exhibit comparable or even superior performance in the majority of cases considered. Moreover, in a new SWEM setup, we propose to employ a max-pooling operation over the learned word-embedding matrix of a given sentence. This approach is demonstrated to extract complementary features relative to the averaging operation standard to SWEMs, while endowing our model with better interpretability. To further validate our observations, we examine the information utilized by different models to make predictions, revealing interesting properties of word embeddings.
rejected-papers
This work presents a strong baseline model for several NLP-ish tasks such as document classification, sentence classification, representation learning based NLI, and text matching. In terms of originality, reviewers found that "there is not much contribution in terms of technical novelty" but that "one might also conclude that we need more challenging dataset". There was significant discussion about whether it "sheds new lights on limitations of existing methods" or whether the results were "marginally surprising". In terms of quality, reviewers found it to be an "insightful analysis" and noted that these "SWEMs should be considered a strong baseline in future work". There was significant discussion with the AC about the significance of the work. In the opinion of the AC, reviewers were too quick to accept the authors' novelty claims, and did not push them enough to include other baselines in their tables that were not overly deep models. In particular, the AC felt that important numbers were left out of the experiment tables for document classification, which muddied the results. The response of the authors was: "Moreover, fasttext and our SWEM variants all belong to the category of simpler methods (with parameter-free compositional functions). Since our motivation is to explore the necessity of employing complicated compositional functions for various NLP tasks, we do not think it is necessary for us to make any comparisons between fasttext and SWEM." In addition, when a reviewer pointed out the lack of inclusion of FOFE embeddings, the authors noted something similar: "Besides, we totally agree that developing sentence embeddings that are both simple and efficient is a very promising research direction (FOFE is a great work along this line)." The reviewer correctly pointed out related work that shows a model very similar to what the authors propose. In general, this seems like evidence that the techniques are known, not that they are significant and novel.
train
[ "BJxQqeTEf", "B1wBlYKNz", "rJ54aZ9gG", "r1HVB4DxG", "rkEpbo5xz", "H1ls7qmVf", "ryDqLvfNz", "SJC634fVG", "SJjhTLgEG", "BJkidEgNz", "S1V6EaJ4M", "r1QGIf37M", "S1Hg83yVz", "rJpcO1gQG", "r1UNBe6Mz", "SyDybe6zz", "BkZ7kepfG" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author" ]
[ "More experiments have been conducted for the sequence tagging tasks: we shuffled all the words within each input sentence (along with the corresponding labels) for the training set and trained a BI-LSTM-CRF model on both datasets. For NER, the F1 score drops from 90.10 to 85.79; while for chunking, the F1 score drops from 94.46 to 90.68. This observation indicates that the word-order information within a sentence does play an important role in sequence tagging problems, which is in consistent with our SWEM-CRF model’s results.\n\nWith these additional investigations regarding the concerns you pointed out, we suppose that our contributions in general should now be much more solid. Looking forward to your feedback regarding our update, and we would be very much interested in an open discussion to find out if there are any remaining unfavorable factors. Thanks a lot for your time!", "Thanks for your update and valuable suggestion! We totally agree that sequence tagging should be a very important NLP problem to be considered, which could make the systematic comparisons in our paper more diverse and comprehensive. In this regard, we have tried on two (structured) sequence tagging tasks (i.e. chunking, NER). Specifically, we have considered the standard CoNLL2000 chunking and CoNLL2003 NER datasets. The corresponding results (F1 score) are shown as below:\n\n Dataset CNN-CRF [1] BI-LSTM-CRF [2] SWEM-CRF\n\nCoNLL2000 94.32 94.46 90.34\n\nCoNLL2003 89.59 90.10 86.28\n\nSWEM-CRF indicates that CRF is directly operated on top of the word embedding layer and make predictions for each word (there is no contextual/word-order information before CRF layer, compared to CNN-CRF or BI-LSTM-CRF). As shown above, CNN-CRF and BI-LSTM-CRF consistently outperform SWEM-CRF on both sequence tagging tasks, although the training takes around 4 to 5 times longer (for BI-LSTM-CRF) than SWEM-CRF. This suggests that for chunking and NER, compositional functions such as LSTM or CNN are very necessary, because of the sequential (order-sensitive) nature of sequence tagging tasks. \n\nOne interesting future direction is to design some models that are simple yet still effective at capturing the contextual information needed for sequence tagging tasks. [3] is a great work along this line, which has proposed a simple and fast model for NER based on FOFE. We thank you again for pointing out the FOFE paper! \n\nAll told: again, thanks for the helpful, critical feedback! We think that the paper, with these additional results, should have much more general implications in NLP than it was on submission, and sincerely hope you will agree.\n\n\n[1] Collobert, Ronan, et al. \"Natural language processing (almost) from scratch.\" Journal of Machine Learning Research 12.Aug (2011): 2493-2537.\n[2] Huang, Zhiheng, Wei Xu, and Kai Yu. \"Bidirectional LSTM-CRF models for sequence tagging.\" arXiv preprint arXiv:1508.01991 (2015).\n[3] Xu, Mingbin, and Hui Jiang. \"A FOFE-based Local Detection Approach for Named Entity Recognition and Mention Detection.\" ACL 2017.\n", "This paper presents a very thorough empirical exploration of the qualities and limitations of very simple word-embedding based models. Average and/or max pooling over word embeddings (which are initialized from pretrained embeddings) is used to obtain a fixed-length representation for natural language sequences, which is then fed through a single layer MLP classifier. 
In many of the 9 evaluation tasks, this approach is found to match or outperform single-layer CNNs or RNNs.\n\nThe varied findings are very clearly presented and helpfully summarized, and for each task setting the authors perform an insightful analysis.\n\nMy only criticism would be the fact that the study is limited to English, even though the conclusions are explicitly scoped in light of this. Moreover, I wonder how well the findings would hold in a setting with a more severe OOV problem than is perhaps present in the studied datasets.\n\nBesides concluding from the presented results that these SWEMs should be considered a strong baseline in future work, one might also conclude that we need more challenging datasets!\n\nMinor things:\n- It wasn't entirely clear how the text matching tasks are encoded. Are the two sequences combined into a single sequence before applying the model, or something else? I might have missed this detail.\n\n- Given the two ways of using the Glove embeddings for initialization (direct update vs mapping them with an MLP into the task space), it would be helpful to know which one ended up being used (i.e. optimal) in each setting.\n\n- Something went wrong with the font size for the remainder of the text near Figure 1.\n\n** Update **\nThanks for addressing my questions in the author response.\n\nAfter following the other discussion thread about the novelty claims, I believe I didn't weigh that aspect strongly enough in my original rating, so I'm revising it. I remain of the opinion that this paper offers a useful systematic comparison that goes sufficiently beyond the focus of the two related papers mentioned in that thread (fasttext and Parikh's).\n", "This paper empirically investigates the differences realized by using compositional functions over word embeddings as compared to directly operating on the word embeddings. That is, the authors seek to explore the advantages afforded by RNN/CNN based models that induce intermediate semantic representations of texts, as opposed to simpler (parameter-free) approaches to composing these, like addition. \n\nIn sum, I think this exploration is interesting, and suggests that we should perhaps experiment more regularly with simple aggregation methods like SWEM. On the other hand, the differences across the models are relatively modest, and the data resists clear conclusions, so I'm not sure that the work will be very impactful. In my view, then, this work does constitute a contribution, albeit a modest one. I do think the general notion of attempting to simplify models until performance begins to degrade is a fruitful path to explore, as models continue to increase in complexity without compelling evidence that this is always needed.\n\nStrengths\n---\n+ This paper does highlight a gap in existing work, as far as I am aware: namely, I am not sure that there are generally known trade-offs associated with different compositional models over token embeddings for NLP. However, it is not clear that we should expect there to be a consistent result to this question across all NLP tasks.\n\n+ The results are marginally surprising, insofar as I would have expected the CNN/RNN (particularly the former) to dominate the simpler aggregation approaches, and this does not seem borne out by the data. Although this trend is seemingly reversed on the short text data, muddying the story. 
\n\nWeaknesses\n---\n- There are a number of important limitations here, many of which the authors themselves note, which mitigate the implications of the reported results. First, this is a small set of tasks, and results may not hold more generally. It would have been nice to see some work on Seq2Seq tasks, or sequence tagging tasks at least. \n\n- I was surprised to see no mention of the \"Fixed-Size Ordinally-Forgetting Encoding Method\" (FOFE) proposed by Zhang et al. in 2015, which would seem to be a natural point of comparison here, given that it sits in a sweet spot of being simple and efficient while still expressive enough to preserve word-order information. This actually seems like a pretty glaring omission given that it meets many of the desiderata the authors put forward. \n\n- The interpretability angle discussed seems underdeveloped. I'm not sure that being able to identify individual words (as the authors have listed) meaningfully constitutes \"interpretability\" -- standard CNNs, e.g., lend themselves to this as well by tracing back through the filter activations. \n\n- Some of the questions addressed seem tangential to the main question of the paper -- e.g., word vector dimensionality seems an orthogonal issue to the composition function, and would influence performance for the more complex architectures as well.\n\nSmaller comments\n---\n- On page 1, the authors write \"By representing each word as a fixed-length vector, these embeddings can group semantically similar words, while explicitly encoding rich linguistic regularities and patterns\", but actually I would say that these *implicitly* encode such regularities, rather than explicitly. \n\n- \"architecture in Kim 2014; Collobert et al. 2011; Gan et al. 2017\" -- citation formatting a bit weird here.\n\n\n*** Update based on author response *** \n\nI have read the authors response and thank them for the additional details. \n\nRegarding the limited set of problems: of course any given work can only explore so many tasks, but for this to have general implications in NLP I would maintain that a standard (structured) sequence tagging task/dataset should have been considered. This is not about the number of datasets, but rather than diversity of the output spaces therein.\n\nI appreciated the additional details regarding FOFE, which as the authors themselves note in their response is essentially a generalization of SWEM. \n\nOverall, the response has not changed my opinion on this paper: I think this (exploring simple representations and baselines) is an important direction in NLP, but feel that the paper would greatly benefit from additional work.\n\n", "This paper extensively compares simple word embedding based models (SWEMs) to RNN/CNN based-models on a suite of NLP tasks. \nExperiments on document classification, sentence classification, and natural language sequence matching show that SWEMs perform competitively or even better in the majority of cases.\nThe authors also propose to use max pooling to complement average pooling for combining information from word embeddings in a SWEM model to improve interpretability.\n\nWhile there is not much contribution in terms of technical novelty, I think this is an interesting paper that sheds new lights on limitations of existing methods for learning sentence and document representations. 
\nThe paper is well written and the experiments are quite convincing.\n- An interesting finding is that word embeddings are better for longer documents, whereas RNN/CNN models are better for shorter text. Do the authors have any sense on whether this is because of the difficulty in training an RNN/CNN model for long documents or whether compositions are not necessary since there are multiple predictive independent cues in a long text?\n- It would be useful to include a linear classification model that takes the word embeddings as an input in the comparison (SWEM-learned).\n- How crucial is it to retrain the word embeddings on the task of interest (from GloVe initialization) to obtain good performance?", "We agree and are well aware that most people are using very thin (one-layer) CNNs, rather than 29-layer CNNs, for NLP problems. We specifically mentioned in the introduction part that most of our comparisons were considering one-layer recurrent/convolutional models (except for document classification tasks where deep models’ results were available). Besides, although fasttext and Parikh’s results have manifested the advantages of simpler model on certain tasks, there were hundreds of recent papers on text representation learning that were based on LSTM or CNN compositional functions, without comparisons to simpler methods. In this regard, the general trade-offs among different compositional functions have not been widely recognized yet. There is a clear gap here for research.\n\nMore importantly, the motivations of the two papers you mentioned are different from ours. As a result, we have presented a much more comprehensive investigation regarding the necessity of employing complicated compositional functions (LSTM or CNN) and have answered many research questions they did not discuss: when (on what type of tasks) do simpler methods work better? When are CNN or LSTM-based models necessary? Why are the advantages provided by complicated compositional functions so limited on tasks such as text matching or document categorization, in other words, why are simpler methods so efficient on these problems? Neither the fasttext paper nor Parikh’s work has explored these interesting questions.\n\nBesides, from Parikh’s results, we cannot directly draw the conclusion that simplicity is better, because the superior results they got may stem from the fact that the compare-aggregate framework they proposed is very efficient, which has made LSTM or CNN unnecessary. Moreover, they have only shown results on SNLI dataset, so that their observations may not apply to other text matching problems in general (e.g. paraphrase identification, answer sentence selection).\n\nMoreover, fasttext and our SWEM variants all belong to the category of simpler methods (with parameter-free compositional functions). Since our motivation is to explore the necessity of employing complicated compositional functions for various NLP tasks, we do not think it is necessary for us to make any comparisons between fasttext and SWEM.\n", "Okay, I understand your claim. \n\n\n- While people have unfortunately made exaggerated claims about LSTMs or 29-layer CNNs for these tasks, I think most people in NLP use word-based models or very thin one layer CNNs. My worry is that you are emphasizing the use of big models, but ignoring an influential set of results on the same tasks with single word or bigram models, e.g. like fasttext (https://arxiv.org/pdf/1607.01759.pdf) or Parikh's results for SNLI. 
\n\n- Can you make it clear which CNNs you are using? I agree with your conclusion that they are overfitting, but I want to be sure you are trying very simple CNNs for these tasks. For instance in fasttext (https://arxiv.org/pdf/1607.01759.pdf) with a bigram (kernel 2 model) they get similar really good results on the document classification tasks. \n\n", "As to the claim that ‘SWEM consistently outperforms…, on a wide range of training data proportions’, we are considering the case where only part of the training data is available. For example, as shown in Figure 2, for both Yahoo! Ans and SNLI datasets, SWEM consistently performs much better than CNN or LSTM on the wide range of 0.1% ~ 10% proportion of original training data. With the whole training set, SWEM typically performs comparable or a bit better than LSTM or CNN on text matching and document topic prediction tasks. This indicates that SWEM is much less likely to overfit with limited training observations. \n\nFor LSTM and CNN, most of our results are directly used from previous literature wherever available. As to SWEM, there are not much hyperparameters to be tuned thanks to its simplicity and our reported results are quite robust to the selection of hyperparameters. We will make our code publicly available after publication. \n\nThe reasons that SWEM outperforms LSTM or CNN in some cases could be two-fold: 1) as already discussed in the paper, because of the simplicity, SWEM could be much easier to be optimized and thus may converge to some better local optima; 2) as suggested in [1], simpler methods tend to be better at capturing semantics than RNN’s and LSTM’s, although ignoring word-order information. Therefore, for datasets where word-order information is not important (such as Yahoo! Ans or SNLI), directly optimizing the word embeddings (semantics), as in SWEM, could be a better strategy.\n\n[1] Arora, Sanjeev, Yingyu Liang, and Tengyu Ma. \"A simple but tough-to-beat baseline for sentence embeddings.\" (2016).\n", "Sure, I'm all for simplicity on text matching. But your claim goes further, you say: \"Surprisingly, SWEM consistently outperforms CNN and LSTM models by a large margin, on a wide range of training data proportions. \" \n\nI get that your hyperparameters may be better than past experiments. (And Parikh has shown that simple word-based models do really well on these problems). \n\nBut what is going on here? We are in agreement that a single-layer-thin CNN should certainly be able to replicate the SWEM results. (And if you count embedding parameters (which I think you should), has roughly the same number of parameters.) So why is it scoring so much worse in these experiments? It shouldn't hurt so much to have CNN/ngrams vs SWEM/unigrams. ", "We argue that good papers are not always about designing novel algorithms. In the extreme case, you can think of SWEM as a special case of CNN. You can even think of SWEM-aver as a special case of RNN where the transition function is just an adding operation. However, there is no doubt that SWEM models are much simpler than CNN or LSTM, in terms of both computational complexity/speed and number of parameters, but they typically ignore the sequential/compositional information (e.g. word-order features). From this perspective, there is not much work that has investigated the trade-offs among different compositional functions. 
Our work aims to understand this important research problem with solid experiments and careful analysis.\n\nThus, the motivation of our paper is not to claim that we develop a new model/algorithm, but to discuss/understand the general trade-offs stated above and to answer the following research questions: when is it necessary to employ more complicated compositional function, such as LSTM or CNN, for various NLP tasks? What information, other than the semantic meaning of individual words, is needed for distinct problems? Given the observation that SWEM performs very strong on text matching and document categorization, what semantic features are taken advantage of by SWEM to make the final predictions? How robust are different compositional functions with a relatively small number of training data? We did not know the answer to these questions before our investigation. Max-pooling is introduced while we are trying to answer the third question, and it turns out to help us understand how SWEM works and boost the SWEM performance as a side benefit.\n", "My claim is that this is a semantic distinction. Why wouldn't a kernel size-1 convolution of the same # of features as embedding size, perform as roughly as fast (asymptotically), have roughly the same number of parameters, and perform at least as well your methods? If the kernel was an identity, wouldn't this CNN be exactly the same as SWEM. And of course max-pooling, min-pooling, sum-pooling have all been tried extensively in the single layer CNN context. ", "I want to push back on the novelty claims here. I think we are in agreement that 1) CNN with max pooling is widely used and shown to be effective, and 2) has been shown in many papers to yield greater interpretability. The claim here is that max-over time pooling with embeddings makes this novel. This feels like a stretch. At heart, embeddings are just a kernel-1 convolution. And BoW is just sum-over time pooling. While I don't have a reference for the exact use case of kernel-1 convolution with max-over-time pooling, it has very likely been tried before. ", "As stated above, the main contribution of our paper is to discuss the general trade-offs among distinct compositional functions for various NLP tasks. Besides, we propose, for the first time, to apply max-pooling operation (as a new type of compositional function) directly over the word embedding matrix and have demonstrated its advantages (performance gains, interpretability). To the best of our knowledge, the use of max-pooling operation alone as a compositional function has not been explored before. If possible, could you please let us know the reference that has tried the same setup as our SWEM-max model?\n", "As for the novelty concern raised by the reviewer 1, we want to further highlight our contributions more clearly.\n\nThere are some recent works finding that CNN/LSTM may not be necessary for certain NLP problems. However, the general trade-offs among different compositional functions (simple operations versus more complicated ones) for various NLP applications have not been widely recognized yet and are far from systematic. Our work aims to bridge this gap by conducting an extensive comparative study on a wide range of text-representation-based tasks.\n\nIn this regard, the main contribution of our paper is not to achieve state-of-the-art results, but to investigate the relative importance of word embeddings and compositional functions, as well as to understand the observed results by unveiling the underlying reasons. 
Therefore, we keep the models to be compared as simple as possible, so that the functionality of different compositional functions could be highlighted.\n\nMoreover, although max-pooling operation has been employed a lot along with convolutions in NLP, our utilization of max-pooling here is different in two main aspects: 1) as far as we are concerned, we are the first to apply max-pooling directly over word embeddings matrix; 2) this operation is shown to endow our SWEM-max model with improved transparency/interpretability (which is one major motivation of our work), and to extract complementary features with averaging operation as well.\n\nTo conclude, our work discovers several general rules (along with careful analysis) on how to rationally choose compositional functions for different NLP problems, which may let us rethink the necessity of employing CNN/LSTM in certain application scenarios. Besides, another interesting research direction, based on our findings, is to develop more challenging NLP datasets that require higher-level language understanding capabilities.", "Thanks for your constructive feedback!\n\n- Although our paper has discussed a limited set of problems (which is true for almost any research), we argue that we have explored 15 different NLP datasets (detailed information in Supplementary), which should have covered a wide range of real-world application scenarios. More importantly, our work also sheds lights on how SWEM model works and what types of information are needed for distinct tasks. Therefore, we suppose that our conclusions here should be helpful and general in many cases of interest. \n\nFor example, if we are solving a text sequence matching problem where word-order information does not contribute a lot (including textual entailment, paraphrase identification, question answering), according to our research, we would know that employing complicated compositions, such as LSTM or CNN, may not be necessary. In this regard, our work reveals several general rules (along with careful analysis) on rationally selecting model for various NLP tasks, which should be useful for future research. \n\n- “Interpretability” definition: we think that there are some misunderstandings here. The key of our “interpretability” here is that we can endow each dimension of word embeddings, learned by SWEM-max, with a topic-specific meaning. That is, embeddings for individual words with a shared semantic topic typically have their largest values in a shared dimension. \n\nWe are aware that word embeddings, such as Word2vec, can also be interpreted with some simple vector arithmetics (e.g. element-wise addition), but we suppose that the property of word vectors mentioned above could be an even more straightforward interpretation regarding how information has been encoded in word vectors. This type of “interpretability” has been previously discussed in [1, 2].\n\n- FOFE model: Thanks for pointing out this inspiring reference. The idea of employing a constant forgetting factor to model word-order information is very interesting. In this regard, we implemented the FOFE model and tested it on both Yahoo and Yelp Polarity datasets. We experimented with different choice of the constant forgetting factor (\\alpha):\n\n\\alpha 0.9 0.99 0.999 0.9999 1.0 SWEM-aver SWEM-concat\nYelp P. 84.58 93.01 93.81 93.79 93.48 93.59 93.76\nYahoo! 
Ans 72.66 72.72 73.03 72.82 72.97 73.14 73.53\n\nIt is worth noting that when \\alpha = 1, FOFE is very similar to the SWEM-aver model, except for the fact that FOFE takes the sum over all words, rather than the average. As shown above, with a careful selection of \\alpha, FOFE can get slightly better performance on the Yelp dataset (with \\alpha = 0.999), compared to SWEM-concat. On the Yahoo dataset, however, we do not observe significant performance gains with the FOFE model. These results are consistent with our observations that word-order features are necessary for sentiment analysis, but not for topic prediction. We will include this reference and the additional results in the revised version. \n\nBesides, we totally agree that developing sentence embeddings that are both simple and efficient is a very promising research direction (FOFE is a great work along this line).\n\n- Thanks for pointing out the wording and format issue. We will fix them accordingly in the revision.\n\nHopefully our clarifications could address the concerns and questions raised in your review. Thanks!\n\n[1] Lipton, Zachary C. \"The mythos of model interpretability.\" arXiv preprint arXiv:1606.03490 (2016).\n[2] Subramanian, Anant, et al. \"SPINE: SParse Interpretable Neural Embeddings.\" arXiv preprint arXiv:1711.08792 (2017).\n", "Thanks for your positive review!\n\n- For the text matching tasks, we first use a certain (single) compositional function (mean/max pooling, LSTM or CNN) to encode both sequences into two fixed-length vectors. Then we compare the two vectors by taking their concatenation, element-wise subtraction and element-wise product. These three features are concatenated together and further sent to an MLP classifier for prediction.\n\n- We found that the following tasks performed stronger empirically by mapping with an MLP, while keeping the GloVe embeddings fixed: SNLI, MultiNLI, MR, SST-1, SST-2 and TREC. This will be included in a future edition.\n\n- We agree that extending our investigation to other languages would be an interesting future direction to pursue. Besides, we definitely need more challenging datasets (where higher-level semantic features can be leveraged) for a deeper understanding of natural language!\n\n- OOV problem: empirically, we found that the performance of SWEMs is not sensitive to the choice of vocabulary size, in other words, the number of OOV words. As discussed in the Supplementary, the key words used for predictions are typically of a frequency of around 200 to 300 in the training set. Therefore, we conjecture that treating those relatively rare words (e.g. appearing fewer than 50 times) as OOV would not have a big impact on the final results.\n\n- Thanks for pointing out. We will fix the font size issue in the revised version.\n", "Thanks for your positive feedback!\n\n- According to our experiments, we tend to think that compositions are not as necessary for longer documents as for short sentences, which is the main reason that SWEM performs comparably to or even better than RNN or CNN. The evidence here is two-fold: first, for Yelp review datasets, the text sequences considered are also long documents. However, since word-order features are necessary for sentiment analysis tasks (as demonstrated from multiple perspectives in the paper), CNN or LSTM has shown better results than SWEM. This indicates that even in the case of modeling longer text, LSTM and CNN could potentially take advantage of compositional (word-order) features if necessary. 
Second, we did observe that there are typically multiple key words (i.e. predictive independent cues) in a longer text for prediction, especially in the case of topic predictions (where the key words could be very topic-specific). This may intuitively explain why compositions are not necessary for document categorization.\n\n- Thanks for suggesting this! We agree that including a linear classifier comparison would be useful. In this regard, we trained and tested our SWEM-concat model along with a linear classifier (denoted SWEM-linear); the results are shown below (word embeddings are initialized from GloVe and directly updated during training): \n\nModel Yahoo! Ans. Yelp P.\nSWEM-concat\t 73.53 93.76\nSWEM-linear 73.18 93.66\n\nAs shown above, employing a linear classifier only leads to a very small performance drop for both the Yahoo and Yelp datasets. This observation highlights that the SWEM model is able to extract robust and informative sentence representations.\n\n- It is quite necessary to fine-tune the GloVe embeddings. As discussed in the paper, an intrinsic difference between GloVe and SWEM-learned embeddings is that the latter are very sparse. This is closely related to the fact that SWEM is utilizing the key words to make predictions. As a result, we would need to update the GloVe embeddings or transform them to another space to boost the model performance. At the same time, we also found that for large-scale datasets (such as the Yahoo or Yelp datasets), initializing with GloVe does not contribute a lot to the final results (i.e. randomly initializing the word embeddings leads to similar performance).\n" ]
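A minimal sketch of the sentence-matching feature construction described in the responses above (the helper name and toy dimensions are assumptions; the two sentence vectors could come from any of the encoders discussed, here plain averaged embeddings):

```python
import numpy as np

def match_features(u, v):
    """Combine two fixed-length sentence vectors for a matching task.

    Concatenates the two vectors, their element-wise difference and their
    element-wise product; the result is what gets fed to the MLP classifier.
    """
    return np.concatenate([u, v, u - v, u * v])

# Toy usage: two sentences encoded by averaging their 300-d word embeddings.
rng = np.random.default_rng(1)
premise = rng.normal(size=(7, 300)).mean(axis=0)
hypothesis = rng.normal(size=(4, 300)).mean(axis=0)
feats = match_features(premise, hypothesis)
print(feats.shape)   # (1200,) = 4 x 300
```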
[ -1, -1, 7, 5, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, 4, 4, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "r1HVB4DxG", "r1HVB4DxG", "iclr_2018_Sy5OAyZC-", "iclr_2018_Sy5OAyZC-", "iclr_2018_Sy5OAyZC-", "ryDqLvfNz", "SJC634fVG", "SJjhTLgEG", "BJkidEgNz", "S1V6EaJ4M", "S1Hg83yVz", "rJpcO1gQG", "r1QGIf37M", "iclr_2018_Sy5OAyZC-", "r1HVB4DxG", "rJ54aZ9gG", "rkEpbo5xz" ]
iclr_2018_Byht0GbRZ
STRUCTURED ALIGNMENT NETWORKS
Many tasks in natural language processing involve comparing two sentences to compute some notion of relevance, entailment, or similarity. Typically this comparison is done either at the word level or at the sentence level, with no attempt to leverage the inherent structure of the sentence. When sentence structure is used for comparison, it is obtained during a non-differentiable pre-processing step, leading to propagation of errors. We introduce a model of structured alignments between sentences, showing how to compare two sentences by matching their latent structures. Using a structured attention mechanism, our model matches possible spans in the first sentence to possible spans in the second sentence, simultaneously discovering the tree structure of each sentence and performing a comparison, in a model that is fully differentiable and is trained only on the comparison objective. We evaluate this model on two sentence comparison tasks: the Stanford natural language inference dataset and the TREC-QA dataset. We find that comparing spans results in superior performance to comparing words individually, and that the learned trees are consistent with actual linguistic structures.
rejected-papers
This work introduces a new type of structured attention network that learns latent structured alignments between sentences in a fully differentiable manner, which allows the network to learn not only the target task, but also the latent relationships. Reviewers seem partial to the idea of the work, and its originality, but have issues with the contributions. In particular: - The reviewers note that the gains in performance from using this approach are quite small and do not outperform previous structured approaches.
train
[ "HJem9rYlf", "Sybe_7qlG", "S1zkYGm-G", "SJ2-1ibmz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Summary:\nThis paper introduces a structured attention mechanisms to compute alignment scores among all possible spans in two given sentences. The span representations are weighted by the spans marginal scores given by the inside-outside algorithm. Experiments on TREC-QA and SNLI show modest improvement over the word-based structured attention baseline (Parikh et al., 2016).\n\nStrengths:\nThe idea of using latent syntactic structure, and computing cross-sentence alignment over spans is very interesting. \n\nWeaknesses:\nThe paper is 8.5 pages long.\n\nThe method did not out-perform other very related structured attention methods (86.8, Kim et al., 2017, 86.9, Liu and Lapata, 2017)\n\nAside from the time complexity from the inside-outside algorithm (as mentioned by the authors in conclusion), the comparison among all pairs of spans is O(n^4), which is more expensive. Am I missing something about the algorithm?\n\nIt would be nice to show, quantitatively, the agreement between the latent trees and gold/supervised syntax. The paper claimed “the model is able to recover tree structures that very closely mimic syntax”, but it’s hard to draw this conclusion from the two examples in Figure 2.\n", "This paper proposes a model of \"structured alignments\" between sentences as a means of comparing two sentences by matching their latent structures. Overall, this paper seems a straightforward application of the model first proposed by Kim et al. 2017 with latent tree attention.\n\nIn section 3.1, the formula for p(c|x) looks wrong: c_{ijk} are indicator variables. but where are the scores for each span? I think it should be c_{ijk} * \\delta_{ijk} under the summations instead.\n\nIn the same section, the expression for \\alpha_{ij} seems to assume that \\delta_{ijk} = \\dlta_{ij} regardless of k. I.e. there are no production rule scores (transitions). This seems rather limiting, can you comment on that?\n\nIn the answer selection and NLI experiments, the proposed model does not beat the SOTA, and is only marginally better than unstructured decomposable attention. This is rather disappointing.\n\nThe plots in Fig 2 with the marginals on CKY charts are not very enlightening. How do this marginals help solving the NLI task?\n\nMinor comments:\n- Sec. 3: \"Language is inherently tree structured\" -- this is debatable...\n- page 8: (laf, 2008): bad formatted reference", "This paper describes the use of latent context-free derivations, using\na CRF-style neural model, as a latent level of representation in neural\nattention models that consider pairs of sentences. The model implicitly\nlearns a distribution over derivations, and uses marginals under this\ndistribution to bias attention distributions over spans in one sentence\ngiven a span in another sentence.\n\nThis is an intriguing idea. I had a couple of reservations however:\n\n* The empirical improvements from the method seem pretty marginal, to the\npoint that it's difficult to know what is really helping the model. I would\nliked to have seen more explanation of what the model has learned, and\nmore comparisons to other baselines that make use of attention over spans.\nFor example, what happens if every span is considered as an independent random\nvariable, with no use of a tree structure or the CKY chart?\n\n* The use of the \\alpha^0 vs. \\alpha^1 variables is not entirely clear. Once they\nhave been calculated in Algorithm 1, how are they used? 
Do the \\rho values\nsomewhere treat these two quantities differently?\n\n* I'm skeptical of the type of qualitative analysis in section 4.3, unfortunately.\nI think something much more extensive would be interesting here. As one\nexample, the PP attachment example with \"at a large venue\" is highly suspect;\nthere's a 50/50 chance that any attachment like this will be correct, there's\nabsolutely no way of knowing if the model is doing something interesting/correct\nor performing at a chance level, given a single example. ", "Authors, please post a rebuttal soon if you are planning on it. " ]
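For readers following the exchange about Section 3.1 above, the correction the second reviewer appears to have in mind is the standard CRF-over-binary-trees parameterization sketched below; the notation (delta as span/split scores, alpha as span marginals) is inferred from the review, not transcribed from the paper.

```latex
% A CRF over binary trees c of sentence x, where c_{ijk} = 1 iff the span (i,j)
% is built with split point k, and \delta_{ijk} is the learned score of that decision:
\[
  p(c \mid x) \;=\; \frac{1}{Z(x)} \exp\!\Big(\sum_{i \le k < j} c_{ijk}\,\delta_{ijk}\Big),
  \qquad
  Z(x) \;=\; \sum_{c'} \exp\!\Big(\sum_{i \le k < j} c'_{ijk}\,\delta_{ijk}\Big).
\]
% The span marginal that weights the representation of span (i,j) is then
\[
  \alpha_{ij} \;=\; \sum_{c \,:\, (i,j) \in c} p(c \mid x),
\]
% which the inside-outside algorithm computes in O(n^3) time.
```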
[ 6, 5, 5, -1 ]
[ 4, 4, 4, -1 ]
[ "iclr_2018_Byht0GbRZ", "iclr_2018_Byht0GbRZ", "iclr_2018_Byht0GbRZ", "iclr_2018_Byht0GbRZ" ]
iclr_2018_SJZsR7kCZ
Iterative Deep Compression : Compressing Deep Networks for Classification and Semantic Segmentation
Machine learning and in particular deep learning approaches have outperformed many traditional techniques in accomplishing complex tasks such as image classification, natural language processing or speech recognition. Most of the state-of-the-art deep networks have complex architectures and use a vast number of parameters to reach this superior performance. Though these networks use a large number of learnable parameters, those parameters present significant redundancy. Therefore, it is possible to compress the network without significantly affecting its accuracy by eliminating those redundant and unimportant parameters. In this work, we propose a three-stage compression pipeline, which consists of pruning, weight sharing and quantization, to compress deep neural networks. Our novel pruning technique combines magnitude-based pruning with dense-sparse-dense ideas and iteratively finds for each layer its achievable sparsity instead of selecting a single threshold for the whole network. Unlike previous works, where compression is only applied to networks performing classification, we evaluate and perform compression on networks for classification as well as semantic segmentation, which is greatly useful for understanding scenes in autonomous driving. We tested our method on LeNet-5 and FCNs, performing classification and semantic segmentation, respectively. With LeNet-5 on MNIST, pruning reduces the number of parameters by 15.3 times and the storage requirement from 1.7 MB to 0.006 MB with an accuracy loss of 0.03%. With FCN8 on Cityscapes, we decrease the number of parameters by 8 times and reduce the storage requirement from 537.47 MB to 18.23 MB with a class-wise intersection-over-union (IoU) loss of 4.93% on the validation data.
rejected-papers
This paper presents a new pipeline for neural network compression that extends that of Han et al., and shows that it reduces parameters further, maintains higher accuracy and can be applied to tasks beyond classification (semantic segmentation). While the reviewers found the paper clearly written, except for some typos, and potentially useful, there were questions about originality and significance. - Reviewers were not completely convinced the method was different enough from deep compression: "The overall pipeline including the last two stage looks quite similar to Han[1].", or that enough focus was paid to the differences inherent with classification focused work: "The paper in the title and abstract refers to segmentation as the main area of focus. However, there does not seem to be much related to it except an experiment on the CityScapes dataset." - In terms of impact, the additional benefits from pruning seem to require a significant amount of computation, and the reviewers were not convinced these were worth a small gain in compression. Furthermore, reviewers felt that this approach was not being applied to the most state-of-the-art approaches to demonstrate their use.
train
[ "Syn5-0_lM", "BkYtPqKez", "rJrltU5gf", "H1a6sypQf", "SkdxEmgmM", "rkhukfWfG", "H1X-IeeGz", "HyqT8n1fM", "ryU_V00bf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author" ]
[ "quality: this paper is of good quality\nclarity: this paper is very clear\noriginality: this paper combines original ideas with existing approaches for pruning to obtain dramatic space reduction in NN parameters.\nsignificance: this paper seems significant.\n\nPROS\n- a new approach to sparsifying that considers different thresholds for each layer\n- a systematic, empirical method to obtain optimal sparsity levels for a given neural network on a task.\n- Very interesting and extensive experiments that validate the reasoning behind the described approach, with a detailed analysis of each step of the algorithm.\n\nCONS\n- Pruning time. Although the authors argue that the pruning algorithm is not prohibitive, I would argue that >1 month to prune LeNet-5 for MNIST is certainly daunting in many settings. It would benefit the experimental section to use another dataset than MNIST (e.g. CIFAR-10) for the image recognition experiment.\n- It is unclear whether this approach will always work well; for some neural nets, the currently used sparsification method (thresholding) may not perform well, leading to very little final sparsification to maintain good performance.\n- The search for the optimal sparsity in each level seems akin to a brute-force search. Although possibly inevitable, it would be valuable to discuss whether or not this approach can be refined.\n\nMain questions\n- You mention removing \"unimportant and redundant weights\" in the pruning step; in this case, do unimportant and redundant have the same meaning (smaller than a given threshold), or does redundancy have another meaning (e.g. (Mariet, Sra, 2016))?\n- Algorithm 1 finds the best sparsity for a given layer that maintains a certain accuracy. Have you tried using a binary search for the best sparsity instead of simply decreasing the sparsity by 1% at each step? If there is a simple correlation between sparsity and accuracy, that might be faster; if there isn't (which would be believable given the complexity of neural nets), it would be valuable to confirm this with an experiment.\n- Have you tried other pruning methods than thresholding to decide on the optimal sparsity in each layer?\n- Could you please report the final accuracy of both models in Table 2?\n\nNitpicks:\n- paragraph break in page 4 would be helpful.", "The paper presents a method for iteratively pruning redundant weights in deep networks. The method is primarily based on a 3-step pipeline to achieve this objective. These three steps consist of pruning, weight sharing and quantization. The authors demonstrate reduction in model size and number of parameters significantly while only undergoing minor decrease in accuracy.\n\nSome of the main points of concern are below :\n\n - Computational complexity - The proposed method of iterative pruning seems quite computationally expensive. In the conclusion, it is mentioned that it takes 35 days of training for MNIST. This seems extremely high, and given this, it is unclear if there is much benefit in further reduction in model sizes and parameters (by the proposed method) than those obtained by existing method such as Han etal.\n\n - The novelty in the paper is quite limited and is mainly based on combining existing methods for pruning, weight sharing and quantization. 
The main difference from the existing method seems to be the inclusion of layer-wise thresholds for weight pruning instead of using a single global threshold.\n\n - The results shown in Table 2 do not indicate much difference in terms of number of parameters between the proposed method and that of Han et al. For instance, the number of overall remaining parameters is 6.5% for the proposed method versus 8% for Deep Compression. As a result, the impact of the proposed method seems quite limited. \n\n - The paper in the title and abstract refers to segmentation as the main area of focus. However, there does not seem to be much related to it except an experiment on the CityScapes dataset.", "This paper inherits the framework proposed by Han [1]. A pruning, weight sharing, quantization pipeline is refined at each stage. At the pruning stage, by taking into account differences in the weight distribution across the layers, this paper proposes dynamic threshold pruning, which partially avoids mistakenly pruning important connections. As for the weight sharing stage, this paper explores several ways to initialize the clustering method. The introduction of error tolerance gives us more fine-grained control over the compression process.\n\nHere are some issues to pay attention to:\n\n1. The overall pipeline including the last two stages looks quite similar to Han [1]. Though different initialization methods are tested in this paper, the final conclusion does not change.\n\n2. The dynamic threshold pruning seems to be very time-consuming. As indicated in the paper, only 42 iterations for MNIST and 32 iterations for Cityscapes are required. Whether these numbers apply to each layer or to the whole network should be clarified.\n\n3. Fig 7(a) says it's the error rate while it plots the accuracy rate.\n\n4. Experiments on popular network structures such as residual connections should be conducted, as they are widely used nowadays.\n\n\nReferences:\n[1] Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. arXiv preprint arXiv:1510.00149, 2015 \n", "Changes have been made to the paper.", "The number of iterations mentioned for pruning is for the whole network, and most of the compression happens in the first 10 iterations, meaning that the method is not as time-consuming as it may seem from the total runtime reported.", "1) We removed unimportant weights (smaller than a given threshold). Or, we can say both have the same meaning.\n\n2) It is really an interesting idea and might perform faster; however, considering the complexity of the network, there might be convergence problems as we would change the sparsity abruptly. Indeed, we could try it out with an experiment. \n\n3) We used the simple heuristic of quantifying the importance of weights using their absolute values. We could try other ways in future work.\n\n4) We will add it in our next version.\n\n5) We will take this into account in our next version.", "1) It’s a typo. It took 35 hours for MNIST. We will correct it in our next revision.\n\n2) The following points highlight the differences between the existing approach and ours.\n\n -\t We evaluate different threshold initialization methods for 
weight pruning. To determine those thresholds, we conducted an experiment in which we calculate the minimum achievable sparsity in each layer.\n\n -\t We explore different clustering techniques to find shared weights. We examine the impact of density-based mean-shift clustering and unsupervised k-means clustering with random and linear centroid initialization.\n\n -\t We also evaluated different weight sharing possibilities: first, only within a layer, that is, finding shared weights among the multiple connections within a layer (Han et al.), and second, across all the layers, that is, finding shared weights among multiple connections across all the layers. We found that the second method outperforms the first one.\n\n -\t We show the trade-off between the number of clusters found by the state-of-the-art weight sharing technique (k-means clustering with linear centroid initialization) and network performance. We also proposed and implemented ways to improve it. 
\n\n -\t We compress and evaluate our method on a fully convolutional network performing semantic segmentation \n and we are not aware of any state-of-the-art technique that obtains good compression rates for such networks.\n\n3) We successfully demonstrated the flexibility of our method by testing it on fully convolutional network performing other task than classification. \n\nWe also outperformed the existing pruning method (Han et al.) not only in terms of compression statistics but also in accuracy results. \n\nHowever, a better comparison could be done with some other network / dataset, such as inception and ImageNet, but that the focus was indeed on the segmentation.\n\n4) Currently, there is no date set that could adequately captures the complexity of real-world urban scenes [1]. Cityscapes is a benchmark suite and large-scale dataset to address the understanding of complex urban street scenes and there was no experiment performed on this very relevant dataset. So, we focus to use cityscapes high quality images in our experiments and address the problem of real time computation with limited hardware resources in autonomous driving\n\nAlso, in this research work, one of our main goals was to perform compression on networks performing some other tasks than just classification. So, unlike all the previous works, where compression is only performed on networks performing classification, we evaluated and performed compression on networks for semantic segmentation. \n\nReferences: [1] Cordts, Marius, et al. \"The cityscapes dataset for semantic urban scene understanding.\" Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016.\n\n", "2) This is for the whole network. In each iteration, we performed pruning and retraining on each layer simultaneously. We will clarify this in the next version. Moreover, most of the compression happens in the first 10 iterations, meaning that the method is not so time-consuming as it may seem from the total runtime reported.\n\n3) We will correct it in our next revision.\n\n4) Yes, it would be really interesting to see how our compression works on residual connections. This could be our future research work.\n\nIn this research work, one of our main goal was to perform compression on networks performing some other tasks than just classification. So, unlike all the previous works, where compression is only performed on networks performing classification, we also evaluated and performed compression on networks for semantic segmentation. In this work, we tried to address the problem of real time computation with limited hardware resources in autonomous driving. Semantic segmentation is greatly useful for understanding scenes in autonomous driving. So, we tried to compress a network performing semantic segmentation on Cityscapes dataset.\n\nAlso, we are not aware of any state-of-the-art technique that obtains good compression rates for fully convolutional networks, so we were interested to see how much compression could be achieved on a network without any fully connected layer. Thus, we decided to compress the fully convolutional network.\n", "Here, we would like to highlight the differences between the Han[1] and our approach for the weight sharing stage: \n\nYes, we evaluated the different initialization methods. And, we also evaluated different weight sharing possibilities. 
First, only within a layer, that is finding shared weights among the multiple connections within a layer and second, across all the layers, that is, finding shared weights among multiple connections across all the layers. We found that the second method outperforms the first one in our case, however, Han[1] stated and used the first one. Comparison of weight sharing techniques discussed above:\n\n\nWeight sharing techniques for LeNet on Mnist\t Number of clusters found\t Accuracy achieved\n\nk-means with linear initialization within layers [Han]\t 24\t 99.14%\nk-means with linear initialization across all the layers [ours]\t 25\t 99.28%\n\nWe further improved our k-means with linear initialization across all the layers by checking the possibility of reducing down the number of shared weights. For this, we added one more step to the pipeline, that is, pruning of the codebook. For LeNet on Mnist, we reduced the number of shared weights from 25 to 15 by applying codebook pruning with accuracy loss of just 0.01%. So, our approach gives the optimal trade-off between number of shared weights and loss of accuracy.\n" ]
[ 6, 4, 5, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_SJZsR7kCZ", "iclr_2018_SJZsR7kCZ", "iclr_2018_SJZsR7kCZ", "iclr_2018_SJZsR7kCZ", "Syn5-0_lM", "Syn5-0_lM", "BkYtPqKez", "rJrltU5gf", "rJrltU5gf" ]
iclr_2018_BkQCGzZ0-
Discrete Autoencoders for Sequence Models
Recurrent models for sequences have recently been successful at many tasks, especially language modeling and machine translation. Nevertheless, it remains challenging to extract good representations from these models. For instance, even though language has a clear hierarchical structure going from characters through words to sentences, it is not apparent in current language models. We propose to improve the representation in sequence models by augmenting current approaches with an autoencoder that is forced to compress the sequence through an intermediate discrete latent space. In order to propagate gradients through this discrete representation we introduce an improved semantic hashing technique. We show that this technique performs well on a newly proposed quantitative efficiency measure. We also analyze latent codes produced by the model, showing how they correspond to words and phrases. Finally, we present an application of the autoencoder-augmented model to generating diverse translations.
rejected-papers
This paper presents a different method for learning autoencoders with discrete hidden states (compared to recent discrete-like VAE type models). The reviewers in general like the method being proposed and are convinced that the underlying proposal has merit. However, there are several shared complaints about the setup and writing of the paper. - Several reviewers complained about the use of qualitative evaluation, particularly in the "Deciphering the latent code" section of the paper. - One reviewer in particular had significant issues with the experimental setup of the paper and felt that there was insufficient quantitative evaluation, particularly using standard metrics for the task (compared to the metric introduced in the paper). - There were further critiques about the "procedural" nature of the writing and the lack of formal justifications for the ideas introduced.
train
[ "Hk_BLd_ef", "HJGJNQcez", "ryeycmTlz", "HJrln8hmG", "H1T2PNnzG", "Bk_yWAofM", "Sk0B0pifG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author" ]
[ "The topic is interesting however the description in the paper is lacking clarity. The paper is written in a procedural fashion - I first did that, then I did that and after that I did third. Having proper mathematical description and good diagrams of what you doing would have immensely helped. Another big issue is the lack of proper validation in Section 3.4. Even if you do not know what metric to use to objectively compare your approach versus baseline there are plenty of fields suffering from a similar problem yet doing subjective evaluations, such as listening tests in speech synthesis. Given that I see only one example I can not objectively know if your model produces examples like that 'each' time so having just one example is as good as having none. ", "This is an interesting paper focusing on building discrete reprentations of sequence by autoencoder. \nHowever, the experiments are too weak to demonstrate the effectiveness of using discrete representations.\nThe design of the experiments on language model is problematic.\nThere are a few interesting points about discretizing the represenations by saturating sigmoid and gumbel-softmax, but the lack of comparisons to benchmarks is a critical defect of this paper. \n\n\nGenerally, continuous vector representations are more powerful than discrete ones, but discreteness corresponds to some inductive biases that might help the learning of deep neural networks, which is the appealing part of discrete representations, especially the stochastic discrete representations. \nHowever, I didn't see the intuitions behind the model that would result in its superiority to the continuous counterpart. \nThe proposal of DSAE might help evaluate the usage of the 'autoencoding function' c(s), but it is certainly not enough to convince people. \nHow is the performance if c(s) is replaced with the representations achieved from autoencoder, variational autoencoder or simply the sentence vectors produced by language model?\nThe qualitative evaluation on 'Deciperhing the Latent Code' is not enough either. \nIn addition, the language model part doesn't sound correct, because the model cheated on seeing the further before predicting the words autoregressively.\nOne suggestion is to change the framework to variational auto-encoder, otherwise anything related to perplexity is not correct in this case.\n\nOverall, this paper is more suitable for the workshop track. It also needs a lot of more studies on related work.", "The authors describe a method for encoding text into a discrete representation / latent space. On a measure that they propose, they should it outperforms an alternative Gumbel-Softmax method for both language modeling and NMT.\n\nThe proposed method seems effective, and the proposed DSAE metric is nice, though it’s surprising if previous papers have not used metrics similar to normalized reduction in log-ppl. The datasets considered in the experiments are also large, another plus. However, overall, the paper is difficult to read and parse, especially since low-level details are weaved together with higher-level points throughout, and are often not motivated.\n\nThe major critique would be the qualitative nature of results in the sections on “Decipering the latent code” and (to a lesser extent) “Mixed sample-beam decoding.” These two sections are simply too anecdotal, although it is nice being stepped through the reasoning for the single example considered in Section 3.3. 
Some quantitative or aggregate results are needed, and it should at least be straightforward to do so using human evaluation for a subset of examples for diverse decoding.\n", "We'd like to thank the reviewer on the suggestions on how to improve the presentation. We will move Figure 2 to the beginning and add a description of a sequence model. We only omitted it because we considered it standard for ICLR, but we will move it to the front. As for the function c(s), the diagram for it is in Figure 1. The diagram for the entire process is in Figure 2, but when we move it to the front we also plan to expand it to clarify the whole process.\n\nAs for using MOS, it has to our knowledge never been applied to translation and we know of no papers that would report MOS scores for WMT, so our results would be hard to compare. But even if we used a more standard metric, they all (including MOS) have the problem that they do not take the diversity into account at all. The advantage of our approach is that it can generate a diverse set of translations, but we don't know of any metric to quantify this (and we'd be grateful for pointers if one has been used before).", "You are describing a sequence model here without formally giving its mathematical equation as well as what this model depends on. Putting that into a formal equation at the beginning of your paper makes reading immensely easier. This also enables you to introduce Figure 2 at the beginning rather than the end. Why would I want to see it at the end? \n\nYour treatment of auto encoding function c(s) is similar. Why not to give a block diagram to describe the process? Which parts are discrete, which parts are continuous. What would be the relation between dense(w), bottleneck(w) and c(s)? Show how does the training signal goes through these. What is the training objective function? It would have immensely helped to have a diagram of the entire process rather than drawing it myself after reading 2 pages of text. \n\nAs I mentioned in my previous review, a single example is not meaningful and you should use metrics such as mean opinion score (MOS) used in speech synthesis. Generate N samples, ask colleagues that speak German to assess how good they are without telling them from which model these samples came from, rank the results. Find more info online. \n", "We thank the reviewer for the comments, but are puzzled by the sentence \"Having proper mathematical description and good diagrams of what you doing would have immensely helped.\". In the body of the paper, we try to give complete mathematical definitions of every term that appears there, the whole Section 2 is devoted to that, and Figures 1 and 2 are a diagramatic representation. We would like to improve them, but please give us concrete suggestions. We also double-checked the paper with external readers and it seemed to them that every single term was properly mathematically defined -- please clarify which terms are undefined to help us improve the paper. As for Section 3.4, we agree that it would be great to measure the diversity of translations in a quantitative way, not just qualitatively. But we are not aware of any metric of this kind -- we'd be happy to add the results if the reviewer can suggest one. Lack of such metric might be related to the fact that our work, to the best of our knowledge, presents the first time when an autoencoder works well enough for language to allow for such diverse results. 
We ask the reviewer to take this into account and possibly revise the score.", "We are grateful for the reviewer for bringing up the point of comparison to dense autoencoders, such as VAEs. We've performed a number of experiments with VAEs in the same setting. The reviewer writes \"How is the performance if c(s) is replaced with the representations achieved from autoencoder, variational autoencoder or simply the sentence vectors [...]\". We wanted to assess this, but it is hard in principle to calculate DSAE in this case, as the number of bits in the auto-encoded representation cannot be easily measured in the dense case. In principle it's not clear at all how to measure it for plain autoencoders or sentence vectors as even a 1-d real number can contain an unlimited number of bits (if it's a 32-bit float, it's just 32, but that's still more than our discrete representation). For VAEs, one can use the KL divergence as measure, but this is an approximate notion even in theory. In practice, all dense autoencoders, even into 4-d or 6-d vectors, begin to perfectly autoencode the sequences. So while p' is very low, it's almost impossible later to sample from those dense distributions. We tried this for VAEs as well, but even with different annealing schemes for the KL term we never managed to obtain high-quality samples, nothing comparable to our discrete results. Notably, this experience with dense autoencoders for language has been replicated by others. So our discrete method is at present the only way we know of to make autoencoders work well for language models. We hope that the reviewer will take this into account and revise the score. We are also happy to include more evaluations and will be grateful for more concrete suggestions of metrics that could be used." ]
[ 5, 4, 6, -1, -1, -1, -1 ]
[ 5, 4, 1, -1, -1, -1, -1 ]
[ "iclr_2018_BkQCGzZ0-", "iclr_2018_BkQCGzZ0-", "iclr_2018_BkQCGzZ0-", "H1T2PNnzG", "Bk_yWAofM", "Hk_BLd_ef", "HJGJNQcez" ]
iclr_2018_Bkl1uWb0Z
Inducing Grammars with and for Neural Machine Translation
Previous work has demonstrated the benefits of incorporating additional linguistic annotations such as syntactic trees into neural machine translation. However, obtaining those syntactic annotations is expensive for many languages, and the quality of linguistic structures learned without supervision is too poor to be helpful. In this work, we aim to improve neural machine translation via source-side dependency syntax but without explicit annotation. We propose a set of models that learn to induce dependency trees on the source side and learn to use that information on the target side. Importantly, we also show that our dependency trees capture important syntactic features of language and improve translation quality on two language pairs, En-De and En-Ru.
rejected-papers
In this work the authors use structured attention as a way to induce grammatical structure in NMT models. Reviewers liked the motivation of the work and found the experiments mostly well done. However, reviewers found the paper a bit difficult to follow, with several commenting that the distinctions made between the different subtypes of attention were not clear. Mainly, the reviewers were not overwhelmed by the results of the work, saying that the gains, while clearly isolated to the use of structure, were not significantly large. Additionally, there were some concerns about the claimed novelty of the work, particularly compared to Liu and Lapata and other uses of syntax in translation, and about which aspects were new or necessary.
val
[ "SyOq7nXxf", "Hyg5qAtgf", "rylV679xz", "B1KCT7WNz", "rJtGoXWzz", "r1Jhq7-Gz", "SkrGscezf", "Sk8occlMG", "Bkl159lfG", "HJwRq_ilz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "public" ]
[ "This paper adds source side dependency syntax trees to an NMT model without explicit supervision. Exploring the use of syntax in neural translation is interesting but I am not convinced that this approach actually works based on the experimental results.\n\nThe paper distinguishes between syntactic and semantic objectives (4th paragraph in section 1), attention, and heads. Please define what semantic attention is. You just introduce this concept without any explanation. I believe you mean standard attention, if so, please explain why standard attention is semantic.\n\nClarity. What is shared attention exactly? Section 3.2 says that you share attention weights from the decoder with encoder. Please explain this a bit more. Also the example in Figure 3 is not very clear and did not help me in understanding this concept.\n\nResults. A good baseline would be to have two identical attention mechanisms to figure out if improvements come from more capacity or better model structure. Flat attention seems to add a self-attention model and is somewhat comparable to two mechanisms. The results show hardly any improvement over the flat attention baseline (at most 0.2 BLEU which is well within the variation of different random initializations). It looks as if the improvement comes from adding additional capacity to the model. \n\nEquation 3: please define H.", "This paper describes a method to induce source-side dependency structures in service to neural machine translation. The idea of learning soft dependency arcs in tandem with an NMT objective is very similar to recent notions of self-attention (Vaswani et al., 2017, cited) or previous work on latent graph parsing for NMT (Hashimoto and Tsuruoka, 2017, cited). This paper introduces three innovations: (1) they pass the self-attention scores through a matrix-tree theorem transformation to produce marginals over tree-constrained head probabilities; (2) they explicitly specify how the dependencies are to be used, meaning that rather than simply attending over dependency representations with a separate attention, they select a soft word to attend to through the traditional method, and then attend to that word’s soft head (called Shared Attention in the paper); and (3) they gate when attention is used. I feel that the first two ideas are particularly interesting. Unfortunately, the results of the NMT experiments are not particularly compelling, with overall gains over baseline NMT being between 0.6 and 0.8 BLEU. However, they include a useful ablation study that shows fairly clearly that both ideas (1) and (2) contribute equally to their modest gains, and that without them (FA-NMT Shared=No in Table 2), there would be almost no gains at all. Interesting side-experiments investigate their accuracy as a dependency parser, with and without a hard constraint on the system’s latent dependency decisions.\n\nThis paper has some very good ideas, and asks questions that are very much worth asking. In particular, the question of whether a tree constraint is useful in self-attention is very worthwhile. Unfortunately, this is mostly a negative result, with gains over “flat attention” being relatively small. I also like the “Shared Attention” - it makes a lot of sense to say that if the “semantic” attention mechanism has picked a particular word, one should also attend to that word’s head; it is not something I would have thought of on my own. 
The paper is also marred by somewhat weak writing, with a number of disfluencies and awkward phrasings making it somewhat difficult to follow.\n\nIn terms of specific criticisms:\n\nI found the motivation section to be somewhat weak. We need a better reason than morphology to want to do source-side dependency parsing. All published error analyses of strong NMT systems (Bentivogli et al, EMNLP 2016; Toral and Sanchez-Cartagena, EACL 2017; Isabelle et al, EMNLP 2017) have shown that morphology is a strength, not a weakness of these systems, and the sorts of head selection problems shown in Figure 1 are, in my experience, handled capably by existing LSTM-based systems.\n\nThe paper mentions “significant improvements” in only two places: the introduction and the conclusion. With BLEU score differences being so low, the authors should specify how statistical significance is measured; ideally using a technique that accounts for the variance of random restarts (i.e.: Clark et al, ACL 2011).\nEquation (3): I couldn’t find the definition for H anywhere.\n\nSentence before Equation (5): I believe there is a typo here, “f takes z_i” should be “f takes u_t”.\n\nFirst section of Section 3: please cite the previous work you are talking about in this sentence.\n\nMy understanding was that the dependency marginals in p(z_{i,j}=1|x,\\phi) in Equation (11) are directly used as \\beta_{i,j}. If I’m correct, that’s probably worth spelling out explicitly in Equation (11): \\beta_{i,j} = p(z_{i,j}=1|x,\\phi) = …\n\nI don’t don’t feel like the clause between equations (17) and (18), “when sharing attention weights from the decoder with the encoder” is a good description of your clever “shared attention” idea. In general, I found this region of the paper, including these two equations and the text between them, very difficult to follow.\n\nSection 4.4: It’s very very good that you compared to “flat attention”, but it’s too bad for everyone cheering for linguistically-informed syntax that the results weren’t better.\n\nTable 5: I had a hard time understanding Table 5 and the corresponding discussion. What are “production percentages”?\n\nFinally, it would have been interesting to include the FA system in the dependency accuracy experiment (Table 4), to see if it made a big difference there.", "This paper induces latent dependency syntax in the source side for NMT. Experiments are made in En-De and En-Ru.\n\nThe idea of imposing a non-projective dependency tree structure was proposed previously by Liu and Lapata (2017) and the structured attention model by Kim and Rush (2017). In light of this, I see very little novelty in this paper. The only novelty seems to be the gate that controls the amount of syntax needed for generating each target word. Seems thin for a ICLR paper.\n\nCaption of Fig 1: \"subject/object\" are syntactic functions, not semantic roles.\n\nI don't see how the German verb \"orders\" inflects with gender... Can you post the gold German sentence?\n\nSec 2 is poorly explained. What is z_t? Do you mean u_t instead? This is confusing.\n \nExpressions (12) to (15) are essentially the same as in Liu and Lapata (2017), not original contributions of this paper.\n\nWhy is hard attention (sec 3.3) necessary? It's not differentiable and requires sampling for training. 
This basically spoils the main advantage of structured attention mechanisms as proposed by Kim and Rush (2017).\n\nExperimentally, the gains are quite small compared to flat attention, which is disappiointing.\n\nIn table 3, it would be very helpful to display the English source.\n\nTable 4 is confusing. The DA numbers (rightmost three columns) are for the 2016 or 2017 dataset?\n\nComparison with predicted parses by Spacy are by no means \"gold\" parses...\n\nMinor comments:\n- Sec 1: \"... optimization techniques like Adam, Attention, ...\" -> Attention is not an optimization technique, but part of a model\n- Sec 1: \"abilities not its representation\" -> comma before \"not\"\n", "Dear reviewers,\nWe would like to let you know that we've updated our paper based on your valuable comments. Again, We thank you for your feedback!", "One of the aspects of this work we don’t feel we made sufficiently clear were the roles of structured and shared attention. \nIn this work, we explore structured attention (SA) as an explicit way to encode hierarchical structure of the source sentence. SA takes the global structure of a sentence into account as it needs to compute the partition function. For completeness, we also compare SA against self-attention (Flat attention) which only consider local dependencies between a word and its head. We evaluate the quality of induced tree on Universal Dependencies dataset. We observe that overall, SA models obtain better DA/UA scores, and SA are highly sensitive with the choice of architectures (sharing attention). Our original motivation for sharing attention is to help biasing SA to capture syntactic dependencies on the source side. This is reflected in both BLEU scores and DA/UA scores. Without sharing attention, DA/UA scores are considerably worse while the models still on par or outperform the NMT baselines. We find this result is exciting. It suggests that there should a better way to exploit SA for improving NMT as well as grammar induction. Further, the fact that the choice of the target slide language changes these values hints at the different agreement syntax of languages so combining models may lead to further gains and syntactic learning in the future.\nFinally, we also note that, for grammar induction, we are particularly interested in DE as long distance dependencies are more common in DE. The results show that structured attention are indeed superior to FA. We leave these insights to future work to explore.\n", "Thank you for your interest and encouragement! We have been working to recompute numbers on UD for reviews (see below) so now that we have those we can try and build out better visualizations to share more standard dependency graphs. We are sorry for the delay.", "Thank you for your feedback, we are sorry that the semantic attention wasn’t explained clearly in the text. We indeed mean semantic attention as standard attention as you’ve guessed. By semantics, we meant word translation semantics (word f is translated to word e). Our assumption is based on the insights from (Koehn and Knowles, 2017) in which they computed match score between the most attended source word and the aligned word (produced by fast-align) and reported the match scores are higher than 70% for English->German, English<->Russian. \n\nBy syntactic attention, we meant that when the model decided to translate a word f in the source side, we want to model also look at syntactic relations of the word f in an explicit way, such as the head word of f. 
We hope that this approach would project richer structured information from the source to the target.\n\nWe have now included statistical significant test in the 1st revision. You are right that the gain of SA-NMT is not statistically significant when compared to flat (shared)-attention models. We also included significant test when compared against the flat (no-shared)-attention models. The updated results in Table (2) and (3) shows that sharing attention is beneficial for both NMT and grammar induction. Our results also suggest that there are two possible ways to get more structural information from the source side: using Structured Attention and sharing attention. The Flat attention well behaved in our experiments perhaps because the restriction of sharing attention makes it biases further to syntactic information, or dependency head in this case.\n\nWe are sorry that the idea of sharing attention wasn’t well explained in our paper. We are working on the clarification and we will update it soon.\n\nIn equation 3, we meant S\n", "We would like to thank the reviewer for useful comments, and apologize for disfluencies in the original text. We will absolutely prioritize clarity as we rework the writing in the final version of the paper.\n\nWe agree with (Bentivogli et al, EMNLP 2016; Toral and Sanchez-Cartagena, EACL 2017) that NMT handles morphology better than phrase-based MT. Isabelle et al does show that NMT can capture more morphology for French and English. In our work, we choose German as a target language where long distance dependencies commonly occur. We see that branching baselines perform measurably worse on the basic dependents of nouns and verbs in Table 4. We agree that it is reasonable to believe that existing NMT can handle the syntactic dependencies in an implicit manner, in our experiment (Figure 5), we show that if that information is available, the decoder prefers to use them, especially when predicting German verb. Additionally, when we designed our architectures, with the explicit goal of extracting interpretable structures, in part to compare the representations to prior linguistic knowledge of language. Both structured attention and gating norm to certain extent allow us to perform analysis on the task instead of training an additional classifier to probe the ability of the models in capturing linguistic phenomena.\n\nWe have now included statistical significance tests in the 1st revision and we will make sure they are more clearly explained and pronounced in the final version. We have now also included significant tests when comparing against the flat (no-shared)-attention models. We truly appreciate that the reviewer noticed the shared attention mechanism we proposed even though we didn’t explain it well, something we are remedying by getting more eyes on our paper and isolating sources of confusion. We will try our best to make it more accessible in the next revision.\n\nYou are correct in your understanding of the dependency marginals in Equation (11). We have elaborated on this in the revision but are open to further suggestions.\n\nRegarding “production percentages” – While aggregate attachment numbers give a score to how well syntax is induced generally, they don’t give us insight into the grammar. As a proxy for which grammatical rules the system has learned, we choose to analyze the frequency with which specific “head → child rules” were used by our model vs how often that rule exists in the grammar of the language. 
For example, the three most common verb constructions are verb chains (VERB→ VERB), verb subj/obj (VERB→ NOUN) and verb’s being modified by an adverb (VERB→ ADV). The gold column indicates how common these constructions are in the true data and the remaining columns show how often our systems believe these constructions exist. We will need to spend more time in the next revision clarifying this demonstration.\n\nIn equation 3, we meant S. We are sorry for the typo.\n", "Thank you for your insightful comments, we failed to properly convey the intention and novelty of the work. The field of NLP has long prioritized structured model representations under the assumption that there are a fundamental property of language and therefore often a prerequisite for language tasks. A naive reading of recent research in NMT seems to direct contradict this age old belief. Our goal here was to investigate if the success and seeming necessity of attention mechanisms is related to their ability to capture these structural/hierarchical properties of language. The gating mechanisms and gains in BLEU score exist in service of this exploration, so so we agree are in and of themselves perhaps more minor contributions. For example, you're absolutely correct that some of the basic components of our model are shared with previous work (Kim, 17 and Liu 17), but the shared attention and gating syntax are crucial in our models and their resulting analysis. We will try to make this clearer in the final version.\n\nWe originally ran our evaluation on Spacy’s outputs due to discrepancies between the MT tokens and tokenization and treebanks, but we have remedied this difference and updated all of our numbers to be evaluated against the Universal Dependencies treebanks. The new results are in Table 3. \n\nThere are two main results from Table 3: 1. Sharing attention appears to almost exclusively increase the model’s ability to capture syntax and 2. That structure attention generally outperforms flat attention if viewed through the same lense. Almost secondary to these results is the fact that shared structured attention also benefits translation BLEU scores (updated with statistical significance). This result does however hint that better modeling or inducing linguistic structure might further benefit translation performance.\n\nFinally, you are correct to question the inclusion of hard-attention. While it is harmful for translation it appears to help grammar induction. We hope that understanding this discrepancy and the possible (de-)coupling of the two metrics may lead to new results in future-work. Maybe multiple syntactic analyses should be used as references instead of a single formalism? In our experiments, hard-attention is deterministically computing by taking the max instead of sampling.\n\nWe will rework our example and fix typos like z_t which you are correct should be u_t.\n", "Do you have any examples of the structures learned with hard attention beside the tricky-to-read example in Figure 4?" ]
[ 3, 6, 5, -1, -1, -1, -1, -1, -1, -1 ]
[ 5, 5, 4, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_Bkl1uWb0Z", "iclr_2018_Bkl1uWb0Z", "iclr_2018_Bkl1uWb0Z", "iclr_2018_Bkl1uWb0Z", "iclr_2018_Bkl1uWb0Z", "HJwRq_ilz", "SyOq7nXxf", "Hyg5qAtgf", "rylV679xz", "iclr_2018_Bkl1uWb0Z" ]
iclr_2018_Syx6bz-Ab
Seq2SQL: Generating Structured Queries From Natural Language Using Reinforcement Learning
Relational databases store a significant amount of the world's data. However, accessing this data currently requires users to understand a query language such as SQL. We propose Seq2SQL, a deep neural network for translating natural language questions to corresponding SQL queries. Our model uses rewards from in-the-loop query execution over the database to learn a policy to generate the query, which contains unordered parts that are less suitable for optimization via cross-entropy loss. Moreover, Seq2SQL leverages the structure of SQL to prune the space of generated queries and significantly simplify the generation problem. In addition to the model, we release WikiSQL, a dataset of 80654 hand-annotated examples of questions and SQL queries distributed across 24241 tables from Wikipedia that is an order of magnitude larger than comparable datasets. By applying policy-based reinforcement learning with a query execution environment to WikiSQL, Seq2SQL outperforms a state-of-the-art semantic parser, improving execution accuracy from 35.9% to 59.4% and logical form accuracy from 23.4% to 48.3%.
rejected-papers
This paper introduces a new dataset and method for a "semantic parsing" problem of generating logical SQL queries from text. Reviewers generally seemed to be very impressed by the dataset portion of the work, saying "the creation of a large scale semantic parsing dataset is fantastic," but were less compelled by the modeling aspects that were introduced and by the empirical justification for the work. In particular: - Several reviewers pointed out that the use of RL in this particular style felt like it was "unjustified", and that the authors should have used simpler baselines as a way of assessing the performance of the system, e.g. "There are far simpler solutions that would achieve the same result, such as optimizing the marginal likelihood or even simply including all orderings as training examples" - The reviewers were not completely convinced that the authors backed up their claims about the role of this dataset as a novel contribution. In particular there were questions about its structure, e.g. "dataset only covers simple queries in form of aggregate-where-select structure", and about comparisons with other smaller but similar datasets, e.g. "how well does the proposed model work when evaluated on an existing dataset containing full SQL queries, such as ATIS". There was an additional anonymous discussion about the work not citing previous semantic parsing datasets. The authors noted that this discussion inappropriately brought in previous private reviews. However, it seems like the main reviewers' issues were orthogonal to this point, and so it was not a major aspect of this decision.
train
[ "SkGbiIKxz", "ByL2SX9ez", "HJ75M8ogM", "r1E6tYEmz", "ByyiYYEXz", "ByeStYNQM", "S1cptoZXG", "H1PC1EX1M", "Hk2vAMlJM", "SJmanaxJz", "HJg8aSa0-", "S184E1gyz", "r1qqBPk1z", "By6SuETCZ" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "public", "author", "public", "author", "public", "author", "public" ]
[ "This paper presents a new approach to support the conversion from natural language to database queries. \n\nOne of the major contributions of the work is the introduction of a new real-world benchmark dataset based on questions over Wikipedia. The scale of the data set is significantly larger than any existing ones. However, from the technical perspective, the reviewer feels this work has limited novelty and does not advance the research frontier by much. The detailed comments are listed below.\n\n1) Limitation of the dataset: While the authors claim this is a general approach to support seq2sql, their dataset only covers simple queries in form of aggregate-where-select structure. Therefore, their proposed approach is actually an advanced version of template filling, which considers the expression/predicate for one of the three operators at a time, e.g., (Giordani and Moschitti, 2012).\n\n2) Limitation of generalization: Since the design of the algorithms is purely based on their own WikiSQL dataset, the reviewer doubts if their approach could be generalized to handle more complicated SQL queries, e.g., (Li and Jagadish, 2014). The high complexity of real-world SQL stems from the challenges on the appropriate connections between tables with primary/foreign keys and recursive/nested queries. \n\n3) Comparisons to existing approaches: Since it is a template-based approach in nature, the author should shrink the problem scope in their abstract/introduction and compare against existing template approaches. While there are tons of semantic parsing works, which grow exponentially fast in last two years, these works are actually handling more general problems than this submission does. It thus makes sense when the performance of semantic parsing approaches on a constrained domain, such as WikiSQL, is not comparable to the proposal in this submission. However, that only proves their method is fully optimized for their own template.\n\nAs a conclusion, the reviewer believes the problem scope they solve is much smaller than their claim, which makes the submission slightly below the bar of ICLR. The authors must carefully consider how their proposed approach could be generalized to handle wider workload beyond their own WikiSQL dataset. \n\nPS, After reading the comments on OpenReview, the reviewer feels recent studies, e.g., (Guu et al., ACL 2017), (Mou et al, ICML 2017) and (Yin et al., IJCAI 2016), deserve more discussions in the submission because they are strongly relevant and published on peer-reviewed conferences.", "This work introduces a new semantic parsing dataset, which focuses on generating SQL from natural language. It also proposes a reinforcement-learning based model for this task.\n\nFirst of all, I'd like to emphasize that the creation of a large scale semantic parsing dataset is fantastic, and it is a much appreciated contribution. However, I find its presentation problematic. It claims to supplant existing semantic parsing and language-to-SQL datasets, painting WikiSQL as a more challenging dataset overall. Given the massive simplifications to what is considered SQL in this dataset (no joins, no subqueries, minimal lexical grounding problem), I am reluctant to accept this claim without empirical evidence. For example, how well does the proposed model work when evaluated on an existing dataset containing full SQL queries, such as ATIS? 
That being said, I am sympathetic to making simplifications to a dataset for the sake of scalability, but it shouldn't be presented as representative of SQL.\n\nOn the modeling side, the role of reinforcement learning seems oddly central in the paper, even though though the added complexity is not well motivated. RL is typically needed when there are latent decisions that can affect the outcome in ways that are not known a priori. In this case, we know the reward is invariant to the ordering of the tokens in the WHERE clause. There are far simpler solutions that would achieve the same result, such as optimizing the marginal likelihood or even simply including all orderings as training examples. These should be included as baselines.\n\nWhile the data contribution is great, the claims of the paper need to be revised.", "The authors have addressed the problem of translating natural language queries to SQL queries. They proposed a deep neural network based solution which combines the attention based neural semantic parser and pointer networks. They also released a new dataset WikiSQL for the problem. The proposed method outperforms the existing semantic parsing baselines on WikiSQL dataset.\n\nPros:\n1. The idea of using pointer networks for reducing search space of generated queries is interesting. Also, using extrinsic evaluation of generated queries handles the possibility of paraphrasing SQL queries.\n2. A new dataset for the problem.\n3. The experiments report a significant boost in the performance compared to the baseline. The ablation study is helpful for understanding the contribution of different component of the proposed method.\n\nCons:\n1. It would have been better to see performance of the proposed method in other datasets (wherever possible). This is my main concern about the paper.\n2. Extrinsic evaluation can slow down the overall training. Comparison of running times would have been helpful.\n3. More details about training procedure (specifically for the RL part) would have been better.", "Thank you for your comments.\n1. We recognize that the queries in WikiSQL are simple. It is not our intention to supplant existing models for SQL generation from natural languages. Our intention is to tackle the problem of generalizing across tables, which we believe is a key barrier to using such systems in practice. Existing tasks in semantic parsing and natural language interfaces focused on generation queries from natural language with respect to a single table. Our task requires performing this on tables not seen during training. We argue that while WikiSQL is not as complex as existing datasets in its query complexity, it is more complex in its generalization task.\n\n2. We could not find existing tasks that focus on generalization to unseen tables, but recognize that we may have missed existing work that the reviewer is aware of. We would be happy to apply our methods to such a task.\n\n3. We agree that the existing semantic parsing approach we compare against is more general. Our intention is to introduce baselines for the WikiSQL task that generalizes to unseen tables. The baselines are tailored to the particular task of generating SQL queries, but range from general and unstructured (e.g. augmented pointer) to templated and structured (e.g. WikiSQL). In addition, like Guu et al, Mou et al, Yin et al, we use reinforcement learning as a means to address equivalent queries.\n", "Thank you for your comments. \n\n1. 
It is not at all our intention to claim that WikiSQL supplants existing datasets. Our intended emphasis is that WikiSQL requires that models generalize to tables not seen during training. We are not aware of a semantic parsing dataset that 1. Provides logical forms 2. Requires generalization to unseen tables/schemas 3. Is based on realistic SQL tables in relational databases. We do recognize that WikiSQL, in its current state, contains only simple SELECT-AGGREGATE-WHERE queries. More complex queries contain, as you said, joins and subqueries. We will take this into account and elaborate on the generation of WikiSQL (which we placed into the appendix due to length considerations). In particular, we will explicitly emphasize the fact that WikiSQL does not contain subqueries nor joins.\n\n2. We agree that reinforcement learning seems like a general and complex solution to a specific problem that can be solved in other ways. In fact, another submission to ICLR leverages this insight to incorporate structures into the model to do, say, set prediction of WHERE conditions (https://openreview.net/forum?id=SkYibHlRb&noteId=S12EyE1bz). We chose to use the RL approach as the baseline for WikiSQL because it is easy to generalize this approach to other forms of equivalent queries should we expand WikiSQL in the future. We also found that it is simple to implement in practice. \nWe agree though that given the current state of WikiSQL, there are simpler approaches to tackle the WHERE clause ordering problem. We incorporated your suggestion of augmenting the training set with all permutations of the WHERE clause ordering. By doing this, we obtained 58.97% execution accuracy and 45.32% logical form accuracy on the test set with the Seq2SQL model without RL. The higher execution accuracy and lower logical form accuracy suggests that annotators were biased and tended to agree with the WHERE clause ordering presented to them in the paraphrasing task. Because we permute the ordering of the WHERE clause in training, the model does not see this bias during training and obtains worse logical form accuracy. With RL and augmented training set, we obtained 59.6% execution accuracy and 45.7% logical form accuracy.", "Thank you for your comments. \n\n1. We computed the run time of the model with RL and without RL. There is a subtlety regarding the runtime computation in that we run the evaluation during each batch, which inherently does database lookup (e.g. to calculate the execution accuracy). The result of evaluation is used as reward in the case of reinforcement learning. Because of this, using RL does not really add to the compute cost, apart from propagating the actual policy gradients because reward computation is always done as a part of evaluation. Taking this into account, the per-batch runtime over an epoch for the no-RL model took 0.2316 seconds on average with a standard deviation of 0.1037 seconds, whereas the RL model took 0.2627 seconds on average with a standard deviation of 0.1414 seconds.\n\n2. Regarding your main concern (that we compare to other datasets), we are not aware of other datasets for natural language to SQL generation that requires generalization to new table schemas. For example, WikiSQL contains ~20k table schemas while other SQL generation tasks focus on a single table. As a result, we decided to compare our model against an existing state of the art semantic parser on our task instead. We would be happy to study the effect of our proposed method on other datasets.\n\n3. 
Our RL model is initialized with the parameters from the best non-RL model. This RL model is trained with Adam with a learning rate of 1e-4 and a batch size of 100. We use an embedding size of 400 (300 for word embeddings, 100 for character ngrams) and a hidden state size of 200. Each BiLSTM has 2 layers. Please let us know if there are any particular points you would like us to elaborate on.", "Yes, this is inappropriate to bring out. I will ask reviewers to ignore the fact of private NIPS comments in their reviews. \n\nHowever, I do think the resulting discussion on past work is relevant and should be considered. (And also note that some conferences (NIPS->AIStats) do share past negative reviews.)\n", "Neat work!\nWe have also released a paper detailing a corpus for language to SQL generation, it might be of interest to you https://arxiv.org/abs/1707.03172", "We thank the anonymous reviewer for the feedback and respectfully disagree regarding the novelty of our work. We refer readers back to our earlier comment regarding how our contribution is distinct from prior art. Once again, we regret not citing the anonymous reviewer's prior work (which we believe, while important, is distinct from ours). To the anonymous reviewer, I emphasize that we are not maliciously ignoring your work. We focused our efforts on addressing the main concern of the only negative review, which was that it was unclear how our model compares to to existing semantic parsing models. We have since addressed this in the fashion described by my previous comment. ", "Regarding the different conclusions drawn by this NIPS review and the other anonymous reviewer, perhaps the authors should consider the possibility that the other anonymous reviewer did not write the NIPS review in question? In any event, I find it disturbing, albeit slightly amusing, that one would bring out recent and anonymous (and private, nonetheless) NIPS reviews in public like this. The area chair should make note of this and consider whether it is appropriate.", "Hi Anonymous,\n\nCan you please clarify your comment \"essentially have done the Seq2SQL thing\"? We reference several semantic parsing papers that convert natural language questions to logical forms. Moreover, we reference works that apply semantic parsing or other neural models to tables. We can and will certainly add the recent work you listed to our paper, but perhaps the phrase \"neglect all previous studies\" is a bit harsh?\n\nI am not certain as to whether it is appropriate to mention anonymous NIPS reviews, but the main concern from our NIPS reviews (e.g. the only negative review) was that we do not compare to semantic parsing results. We have since rewritten the paper to clarify this point. Namely, our baseline is a state-of-the-art neural semantic parser by Dong et al., who demonstrated its effectiveness on four semantic parsing datasets. The particular review you mention was actually the most positive, with its conclusion being \"the experiment part is solid and convincing\" and \"I believe release of the datasets will benefit research in this direction.\"\n\nRegarding your concern about novelty:\n\nPrior work, including those you cite (and which we will certainly add to our references), mainly focus on semantic parsing over knowledge graphs or synthetic datasets. For example, the first work you reference by Liang et al. works on WebQuestionsSP, which has under 5k examples. The second work you reference by Mou et al. uses a dataset by Yin et. 
al that contains 25k synthetic examples from a single schema (the Olympic games table). Finally, your third reference by Guu et al uses SCONE, which is a synthetic semantic parsing dataset over 14k examples and 3 domains. \n\nIn contrast, WikiSQL (which this paper introduces) spans 80k examples and 24k schemas over real-world web tables - orders of magnitude larger than previous efforts. The number of schemas, in particular, poses a difficult generalization challenge. Moreover, WikiSQL contains natural language utterances annotated and verified by humans instead of generated templates. One of the novelties of our approach (Seq2SQL) is that while we operate on SQL tables, we do not observe the content of the table. That is, the rewards our model observes comes from database execution (as oppose to self execution). This also makes policy learning more challenging, because an important part of the environment (e.g. table content) is not observed. This is distinct from prior work, including those you reference, that learn using table content. Our approach forces the model to learn purely from the question and the table schema. This enables our model to act as a thin and scalable natural language interface instead of a database engine because it does not need to see the database content.\n\nFinally, as an impartial means to gauge the impact of our work, despite not having been published, WikiSQL is already seeing adoption and Seq2SQL is already being used as a reference baseline by the community (including submissions by other groups to this conference).", "The authors cited previous semantic parsing papers using seq2seq models, but ignored all previous reinforcement learning-based Seq2SQL papers. This has already been reminded of by previous conference reviewers, but is completely neglected again in the revision. It is hard to feel the authors' will for making that kind of revision, which (although will significantly diminish the novelty claimed by this paper) however is unavoidable if the authors want to make this paper scientifically sound.", "- There is a typo we will fix in the analysis of the WHERE clause in section 4.2. The example question should be \"which males\" instead of \"which men\". It is impossible for the model to generate the word \"men\" because it is not in the question nor the schema.", "It's amazing how the authors neglect all previous studies that essentially have done the Seq2SQL thing at least 3 times (arXiv:1612.01197; arXiv:1612.02741; arXiv:1704.07926). It's also amazing how the authors neglect the NIPS reviews which have already pointed this out." ]
[ 5, 4, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 5, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_Syx6bz-Ab", "iclr_2018_Syx6bz-Ab", "iclr_2018_Syx6bz-Ab", "SkGbiIKxz", "ByL2SX9ez", "HJ75M8ogM", "SJmanaxJz", "iclr_2018_Syx6bz-Ab", "S184E1gyz", "HJg8aSa0-", "By6SuETCZ", "HJg8aSa0-", "iclr_2018_Syx6bz-Ab", "iclr_2018_Syx6bz-Ab" ]
iclr_2018_BJInMmWC-
Generative Entity Networks: Disentangling Entities and Attributes in Visual Scenes using Partial Natural Language Descriptions
Generative image models have made significant progress in the last few years, and are now able to generate low-resolution images which sometimes look realistic. However, the state-of-the-art models utilize fully entangled latent representations where small changes to a single neuron can affect every output pixel in relatively arbitrary ways, and different neurons have possibly arbitrary relationships with each other. This limits the ability of such models to generalize to new combinations or orientations of objects, as well as their ability to connect with more structured representations such as natural language, without explicit strong supervision. In this work we explore the synergistic effect of using partial natural language scene descriptions to help disentangle the latent entities visible in an image. We present a novel neural network architecture called Generative Entity Networks, which jointly generates both the natural language descriptions and the images from a set of latent entities. Our model is based on the variational autoencoder framework and makes use of visual attention to identify and characterise the visual attributes of each entity. Using the ShapeWorld dataset, we show that our representation both enables a better generative model of images, leading to higher-quality image samples, and creates more semantically useful representations that improve performance over purely discriminative models on a simple natural language yes/no question answering task.
rejected-papers
This paper presents a novel model for generating images and natural language descriptions simultaneously. The aim is to disentangle representations learned for image generation by connecting them to the paired text. The reviews praise the problem setup and the mathematical formulation. However, they point out significant issues with the clarity of the presentation, in particular the diagrams, the citations, and the optimization procedure in general. They also point out issues with the experimental setup in terms of the datasets used and the lack of natural images for the tasks in question. The reviews are impressively thorough and should be of use for a future submission.
test
[ "ryU5DROgM", "r1MR39Kgz", "BJbx0qYlz", "HyK9VuT7M", "H1WUNO6Xz", "H1zZE_6mf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "This paper presented a Generative entity networks (GEN). It is a multi-view extension of variational autoencoder (VAE) for disentangled representation. It uses the image and its attributes. The paper is very well motivated and tackles an important problem. However, the presentation of the method is not clear, the experiment is not sufficient, and the paper is not polished. \n\nPros:\n1. This paper tackles an important research question. \nLearning a meaningful representation is needed in general. For the application of images, using text description to refine the representation is a natural and important research question. \n\n2. The proposed idea is very well motivated, and the proposed model seems correct. \n\nCons and questions:\n1. The presentation of the model is not clear. \nFigure 2 which is the graphic representation of the model is hard to read. There is no meaningful caption for this important figure. Which notation in the figure corresponds to which variable is not clear at all. This also leads to unclarity of the text presentation of the model, for example, section 3.2. Which latent variable is used to decode which part?\n\n2. Missing important related works.\nThere are a couple of highly related work with multi-view VAE tracking similar problem have been proposed in the past. The paper did not discuss these related work and did not compare the performances. Examples of these related work include [1] and [2] (at the end of the review).\nAdditionally, the idea of factorized representation idea (describable component and indescribable component) has a long history. It can be traced back to [3], used in PGM setting in [4] and used in VAE setting in [1]. This group of related work should also be discussed. \n\n3. Experiment evaluation is not sufficient. \nFirstly, only one toy dataset is used for experimental evaluations. More evaluations are needed to verify the method, especially with natural images. \nSecondly, there are no other state-of-the-art baselines are used. The baselines are various simiplied versions of the proposed model. More state-of-the-art baselines are needed, e.g. [1] and [2].\n\n4. Maybe overclaiming.\nIn the paper, only attributes of objects are used which is not semi-natural languages.\n\n5. The paper, in general, needs to be polished. \nThere are missing links and references in the paper and un-explained notations, and non-informative captions.\n\n6. Possibility to apply to natural images. \nThis method does not model spatial information. How can the method make sure that simple adding generated images with each component will lead to a meaningful image in the end? Especially with natural images, the spacial location and the scale should be critical. \n\n[1] Wang, Weiran, Honglak Lee, and Karen Livescu. \"Deep variational canonical correlation analysis.\" arXiv preprint arXiv:1610.03454 (2016).\n[2] Suzuki, Masahiro, Kotaro Nakayama, and Yutaka Matsuo. \"Joint Multimodal Learning with Deep Generative Models.\" arXiv preprint arXiv:1611.01891 (2016).\n[3] Tucker, Ledyard R. \"An inter-battery method of factor analysis.\" Psychometrika 23.2 (1958): 111-136.\n[4] Zhang, Cheng, Hedvig Kjellström, and Carl Henrik Ek. \"Inter-battery topic representation learning.\" European Conference on Computer Vision. Springer International Publishing, 2016.\n\n", "**Summary**\nThe paper proposes an extension of the attend, infer, repeat generative model of Eslami, 2016 and extends it to handle ``visual attribute descriptions. 
This straightforward extension is claimed to improve image quality and shown to improve performance on a previously introduced image caption ranking task. In general, the paper shows improvements on an image caption agreement task introduced in Kuhnle and Copestake, 2017. The paper seems to have weaknesses pertaining to the approach taken, clarity of presentation and comparison to baselines which mean that the paper does not seem to meet the acceptance threshold for ICLR. See more detailed points below in Weaknesses.\n\n**Strengths**\nI like the high-level motivation of the work, that one needs to understand and establish that language or semantics can help learn better representations for images. I buy the premise and think the work addresses an important issue. \n\n**Weakness**\n\nApproach:\n* A major limitation of the model seems to be that one needs access to both images and attribute vectors at inference time to compute representations which is a highly restrictive assumption (since inference networks are discriminative). The paper should explain how/if one can compute representations given just the image, for instance, say by not using amortized inference. The paper does propose to use an image-only encoder but that is intended in general as a modeling choice to explain statistics which are not captured by the attributes (in this case location and orientation as explained in the Introduction of the paper).\n\nClarity:\n* Eqn. 5, LHS can be written more clearly as \\hat{a}_k. \n\n* It would also be good to cite the following related work, which closely ties into the model of Eslami 2016, and is prior work: \n\nEfficient inference in occlusion-aware generative models of images,\nJonathan Huang, Kevin Murphy.\nICLR Workshops, 2016\n\n* It would be good to clarify that the paper is focusing on the image caption agreement task from Kuhnle and Copestake, as opposed to generic visual question answering.\n\n* The claim that the paper works with natural language should be toned down and clarified. This is not natural language, firstly because the language in the dataset is synthetically generated and not “natural”. Secondly, the approach parses this “synthetic” language into structured tuples which makes it even less natural. Also, Page. 3. What does “partial descriptions” mean?\n\n* Section 3: It would be good to explicitly draw out the graphical model for the proposed approach and clarify how it differs from prior work (Eslami, 2016).\n\n* Sec. 3. 4 mentions that the “only image” encoder is used to obtain the representation for the image, but the “only image” encoder is expected to capture the “indescribable component” from the image, then how is the attribute information from the image captured in this framework? One cannot hope to do image caption association prediction without capturing the image attributes...\n\n*, In general, the writing and presentation of the model seem highly fragmented, and it is not clear what the specifics of the overall model are. For instance, in the decoder, the paper mentions for the first time that there are variables “z”, but does not mention in the encoder how the variables “z” were obtained in the first place (Sec. 3.1). For instance, it is also not clear if the paper is modeling variable length sequences in a similar manner to Eslami, 2016 or not, and if this work also has a latent variable [z, z_pres] at every timestep which is used in a similar manner to Eqn. 2 in Eslami, 2016. Sec. 
3.4 “GEN Image Encoder” has some typo, it is not clear what the conditioning is within q(z) term.\n\n* Comparison to baselines: \n 1. How well does this model do against a baseline discriminative image caption ranking approach, similar to [D]? This seems like an important baseline to report for the image caption ranking task.\n 2. Another crucial baseline is to train the Attend, Infer, Repeat model on the ShapeWorld images, and then take the latent state inferred at every step by that model, and use those features instead of the features described in Sec. 3.4 “Gen Image Encoder” and repeat the rest of the proposed pipeline. Does the proposed approach still show gains over Attend Infer Repeat?\n 3. The results shown in Fig. 7 are surprising -- in general, it does not seem like a regular VAE would do so poorly. Are the number of parameters in the proposed approach and the baseline VAE similar? Are the choices of decoder etc. similar? Did the model used for drawing Fig. 7 converge? Would be good to provide its training curve. Also, it would be good to evaluate the AIR model from Eslami, 2016 on the same simple shapes dataset and show unconditional samples. If the claim from the work is true, that model should be just as bad as a regular VAE and would clearly establish that using language is helping get better image samples.\n\n* Page 2: In general the notion of separating the latent space into content and style, where we have labels for the “content” is an old idea that has appeared in the literature and should be cited accordingly. See [B] for an earlier treatment, and an extension by [A]. See also the Bivcca-private model of [C] which has “private” latent variables for vision similar to this work (this is relevant to Sec. 3.2.)\n\nReferences:\n[A]: Upchurch, Paul, Noah Snavely, and Kavita Bala. 2016. “From A to Z: Supervised Transfer of Style and Content Using Deep Neural Network Generators.” arXiv [cs.CV]. arXiv. http://arxiv.org/abs/1603.02003.\n\n[B]: Kingma, Diederik P., Danilo J. Rezende, Shakir Mohamed, and Max Welling. 2014. “Semi-Supervised Learning with Deep Generative Models.” arXiv [cs.LG]. arXiv. http://arxiv.org/abs/1406.5298.\n\n[C]: Wang, Weiran, Xinchen Yan, Honglak Lee, and Karen Livescu. 2016. “Deep Variational Canonical Correlation Analysis.” arXiv [cs.LG]. arXiv. http://arxiv.org/abs/1610.03454.\n\n[D]: Kiros, Ryan, Ruslan Salakhutdinov, and Richard S. Zemel. 2014. “Unifying Visual-Semantic Embeddings with Multimodal Neural Language Models.” arXiv [cs.LG]. arXiv. http://arxiv.org/abs/1411.2539.\n", "Summary: The authors observe that the current image generation models generate realistic images however as the dimensions of the latent vector is fully entangled, small changes to a single neuron can effect every output pixel in arbitrary ways. In this work, they explore the effect of using partial natural language scene descriptions for the task of disentangling the latent entities visible in the image. The proposed Generative Entity Networks jointly generates the natural language descriptions and images from scratch. The core model is Variational Autoencoders (VAE) with an integrated visual attention mechanism that also generates the associated text. 
The experiments are conducted on the Shapeworld dataset.\n\nStrengths:\nSimultaneous text and image generation is an interesting research topic that is relevant for the community.\nThe paper is well written, the model is formulated with no errors (although it could use some more detail) and supported by illustrations (although there are some issues with the illustrations detailed below). \nThe model is evaluated on tasks that it was not trained on which indicate that this model learns generalizable latent representations. \n\nWeaknesses:\nThe paper gives the impression to be rushed, i.e. there are citations missing (page 3 and 6), the encoder model illustration is not as clear as it could be. Especially the white boxes have no labels, the experiments are conducted only on one small-scale proof of concept dataset, several relevant references are missing, e.g. GAN, DCGAN, GAWWN, StackGAN. Visual Question answering is mentioned several times in the paper, however no evaluations are done in this task.\n\nFigure 2 is complex and confusing due to the lack of proper explanation in the text. The reader has to find out the connections between the textual description of the model and the figure themselves due to no reference to particular aspects of the figure at all. In addition the notation of the modules in the figure is almost completely disjoint so that it is initially unclear which terms are used interchangeably.\n\nDetails of the “white components” in Figure 2 are not mentioned at all. E.g., what is the purpose of the fully connected layers, why do the CNNs split and what is the difference in the two blocks (i.e. what is the reason for the addition small CNN block in one of the two)\n\nThe optimization procedure is unclear. What is the exact loss for each step in the recurrence of the outputs (according to Figure 5)? Or is only the final image and description optimized. If so, how is the partial language description as a target handled since the description for a different entity in an image might be valid, but not the current target. (This is based on my understanding that each data point consists of one image with multiple entities and one description that only refers to one of the entities).\n\nAn analysis or explanation of the following would be desirable: How is the network trained on single descriptions able to generate multiple descriptions during evaluation. How does thresholding mentioned in Figure 5 work?\n\nIn the text, k suggests to be identical to the number of entities in the image. In Figure 5, k seems to be larger than the number of entities. How is k chosen? Is it fixed or dynamic?\n\nEven though the title claims that the model disentangles the latent space on an entity-level, it is not mentioned in the paper. Intuitively from Figure 5, the network generates black images (i.e. all values close to zero) whenever the attention is on no entity and, hence, when attention is on an entity the latent space represents only this entity and the image is generated only showing that particular entity. However, confirmation of this intuition is needed since this is a central claim of the paper.\n\nAs the main idea and the proposed model is simple and intuitive, the evaluation is quite important for this paper to be convincing. Shapeworlds dataset seems to be an interesting proof-of-concept dataset however it suffers from the following weaknesses that prevent the experiments from being convincing especially as they are not supported with more realistic setups. 
First, the visual data is composed of primitive shapes and colors in a black background. Second, the sentences are simple and non-realistic. Third, it is not used widely in the literature, therefore no benchmarks exist on this data. \n\nIt is not easy to read the figures in the experimental section, no walkthrough of the results are provided. For instance in Figure 4a, the task is described as “showing the changes in the attribute latent variables” which gives the impression that, e.g. for the first row the interpolation would be between a purple triangle to a purple rectangle however in the middle the intermediate shapes also are painted with a different color. It is not clear why the color in the middle changes.\n\nThe evaluation criteria reported on Table 1 is not clear. How is the accuracy measured, e.g. with respect to the number of objects mentioned in the sentence, the accuracy of the attribute values, the deviation from the ground truth sentence (if so, what is the evaluation metric)? No example sentences are provided for a qualitative comparisons. In fact, it is not clear if the model generates full sentences or attribute phrases.\n\nAs a summary, this paper would benefit significantly with a more extensive overview of the existing relevant models, clarification on the model details mentioned above and a more through experimental evaluation with more datasets and clear explanation of the findings.", "(a)\t\"A major limitation of the model seems to be that one needs access to both images and attribute vectors at inference time to compute representations which is a highly restrictive assumption (since inference networks are discriminative). The paper should explain how/if one can compute representations given just the image, for instance, say by not using amortized inference. The paper does propose to use an image-only encoder but that is intended in general as a modeling choice to explain statistics which are not captured by the attributes (in this case location and orientation as explained in the Introduction of the paper).\"\n\nThere is a misunderstanding here and we should have clarified this in the paper. The model only needs access to an image as input in order to perform inference. In fact this is how we obtain the representations used in the auxilliary ShapeWorld tasks. We encode images without paired language by ignoring the language inputs in equation (5), instead using \\hat{a} = a^{I}.\n\n\n(b)\t\"* Sec. 3. 4 mentions that the only image encoder is used to obtain the representation for the image, but the only image encoder is expected to capture the indescribable component from the image, then how is the attribute information from the image captured in this framework? One cannot hope to do image caption association prediction without capturing the image attributes...\"\n\nAs we discussed in section 3.1, the image encoder generates both the attribute information as well as the \"indescribable component\". When the encoder is also provided the language, then the multi-modal aggregator is used to coherently combine the attribute predictions generated from the language with the predictions from the image encoder. \n\n(c)\t\"in the decoder, the paper mentions for the first time that there are variables z, but does not mention in the encoder how the variables z were obtained in the first place (Sec. 3.1).”\n\nWe appreciate that this is not sufficiently explained in the paper and will clarify this in future work. 
We did mention in Sec 3.1 the process for obtaining the visual latent variables z^V_k: \"Finally each visual object representation v_k is passed through an MLP to obtain the parameters of the approximate posterior distribution over that object’s visual latent variables.\" However we did not state the equivalent process for obtaining the attribute latent variables z^A_k (which is achieved by applying a MLP to a^hat_k). \n\n(d)\t“Sec. 3.4 GEN Image Encoder has some typo, it is not clear what the conditioning is within q(z) term.”\nYes this is a typo, the notation should read q*(z | I), where q* is the encoder applied to only to input images with the modification described earlier.\n", "(a)\t\"Visual Question answering is mentioned several times in the paper, however no evaluations are done in this task\"\n\nMost of the mentions of visual question answering in the paper are meant to refer to the general task of answering questions about an image, and in this sense, the Shapeworld caption classification we evaluate on is a visual question answering task.\n\n(b)\t\"The optimization procedure is unclear. What is the exact loss for each step in the recurrence of the outputs (according to Figure 5)? Or is only the final image and description optimized. If so, how is the partial language description as a target handled since the description for a different entity in an image might be valid, but not the current target. (This is based on my understanding that each data point consists of one image with multiple entities and one description that only refers to one of the entities).\"\n\nWe discuss this in Section 3.3 where we say: \"However as the model outputs language predictions for multiple objects, and yet only one object is described in language per scene, we maximize over assignments of predicted language to the true caption.\"\n\n(c)\t\"An analysis or explanation of the following would be desirable: How is the network trained on single descriptions able to generate multiple descriptions during evaluation.\"\n\nThe network always generates multiple descriptions (one for each recurrent step), but as we just highlighted, a loss signal is only generated between the provided description, and the generated description which most closely matches it.\n\n(d)\t\"How does thresholding mentioned in Figure 5 work?\"\nFor every encoded entity, the model decoder outputs word probabilities. We report all words assigned probability greater than 0.5 by the model.\n\n(e)\t\"In the text, k suggests to be identical to the number of entities in the image. In Figure 5, k seems to be larger than the number of entities. How is k chosen? Is it fixed or dynamic?\"\nk is chosen statically as an upper bound on the number of entities in the image. The model can then avoid using entities, by not drawing anything to the image for a given entity, and by generating zeros for all natural language attributes.\n\n(f)\t\"Even though the title claims that the model disentangles the latent space on an entity-level, it is not mentioned in the paper. Intuitively from Figure 5, the network generates black images (i.e. all values close to zero) whenever the attention is on no entity and, hence, when attention is on an entity the latent space represents only this entity and the image is generated only showing that particular entity. 
However, confirmation of this intuition is needed since this is a central claim of the paper.\"\n\nFigure 4(b)(d) indicates that manipulation of latent variables associated with a particular entity results in visual changes in only one object (e.g. the location / rotation of the green rectangle in 4d). This indicates that the latent representation is disentangled on an entity-level. Did you have a specific experiment in mind that you thought would more clearly show that the representation was disentangled?\n\n(g)\t\"no benchmarks exist on this data\"\nThere do exist a carefully chosen set of benchmarks for the VQA dataset which were adapted for this dataset, and these are the benchmarks that we compare to. But we agree that benchmarks for generative modeling don't exist for this dataset.\n\n(h)\t\"It is not easy to read the figures in the experimental section, no walkthrough of the results are provided. For instance in Figure 4a, the task is described as showing the changes in the attribute latent variables which gives the impression that, e.g. for the first row the interpolation would be between a purple triangle to a purple rectangle however in the middle the intermediate shapes also are painted with a different color. It is not clear why the color in the middle changes.\"\n\nWe attempted to address this issue in the figure caption where we said: \"Note that we should not expect the division between the color and shape semantic attributes to align to the two latent dimensions since the GEN model leaves the encoding of the attribute dimensions completely entangled for a given entity.\"\n\n\n(i)\t\"The evaluation criteria reported on Table 1 is not clear. How is the accuracy measured, e.g. with respect to the number of objects mentioned in the sentence, the accuracy of the attribute values, the deviation from the ground truth sentence (if so, what is the evaluation metric)? No example sentences are provided for a qualitative comparisons. In fact, it is not clear if the model generates full sentences or attribute phrases.\"\n\nWe should have made this more clear in the paper. Each natural language description in the dataset is labeled as either True or False, and the task is to predict this label. So the accuracy numbers simply indicate whether or not a given description is correctly predicted to be True or False.\n\n", "We would like to thank the reviewers for reading the paper so carefully and for their detailed reviews. In addition to polishing the writing, and filling out the related work section, the main weakness of the paper seems to be both our evaluation on only one dataset, as well as comparing to only the dataset baselines rather than more recent stronger baselines. We plan to work on this, and resubmit to a later conference.\n\nHowever, we did want to clarify a few confusions which were brought up in the reviews.\n" ]
[ 4, 5, 5, -1, -1, -1 ]
[ 4, 4, 5, -1, -1, -1 ]
[ "iclr_2018_BJInMmWC-", "iclr_2018_BJInMmWC-", "iclr_2018_BJInMmWC-", "H1zZE_6mf", "H1zZE_6mf", "iclr_2018_BJInMmWC-" ]
iclr_2018_r1Zi2Mb0-
EXPLORING NEURAL ARCHITECTURE SEARCH FOR LANGUAGE TASKS
Neural architecture search (NAS), the task of finding neural architectures automatically, has recently emerged as a promising approach for finding models that are better than human-designed ones. However, most success stories are for vision tasks, and results have been quite limited for text, except for a small language modeling setup. In this paper, we explore NAS for text sequences at scale, by first focusing on the task of language translation and later extending to reading comprehension. Starting from standard sequence-to-sequence models for translation, we conduct extensive searches over the recurrent cells and attention similarity functions across two translation tasks, IWSLT English-Vietnamese and WMT German-English. We report challenges in performing cell searches as well as demonstrate initial success on attention searches with translation improvements over strong baselines. In addition, we show that results on attention searches are transferable to reading comprehension on the SQuAD dataset.
rejected-papers
This paper extends work on neural architecture search by introducing a new search framework and experiments on the new domains of NMT and QA. The results of the work are beneficial and show improvements using this approach. However, the reviewers point out significant issues with the approach itself: - There is skepticism about the use of NAS in general, particularly compared to using the same computational power for other, simpler types of hyperparameter search. - There is general concern about the use of such large-scale brute-force methods. Several of the reviewers expressed concerns about ever being able to replicate these results. - Given the computational power required, the reviewers feel that the gains are not particularly large; for instance, the SQuAD results are not compared to the best reported systems.
train
[ "Bkyu46Xlz", "r1E8UmKgz", "SJVkJ0Axf", "Hkv5oLTQf", "BywoFAUkM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "public" ]
[ "The paper explores neural architecture search for translation and reading comprehension tasks. It is fairly clearly written and required a lot of large-scale experimentation. However, the paper introduces few new ideas and seems very much like applying an existing framework to new problems. It is probably better suited for presentation in a workshop rather than as a conference paper.\n\nA new idea in the paper is the stack-based search. However, there is no direct comparison to the tree-based search. A clear like for like comparison would be interesting.\n\nMethodology. The test set newstest2014 of WMT German-English officially contains 3000 sentences. Please check http://statmt.org/wmt14. \nAlso, how stable are the results you obtain, did you rerun the selected architectures with multiple seeds? The difference between the WMT baseline of 28.8 and your best configuration of 29.1 BLEU can often be simply obtained by different random weight initializations.\n\nThe Squad results (table 2) should list a more recent SOTA result to be fair as it gives the impression that the system presented here is SOTA.", "This paper proposes a method to find an effective structure of RNNs and attention mechanisms by searching programs over the stack-oriented execution engine.\n\nAlthough the new point in this paper looks only the representation paradigm of each program: (possibly variable length) list of the function applications, that could be a flexible framework to find a function without any prior structures like Fig.1-left.\n\nHowever, the design of the execution engine looks not well-designed. E.g., authors described that the engine ignores the binary operations that could not be executed at the time. But in my thought, such operations should not be included in the set of candidate operations, i.e., the set of candidates should be constrained directly by the state of the stack.\nAlso, including repeating \"identity\" operations (in the candidates of attention operations) seems that some unnecessary redundancy is introduced into the search space. The same expressiveness could be achieved by predicting a special token only once at the end of the sequence (namely, \"end-of-sequence\" token as just same as usual auto-regressive RNN-based decoder models).\n\nComparison in experiments looks meaningless. Score improvement is slight nevertheless authors paid much computation cost for searching accurate network structures. The conventional method (Zoph&Le,17) in row 3 of Table 1 looks not comparable with proposed methods because it is trained by an out-of-domain task (LM) using conventional (tree-based) search space. Authors should at least show the result by applying the conventional search space to the tasks of this paper.\nIn Table 2, the \"our baseline\" looks cheap because the dot product is the least attention model in those proposed in past studies.\n\nThe catastrophic score drop in the rows 5 and 7 in Table 1 looks interesting, but the paper does not show enough comprehension about this phenomenon, which makes the proposed method hard to apply other tasks.\nThe same problem exists in the setting of the hyperparameters in the reward functions. According to the footnote, there are largely different settings about the value of \\beta, which suggest a sensitivity by changing this parameter. Authors should provide some criterion to choose these hyperparameters.", "This paper experiments the application of NAS to some natural language processing tasks : machine translation and question answering. 
\n\nMy main concern about this paper is its contribution. The difference from the paper of Zoph (2017) is really slight in terms of methodology. Moving from a language modeling task to machine translation is neither very impressive nor really discussed. It could be interesting to change the NAS approach by taking this application shift into account. \n\nOn the experimental side, the paper is not really convincing. The results on WMT are not state of the art. The best system this year was a standard phrase-based system and achieved a 29.3 BLEU score (for cased BLEU; otherwise it is one point more). Therefore the results on the MT tasks are difficult to interpret. \n\nIn the end, the reader can be sure that these experiments required significant computational power. Beyond that, it is difficult to draw meaningful conclusions. ", "Dear reviewers and the public,\n\nWe would like to thank the reviewers and the public again for weighing in on the paper. We will try to improve the evaluation in the next revision of the paper.\n\nAuthors", "Given that the original PTB NAS claims of outperforming LSTMs have been thoroughly debunked by a hyperparameter optimizer with 5x less compute [1], and that hyperparameter optimization is doing qualitatively the same thing as NAS, it would be good to mention this somewhere in the related work/intro.\n\nShowing marginal improvements off of a very weak SQuAD baseline isn't terribly impressive. According to the latest leaderboard [2], their baseline model, BiDAF from a year ago, is ranked 34th, and their improved model would be 24th (the same as the new BiDAF simple baseline, and 5.4 F1 points below SOTA). Perhaps you should include some more results here to more accurately represent your findings.\n\nCan you give a meaningful comparison of the compute used for your NAS versus the amount used for the GNMT baseline? All I see is a mention of using 100-200 GPUs for some unspecified period of time. If you're using more compute than GNMT, can you take that into account to produce a meaningful comparison?\n\n[1] https://openreview.net/forum?id=ByJHuTgA-\n[2] https://rajpurkar.github.io/SQuAD-explorer/" ]
[ 3, 4, 3, -1, -1 ]
[ 4, 4, 4, -1, -1 ]
[ "iclr_2018_r1Zi2Mb0-", "iclr_2018_r1Zi2Mb0-", "iclr_2018_r1Zi2Mb0-", "iclr_2018_r1Zi2Mb0-", "iclr_2018_r1Zi2Mb0-" ]
iclr_2018_S1347ot3b
Exploring Sentence Vectors Through Automatic Summarization
Vector semantics, especially sentence vectors, have recently been used successfully in many areas of natural language processing. However, relatively little work has explored the internal structure and properties of spaces of sentence vectors. In this paper, we will explore the properties of sentence vectors by studying a particular real-world application: Automatic Summarization. In particular, we show that cosine similarity between sentence vectors and document vectors is strongly correlated with sentence importance and that vector semantics can identify and correct gaps between the sentences chosen so far and the document. In addition, we identify specific dimensions which are linked to effective summaries. To our knowledge, this is the first time specific dimensions of sentence embeddings have been connected to sentence properties. We also compare the features of different methods of sentence embeddings. Many of these insights have applications in uses of sentence embeddings far beyond summarization.
rejected-papers
This work is interested in using sentence vector representations both as a method for extractive summarization and as a way to better understand the structure of vector representations. While the methodological aspects utilize representation learning, the reviewers felt that the main thrust of the work would be better suited for a summarization workshop or even an NLP venue, as it does not target DL-based contributions. Additionally, they felt that the work did not significantly engage with the long literature on the problem of summarization.
train
[ "BkSq8vBxG", "Sk3CQYOgG", "rk-MJ-ceM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The authors report a number of experiments using off-the-shelf sentence embedding methods for performing extractive summarisation, using a number of simple methods for choosing the extracted sentences. Unfortunately the contribution is too minor, and the work too incremental, to be worthy of a place at a top-tier international conference such as ICLR. The overall presentation is also below the required standard. The work would be better suited for a focused summarisation workshop, where there would be more interest from the participants.\n\nSome of the statements motivating the work are questionable. I don't know if sentence vectors *in particular* have been especially successful in recent NLP (unless we count neural MT with attention as using \"sentence vectors\"). It's also not the case that the sentence reordering and text simplification problems have been solved, as is suggested on p.2. \n\nThe most effective method is a simple greedy technique. I'm not sure I'd describe this as being \"based on fundamental principles of vector semantics\" (p.4).\n\nThe citations often have the authors mentioned twice.\n\nThe reference to \"making or breaking applications\" in the conclusion strikes me as premature to say the least.\n", "This paper explored the effectiveness of four existing sentence embedding models on ten different document summarization methods leveraging various works in the literature. Evaluation has been conducted on the DUC-2004 dataset and ROUGE-1 and ROUGE-2 scores are reported. \n\nOverall, the paper significantly suffered from an immature writing style, numerous typos/grammatical mistakes, inconsistent organization of content, and importantly, limited technical contribution. Many recent sentence embedding models are missed such as those from Lin et al. (2017), Gan et al. (2017), Conneau et al. (2017), Jernite et al. (2017) etc. The evaluation and discussion sections were mostly unclear and the results of poorly performing methods were not reported at all making the comparisons and arguments difficult to comprehend. \n\nIn general, the paper seemed to be an ordinary reporting of some preliminary work, which at its current stage would not be much impactful to the research community.", "This paper examines a number of sentence and document embedding methods for automatic summarization. It pairs a number of recent sentence embedding algorithms (e.g., Paragraph Vectors and Skip-Thought Vectors) with several simple summarization decoding algorithms for sentence selection, and evaluates the resulting output summary on DUC 2004 using ROUGE, based on the general intuition that the selected summary should be similar to the original document in the vector space induced by the embedding algorithm. It further provides a number of analyses of the sentence representations as they relate to summarization, and other aspects of the summarization process including the decoding algorithm.\n\nThe paper was well written and easy to understand. I appreciate the effort to apply these representation techniques in an extrinsic task.\n\nHowever, the signficance of the results may be limited, because the paper does not respond to a long line of work in summarization literature which have addressed many of the same points. In particular, I worry that the paper may in part be reinventing the wheel, in that many of the results are quite incremental with respect to previous observations in the field.\n\nGreedy decoding and non-redundancy: many methods in summarization use greedy decoding algorithms. 
For example, SumBasic (Nenkova and Vanderwende, 2005), and HierSum (Haghighi and Vanderwende, 2009) are two such papers. This specific topic has been thoroughly expanded on by the work on greedy decoding for submodular objective functions in summarization (Lin and Bilmes, 2011), as well as many papers which focus on how to optimize for both informativeness and non-redundancy (Kulesza and Taskar, 2012). \n\t\nThe idea that the summary should be similar to the entire document is known as centrality. Some papers that exploit or examine that property include (Nenkova and Vanderwende, 2005; Louis and Nenkova, 2009; Cheung and Penn, 2013)\n \nAnother possible reading of the paper is that its novelty lies in the evaluation of sentence embedding models, specifically. However, these methods were not designed for summarization, and I don't see why they should necessarily work well for this task out of the box with simple decoding algorithms without finetuning. Also, the ROUGE results are so far from the SotA that I'm not sure what the value of analyzing this suite of techniques is.\n \nIn summary, I understand that this paper does not attempt to produce a state-of-the-art summarization system, but I find it hard to understand how it contributes to our understanding of future progress in the summmarization field. If the goal is to use summarization as an extrinsic evaluation of sentence embedding models, there needs to be better justification of this is a good idea when there are so many other issues in content selection that are not due to sentence embedding quality, but which affect summarization results.\n\nReferences:\n\nNenkova and Vanderwende, 2005. The impact of frequency on summarization. Tech report.\nHaghighi and Vanderwende, 2009. Exploring content models for multi-document summarization. NAACL-HLT 2009.\nLin and Bilmes, 2011. A class of submodular functions for document summarization. ACL-HLT 2011.\nKulesza and Taskar, 2012. Learning Determinantal Point Processes.\nLouis and Nenkova, 2009. Automatically evaluating content selection in summarization without human models. EMNLP 2009.\nCheung and Penn, 2013. Towards Robust Abstractive Multi-Document Summarization: A Caseframe Analysis of Centrality and Domain. ACL 2013.\n\nOther notes:\nThe acknowledgements seem to break double-blind reviewing." ]
[ 2, 2, 3 ]
[ 5, 5, 5 ]
[ "iclr_2018_S1347ot3b", "iclr_2018_S1347ot3b", "iclr_2018_S1347ot3b" ]
iclr_2018_BJDEbngCZ
Global Convergence of Policy Gradient Methods for Linearized Control Problems
Direct policy gradient methods for reinforcement learning and continuous control problems are a popular approach for a variety of reasons: 1) they are easy to implement without explicit knowledge of the underlying model; 2) they are an "end-to-end" approach, directly optimizing the performance metric of interest; 3) they inherently allow for richly parameterized policies. A notable drawback is that even in the most basic continuous control problem (that of linear quadratic regulators), these methods must solve a non-convex optimization problem, where little is understood about their efficiency from both computational and statistical perspectives. In contrast, system identification and model based planning in optimal control theory have a much more solid theoretical footing, where much is known with regards to their computational and statistical properties. This work bridges this gap showing that (model free) policy gradient methods globally converge to the optimal solution and are efficient (polynomially so in relevant problem dependent quantities) with regards to their sample and computational complexities.
rejected-papers
The paper studies the global convergence for policy gradient methods for linear control problems. Multiple reviewers point out strong concerns about the novelty of the results.
train
[ "ry1QoRKlf", "Hy0XRb5lG", "HJaedAnWz", "S1SMngjMf", "r1dg3lsMz", "SyFAixjzf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "The work investigates convergence guarantees of gradient-type policies for reinforcement learning and continuous control\nproblems, both in deterministic and randomized case, whiling coping with non-convexity of the objective. I found that the paper suffers many shortcomings that must be addressed:\n\n1) The writing and organization is quite cumbersome and should be improved.\n2) The authors state in the abstract (and elsewhere): \"... showing that (model free) policy gradient methods globally converge to the optimal solution ...\". This is misleading and NOT true. The authors show the convergence of the objective but not of the iterates sequence. This should be rephrased elsewhere.\n3) An important literature on convergence of descent-type methods for semialgebraic objectives is available but not discussed.", "I find this paper not suitable for ICLR. All the results are more or less direct applications of existing optimization techniques, and not provide fundamental new understandings of the learning REPRESENTATION.", "The paper studies the global convergence for policy gradient methods for linear control problems. \n(1) The topic of this paper seems to have minimal connection with ICRL. It might be more appropriate for this paper to be reviewed at a control/optimization conference, so that all the technical analysis can be evaluated carefully. \n\n(2) I am not convinced if the main results are novel. The convergence of policy gradient does not rely on the convexity of the loss function, which is known in the community of control and dynamic programming. The convergence of policy gradient is related to the convergence of actor-critic, which is essentially a form of policy iteration. I am not sure if it is a good idea to examine the convergence purely from an optimization perspective.\n\n(3) The main results of this paper seem technical sound. However, the results seem a bit limited because it does not apply to neural-network function approximator. It does not apply to the more general control problem rather than quadratic cost function, which is quite restricted. I might have missed something here. I strongly suggest that these results be submitted to a more suitable venue.\n\n", "1. What we mean by 'rate of convergence’: as is clear in our theorems, we aim to show convergence rates for the objective value. This is standard in optimization literature. To be more clear and explicit in the abstract and introduction, we will update to say that \"the algorithms converge to a controller K with objective that's epsilon-close to optimal value.\" We could also prove the convergence of the iterate sequence (the parameters) but that is not our main interest.\n\n2. Perhaps what the reviewer is referring to is the literature behind Kurdyka-Lojasiewicz (KL) or Polyak-Lojasiewicz (PL) inequalities and what functions satisfy them---often functions satisfy these properties with a known exponent only *locally* (e.g., families of semialgebraic functions) and then the inequalities are used to show rates of convergence to a stationary point. \nWe are also using the KL inequality, but what's interesting is that for us the KL inequality hold globally (the only assumption is for C(K0) to be bounded) and we are able to show convergence (in function values) to the globally optimal value. This kind of situation is rare, we haven't seen too many nontrivial functions satisfying it.We believe this is an interesting new viewpoint for LQR that the controls community has not taken before.\n", "1. 
This paper proves the convergence of several algorithms (policy gradient, natural policy gradient) that are widely used in recent developments in reinforcement learning (including the ones using neural networks and learning representations). We believe understanding the behavior of these algorithms in the LQR setting is an important and necessary first step before understanding the more complicated neural network settings. Also, a common technique in practice is to approximate the problem locally as linear dynamical systems, and our results can be applied in these settings.\n2. Our result are not direct applications of existing optimization techniques. As we observed in the paper, the problem is non-convex (and is not even quasi-convex or star-convex) and existing techniques do not work in this setting. We do draw analogs to some familiar concepts (such as smoothness) in optimization, but the way we prove these guarantees is very different from the existing literature.\n", "1. This paper proves the convergence of several algorithms (policy gradient, natural policy gradient) that are widely used in the recent developments in reinforcement learning (including the ones using neural networks and learning representations). We believe understanding the behavior of these algorithms in the LQR setting is an important and necessary first step before understanding the more complicated neural network settings. Also, a common technique in practice is to approximate the problem locally as linear dynamical systems, and our results can be applied in these settings.\n2. We are not aware of any global convergence guarantees in the general non-convex setting. There are some convergence guarantees in convex settings, but even in convex settings there are worst-case examples that require super-polynomial number of iterations for policy iteration (for example, the construction in paper “Sub-exponential lower bounds for randomized pivoting rules for solving linear programs” by Friedmann et al.). In the general non-convex setting, even converge to a local minimum (rather than a saddle point) can take exponential time. Our contribution is to prove that policy gradient actually converges in polynomial number of iterations in the setting of LQR.\n" ]
[ 6, 5, 5, -1, -1, -1 ]
[ 4, 3, 3, -1, -1, -1 ]
[ "iclr_2018_BJDEbngCZ", "iclr_2018_BJDEbngCZ", "iclr_2018_BJDEbngCZ", "ry1QoRKlf", "Hy0XRb5lG", "HJaedAnWz" ]
iclr_2018_SJICXeWAb
Depth separation and weight-width trade-offs for sigmoidal neural networks
Some recent work has shown separation between the expressive power of depth-2 and depth-3 neural networks. These separation results are shown by constructing functions and input distributions, so that the function is well-approximable by a depth-3 neural network of polynomial size but it cannot be well-approximated under the chosen input distribution by any depth-2 neural network of polynomial size. These results are not robust and require carefully chosen functions as well as input distributions. We show a similar separation between the expressive power of depth-2 and depth-3 sigmoidal neural networks over a large class of input distributions, as long as the weights are polynomially bounded. While doing so, we also show that depth-2 sigmoidal neural networks with small width and small weights can be well-approximated by low-degree multivariate polynomials.
rejected-papers
The reviewers point out that most of the results are already known and are not novel. There are also issues with the presentation. Studying only depth 2 and depth 3 networks is very limiting.
train
[ "rkb3kqxxf", "ry5vY1teG", "rJ7xdwpeM", "SJ63ZL2Xz", "Bk84l8nQM", "BkcJxU2XG", "H1F_0H3XG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "This paper contributes to the growing literature on depth separations in neural network, showing cases where depth is provably needed to express certain functions. Specifically, the paper shows that there are functions on R^d that can be approximated well by a depth-3 sigmoidal network with poly(d) weights, that cannot be approximated by a depth-2 sigmoidal network with poly(d) weights, and with respect to any input distributions with sufficiently large density in some part of the domain. The proof builds on ideas in Daniely (2017) and Shalev-Shwartz et al. (2011). \n\nCompared to previous works, the main novelty of the result is that it applies to a very large family of input distributions, as opposed to some specific distributions. On the flip side, it applies only to networks with sigmoids as activation functions, and the weights need to be polynomially bounded. Moreover, although the result is robust to the choice of input distribution, the function used to get the lower bound is still rather artificial ( x -> sin(N||x||^2) for some large N). In a sense, this is complementary to the separation result in Safran and Shamir (2017), mentioned by the authors, where the function is arguably \"natural\", but the distribution is not. Finally, the proof ideas appear to be not too different than those of Daniely (2017).\n\nOverall, I think this is a decent contribution to this topic, and would recommend accepting it given enough room. It's a bit incremental in light of existing work, but does contribute to the important question of whether we can prove depth separations which are also robust.", "This paper proves a new separation results from 3-layer neural networks to 2-layer neural networks. The core of the analysis is a proof that any 2-layer neural networks can be well approximated by a polynomial function with reasonably low degrees. Then the authors constructs a highly non-smooth function can be represented by a 3-layer network, but impossible to approximate by any polynomial-degree polynomial function.\n\nSimilar results about polynomial approximation can be found in [1] (Theorem 4). To me, the result proved in [1] is spiritually very similar to propositions 3-4. The authors need to justify the difference.\n\nThe main strength of the new separation result is that it holds for a larger class of input distributions. Comparing to Daniely (2017) which requires the input distribution to be spherically uniform, the new result only needs the distribution to be lower bounded by 1/poly(d) in a small ball of radius 1/poly(d). Conceptually I don't think this is a much weaker condition. For a \"truly\" non-uniform distribution, one should allow its density function to be very close to zero at certain regions of the ball. Nevertheless, the result is a step forward from Daniely (2017) and the paper is well written.\n\nI am still in doubt of the practical value of such kind of separation results. The paper proves the separation by constructing a very specific function that cannot be approximated by 2-layer networks. This function has a super large Lipschitz constant, which we don't expect to see in practice. Consider the function f(x)=cos(Nx). When N is chosen large enough, the function f can not be well approximated by any 2-layer network with polynomial size. Does it imply that the family of cosine functions is rich enough so that it is a better family to learn than 2-layer neural networks? I guess the answer would be negative. 
In addition, the paper doesn't show that any 2-layer network can be well approximated by a 3-layer network, which is a missing piece in justifying the richness of 3-layer nets.\n\nFinally, the constructed \"hard\" function has order d^5 Lipschitz constant, but Theorem 7 assumes that the 2-layer networks' weight must be bounded by O(d^2). This assumption is crucial to the proof but not well justified (especially considering the d^5 factor in the function definition).\n\n[1] On the Computational Efficiency of Training Neural Networks, Livni et al., NIPS'14", "The paper shows that there are functions that can be represented by depth 3 sigmoidal neural networks (with polynomial weights and polynomially many units), but sigmoidal networks of depth 2 with polynomially bounded weights require exponentially many units. There is nothing new technically in the paper and I find the results uninteresting given the spate of results of this kind. I don't share the authors enthusiasm about much more general distributions etc. The approximations the authors are shooting for are much stronger the kind that has been used by Eldan and Shamir (2016) or other such papers. The approximation used here is $\\ell_\\infty$ rather than $\\ell_2$. So a negative result for depth 2 is weaker; the earlier work (and almost trivially by using the work of Cybenko, Hornik, etc.) already shows that the depth -3 approximations are uniform approximators. \n\nThe fact that sigmoidal neural networks with bounded weights can be expressed as \"low\" degree polynomials is not new. Much stronger results including bounds on the weights of the polynomial (sum of squares of coefficients) appear implicitly in Zhang et al. (2014) and Goel et al. (2017). In fact, these last two papers go further and show that this has implications for learnability not just for representation as the current paper shows. \n\nAdditionally, I think the paper is a bit sloppy in the maths. For example, Theorem 7 does not specifiy what delta is. I'm sure they mean that there is a \"small enough \\delta\" (with possible dependence on d, B, etc.). But surely this statement is not true for all values of $\\delta$. For e.g. when $\\delta = 1$, sin(\\pi d^5 \\Vert x \\Vert^2) can rather trivially be expressed as a sigmoidal neural network of depth 2. \n\nOverall, I think this paper has a collection of results that are well-known to experts in the field and add little novelty. It's unlikely that having yet another paper separating depth 2 from depth 3 with some other set of conditions will move us towards further progress in the very important question of depth separation. ", "1. Added a new section (Section 5 on separation under L2).\n2. Moved some of the proofs from Section 3 to the appendix.", "Thank you for a careful and detailed review. \n\nThe general idea of using low degree polynomials for depth separation is well-used in the previous work. We would like to make one remark about the connection with Daniely (2017). While we have to place the extra restriction of upper bounds on weights and type of nonlinearity, our proof is much simpler than Daniely’s (for example, we don’t use spherical harmonics) and this simplicity lends it flexibility which allows us to prove lower bounds for a general class of distributions. We agree that our proofs are simple modifications of previous techniques but we think that simplicity ought to be valued over complexity if it produces interesting results. Please see our discussion of points raised by AnonReviewer 2 and 3. 
\n", "Thank you for a careful and detailed review. \n\nOur result does allow non-uniform distributions that are close to zero in certain regions. For example, Theorem 7 only requires the density function to be lower bounded by 1/poly(d) in a small ball of radius 1/poly(d); the density can be zero or close to zero anywhere outside this small ball. In response to AnonReviewer 2, we have pointed out that our proof also works for a stronger depth-2-vs-3 separation for L_2 approximation (instead of L_\\infty) under a general class of distributions. \n\nThank you for providing the Livni et al. (2014) reference. The low-degree polynomial approximation in Section 3 are known and follow easily from the previous work of Shalev-Shwartz et al. (2011), which is clearly cited by us as well as by Livni et al. (2014). Section 3 is only for completeness as previous work we cited did not have the precise statements of lemmas used in our proof of Theorem 7. As you pointed out, our Proposition 4 is essentially Theorem 4 in Livni et al. (2014), so we will correct that. Our Proposition 5 is its straightforward extension to higher depths.\n\nAll depth-2-vs-3 separation results that we are aware of use poly(d)-Lipschitz functions. In dimension 1, any function with poly(d)-Lipschitz constant can be well-approximated by a depth-2 networks of size poly(d), ref. Debao (1993). We do not use sine in any crucial way; in fact, any function that is far from low-degree polynomials would do (as in Daniely). Thus, the class of functions for which the separation works is more general than what we get by using just sine. The recent progression of results by Eldan-Shamir (COLT’16), Safran-Shamir (ICML’17), Daniely (COLT’17) points to depth-width trade-offs for approximating “natural” functions under “natural” distributions as an important open problem. Safran-Shamir consider approximations of “natural” functions under carefully chosen distributions. Daniely considers uniform distribution on S^{d-1} x S^{d-1} as an instance of “natural” distribution. The definition of “natural” is debatable, so one would ideally like to prove such results for distributions as general as possible. To the best of our knowledge, our work is the first attempt in this direction. \n\nWe do not understand your point about “richness” of cosines vs. sigmoid neural networks. The fact that cos(Nx) cannot be well-approximated by 2-layer networks does not mean that the family of cosines is “richer”, if richness is taken to mean the ability to approximate a larger set of functions. For example, the class of single cosine functions (as opposed to, say, their linear combinations) cannot approximate step functions in a bounded interval, but 2-layer networks can come arbitrarily close to step functions. \n\nWe only claimed that there are functions which are well-approximable by depth-3 networks but not by depth-2 networks for a wide class of distributions. However, here is a construction to show that any sigmoid depth-2 network N (of size |N|) can be approximated by a sigmoid depth-3 network N’ of size |N|+d (d is the number of input coordinates). We do this by adding a layer of sigmoids between the inputs and N. For convenience, we will describe the construction using the closely related tanh gates instead (tanh(u) := 2\\sigma(u) -1). Using sigmoids requires some minor changes in the construction. Each new tanh acts as (approximate) identity. 
In a little more detail, for each input coordinate x_i, we add a new sigmoid (2/C) \\tanh(C x_i) where C is a small constant. It can be seen that (2/C)\\tanh(C x_i) \\approx x_i for all bounded x_i (by choosing constant C small enough one can make the approximation as close as one wishes for a given range of x_i). The output of this new layer is passed on to N. Since the new layer acts as approximate identity, the new network N’ approximates N. In the above construction, the only property of sigmoids (or tanh) that we used was that it is able to represent the identity function after applying a linear transformation (at least approximately). Thus the construction applies to networks that use different nonlinearities with this property.\n\nThe bound of d^2 on weights is not special. In general, given any bound B on the weights of the 2-layer network we can construct an L-Lipschitz function with L = d^3 B such that this function can be well-approximated by a small 3-layer network but any 2-layer network requires a large size for the type of distributions mentioned in the paper. \n", "Thank you for a careful and detailed review. \n\nWe completely agree with you that a negative result for L_2 approximation is stronger than for L_\\infty. Our technique indeed works for L_2 as mentioned in the remark after Theorem 7, which is our main result. We have updated our paper by adding Section 5 containing separation under L2 under a large class of distributions. \n\nWe strongly disagree with you that depth-2-vs-3 separation for L_2 approximation under general distributions is uninteresting or known. If you believe this is already known, please provide a reference. The recent progression of results by Eldan-Shamir (COLT’16), Safran-Shamir (ICML’17), Daniely (COLT’17) points to depth-width trade-offs for approximating “natural” functions under “natural” distributions as an important open problem. Safran-Shamir consider approximations of “natural” functions under carefully chosen distributions. Daniely considers uniform distribution on S^{d-1} x S^{d-1} as an instance of “natural” distribution. The definition of “natural” is debatable, so one would ideally like to prove such results for distributions as general as possible. To the best of our knowledge, our work is the first attempt in this direction. \n\nWe agree with you that our techniques are simple modifications of existing ideas, however, simplicity ought to be valued over complicated proofs if it leads to interesting results. We find it interesting that a simple proof yields depth-2-vs-3 separation of sigmoid networks for L_2 approximation under a general class of distributions. We cite Shalev-Shwartz et al. (2011) and others clearly, since the low-degree polynomial approximations in Section 3 are known and follow easily from their work. We will add similar results from Goel et al. (2017) too. Section 3 exists only for completeness as the previous work does not contain the precise statements of lemmas required in our proof. However, our main result on depth-width trade-off (Theorem 7) is in Section 4 (and in the updated version, also in Section 5). Depth separation and learnability are related but different problems, and the results in Zhang et al. (2014) and Goel et al. (2017) have no bearing on our Section 4 as far as we can see.\n\nThank you for pointing out other minor errors such as being specific about “small enough \\delta” in Theorem 7. We will re-write them with precise bounds for \\delta, d, n, B etc. In particular, \\delta < 1/3 works. 
\n" ]
[ 6, 5, 3, -1, -1, -1, -1 ]
[ 4, 4, 5, -1, -1, -1, -1 ]
[ "iclr_2018_SJICXeWAb", "iclr_2018_SJICXeWAb", "iclr_2018_SJICXeWAb", "iclr_2018_SJICXeWAb", "rkb3kqxxf", "ry5vY1teG", "rJ7xdwpeM" ]
iclr_2018_rJR2ylbRb
Spectral Graph Wavelets for Structural Role Similarity in Networks
Nodes residing in different parts of a graph can have similar structural roles within their local network topology. The identification of such roles provides key insight into the organization of networks and can also be used to inform machine learning on graphs. However, learning structural representations of nodes is a challenging unsupervised-learning task, which typically involves manually specifying and tailoring topological features for each node. Here we develop GraphWave, a method that represents each node’s local network neighborhood via a low-dimensional embedding by leveraging spectral graph wavelet diffusion patterns. We prove that nodes with similar local network neighborhoods will have similar GraphWave embeddings even though these nodes may reside in very different parts of the network. Our method scales linearly with the number of edges and does not require any hand-tailoring of topological features. We evaluate performance on both synthetic and real-world datasets, obtaining improvements of up to 71% over state-of-the-art baselines.
rejected-papers
The reviewers present strong concerns about the lack of novelty in the paper. Further, there are strong concerns about how the experiments are conducted. I recommend that the authors carefully go through the reviews.
train
[ "BJnpN5cgf", "rk9wKvM-z", "SJqU9kAWf", "SkYmLUkGz", "HygUoe6Wz", "SJq3YgabG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "The term \"structural equivalence\" is used incorrectly in the paper. From sociology, two nodes with the same position are in an equivalence relation. An equivalence, Q, is any relation that satisfies these three conditions:\n - Transitivity: (a,b), (b,c) ∈ Q ⇒ (a,c) ∈ Q\n - Symmetry: (a, b) ∈ Q if and only if (b, a) ∈ Q\n - Reflexivity: (a, a) ∈ Q\n\nThere are three deterministic equivalences: structural, automorphic, and regular.\n\nFrom Lorrain & White (1971), two nodes u and v are structurally equivalent if they have the same relationships to all other nodes. Exact structural equivalence is rare in real-world networks.\n\nFrom Borgatti et al. (1992) and Sparrow (1993), two nodes u and v are automorphically equivalent if all the nodes can be relabeled to form an isomorphic graph with the labels of u and v interchanged.\n\nFrom Everett & Borgatti (1992), two nodes u and v are regularly equivalent if they are equally related to equivalent others.\n\nParts of this statement are false: \"A notable example of such approaches is RolX (Henderson et al., 2012), which aims to recover a soft-clustering of nodes into a predetermined number of K distinct roles using recursive feature extraction (Henderson et al., 2011).\" RolX (as described in the KDD 2012 paper) uses MDL to automatically determine the number of roles.\n\nAs indicated above, this statement is also false: \"We note that RolX requires the number of desired structural classes as input, ...\".\n\nThe paper does not discuss how the free parameter d (which represents the number of evenly spaced sample points) is chosen. \n\nThis statement is misleading: \"In particular, a small perturbation of the graph yields small perturbations of the eigenvalues.\" What is considered a small perturbation? One can delete an edge (seemingly a small perturbation) and change the eigenvalues of the Laplacian dramatically -- e.g., deleting an edge that increases the number of connected components.\n\nThe barbell graph experiment seemed contrived. Why would one expect such a graph to have 8 classes? Why not 3? One for cliques, one for the chain, and one for connectors of the clique to the chain.\n\nIn Section 4.2, how many roles were selected for RolX?\n\nThe paper states: \"Furthermore, nodes from different graphs can be embedded into the same space and their structural roles can be compared across different graphs.\" Experiments were not conducted to see how competing approaches such as RolX compare with GraphWave on transfer learning tasks.\n\nGilpin et al. (KDD 2013) extended RolX to incorporate sparsity and diversity constraints on the role space and showed that their approach is superior to RolX on measuring distances. This is applicable to the experiments in Figure 4.\n\nI strongly recommend running experiments that test the predictive power of the roles found by GraphWave.\n", "The paper derives a way to compare nodes in a graph based on wavelet analysis of the graph Laplacian. The method is correct, but it is not clear whether the method can match the performance of state-of-the-art methods such as the graph convolutional neural network of Duvenaud et al. and Structure2Vec of Dai et al. on large-scale datasets. \n1. Convolutional Networks on Graphs for Learning Molecular Fingerprints. D Duvenaud et al., NIPS 2015. \n2. Discriminative embeddings of latent variable models for structured data. Dai et al. 
ICML 2016.\n\n", "The paper proposes a method for quantifying the similarity between the local neighborhoods of nodes in a graph/network.\n\nThere are many ways in which such a distance/similarity metric between nodes could be defined. For example, once could look at the induced subgraph G_i formed by the k-neighborhood of node i, and the induced subgraph G_j of the k-neighborhood of node j, and define the similarity as k(G_i,G_j) where k is any established graph kernel. Moreover, the task is unsupervised, which makes it hard to compare the performance of different methods. Most of the experiments in the paper seem a bit contrived.\n\nRegarding the algorithm, the question is: “sure, but why this way?”. The authors take the heat kernel matrix on the graph, treat each column as a probability distribution, compute its characteristic function, and define a distance between characteristics functions. This seems pretty arbitrary and heuristic. I also find it confusing that they refer to the heat kernel as wavelets. The spectral graph wavelets of Hammond et al is a beautiful construction, but, as far as I remember, it is explicitly emphasized that the wavelet generating function g must be continuous and satisfy g(0)=0. By setting g(\\lambda)=e^{-s \\lambda}, the authors just recover the diffusion/heat kernel of the graph. That’s not a wavelet. Why call this a “spectral graph wavelet” approach then? The heat kernel is much simpler. I find this misleading.\n\nI also feel that the mathematical results in the paper have little depth. Diffusion is an inherently local process. It is natural then that the diffusion matrix can be approximated by a polynomial in the Laplacian (in fact, it is sufficient to look at the power series of the matrix exponential). It is not surprising that the diffusion function captures some local properties of the graph (there are papers by Reid Andersen/ Fan Chung/ Kevin Lang, as well as by Mahoney, I believe on localized PCA in graphs following similar ideas). Again, there are many ways that this could be done. The particular way it is done in the paper is heuristic and not supported by either math or strong experiments.\n", "We thank the reviewer for his or her comments. We address them below. \n\n#1: “There are many ways in which such a distance could be defined [...].”\n\nWhile we agree with the reviewer that there are many sensible definitions of node similarity, we would like to note that our goal here was broader than plain similarity search. In particular, we aimed to define a structural signature, i.e., an embedding for each node, which requires O(N) memory, instead of O(N^2) for the kernel-based pairwise comparisons suggested by the reviewer. We note that learning embeddings for graphs is a very common problem in machine learning (Henderson et al., 2012; Grover et al., 2016; Ribeiro et al., 2017)\n\n#2: \"Moreover, the task is unsupervised, which makes it hard to compare the performance of different methods. Most of the experiments in the paper seem a bit contrived.\"\n\nWe respectfully disagree with this characterization of our unsupervised task. Specifically, we developed multiple synthetic experiments and two real-world case studies to quantitatively compare GraphWave with two state-of-the-art approaches for solving the same unsupervised problem (struc2vec and RolX). Our experiments built upon those from these recent papers (e.g., the Barbell graph was a direct adaptation of an experiment in Ribeiro et al., 2017). 
In addition, we developed experiments that evaluated GraphWave in a variety of more complex settings (see Sections 4.1 and 4.2 in the paper). Overall, we believe that these experiments sufficiently demonstrate the benefits of GraphWave. However, we would appreciate any additional feedback on specific examples of how to further improve our experiments.\n\n#3: Concerns about spectral graph wavelet transform (SGWT) definition.\n\nThe reviewer is correct in stating that the SGWT requires g(0)=0. However in Section 4.2 of their paper, Hammond et al., 2010 introduces a \"second class of waveforms,\" which they call \"spectral graph scaling functions.\" As Hammond et al., 2010 states, these waveforms are \"analogous to the lowpass residual scaling functions from classical wavelet analysis.[...] They will be determined by a single real valued function h : R+ → R, which acts as a lowpass filter, and satisfies h(0) > 0 and h(x) → 0 as x → ∞. \" As we mention in Section 2.1, the heat kernel in GraphWave is a function of this class, and as such, it falls under Hammond et al.’s general SGWT framework. Because our work directly builds on Hammond et al.’s definition, we use the term spectral graph wavelet, rather than “heat kernel”, even though either term would be appropriate. \n\n#4: ''sure but why this way?'' [...] Diffusion is a local process [...] There are many ways in which this could be done\". \n\nWe agree with the reviewer in that GraphWave relies on an inherently local diffusion process. However, comparing diffusions across nodes in the graph to recover structural similarities is a tricky problem. Without an a-priori-known one-to-one mapping between neighborhoods, we are not aware of a computationally tractable method for comparing diffusions localized in different parts of the graph. For this reason, we suggested considering these diffusions as distributions, thus making the signature permutation-invariant to the labeling of the nodes.\n\nWe thank the reviewer for taking the time to read our response, and we hope that he or she will consider our arguments and help us improve our methodology.\n", "\nWe thank the reviewer for the detailed comments and questions regarding our submission. Here, we try to clarify some details and address the reviewer’s concerns:\n\n#1: The term \"structural equivalence\" is used incorrectly in the paper. \n\nWe emphasize that we are using the same definition of structural equivalence as Lorrain and White, 1971. However, perfect structural equivalence, as the reviewer points out, is extremely rare in real-world networks. Therefore, instead of looking for nodes with exact equivalence, we instead recover a low-dimensional embedding, or a structural signature, to find structurally similar nodes. We note that this notion of structural similarity is a commonly-used term in network science (Airoldi et al., 2008; Hoff et al., 2008; Newman, 2011; Henderson et al., 2012; Grover et al., 2016; Ribeiro et at., 2017; etc).\n\n#2: RolX uses MDL to automatically determine the number of roles.\n\nWhile RolX algorithm requires a pre-determined number of clusters, the reviewer is correct in mentioning that the RolX authors do include a method of automatically selecting this number using MDL. We thank the reviewer for this comment and will update the manuscript to reflect this fact. 
We note however that in our experiments, as we point out in Section 4 of our paper, we used RolX as an oracle estimator (providing it with the “correct” number of classes, the best-case scenario for RolX). \n\n#3: The paper does not discuss how the parameter d is chosen. \n\nWe set d=100 in all experiments. This parameter corresponds to the number of sampling points along the characteristic parametric curves (example is shown in Figure 3C). We have not put any special effort to tune this parameter.\n\n#4: What is considered a small perturbation? One can delete an edge (seemingly a small perturbation) and change the eigenvalues of the Laplacian dramatically.\n\nAs correctly highlighted by the reviewer, a small perturbation cannot be defined simply through the Hamming distance between the original and perturbed adjacency matrices. In our paper, we use definition of a small perturbation as defined in Spielman, Spectral Graph Theory (Chapter 16), 2011. That is, a small perturbation of the k-hop neighborhood corresponds to a set of edge additions/deletions that have a small impact on the graph Laplacian L. As proved by Spielman and studied by Milanese et al., 2010, in this setting, the perturbation induced on the eigenspectrum and the eigenvectors of L is small. Thus, the difference between the original Laplacian L and the perturbed Laplacian \\tilde{L} is small as well (sup ||L^k -\\tilde{L}^k|| <eps). We thank the reviewer for pointing out that this definition was unclear, and we will make sure to clarify it in the revised paper.\n\n#5: Why would one expect the barbell graph to have 8 classes? Why not 3? One for cliques, one for the chain, and one for connectors of the clique to the chain.\n\nUsing the definition of structural equivalence as defined by Lorrain and White, 1970, the barbell graph has exactly 8 structurally equivalent classes (one corresponding to the nodes in the cliques, and the seven others comprising the nodes in the chain at a given distance level to the cliques). We note that GraphWave can recover all 8 classes, whereas RolX is only able to discover 3 classes, indicating that GraphWave can recover fine grain structural information.\n\n#6: In Section 4.2, how many roles were selected for RolX?\n\nPlease see our answer to Comment #2 above. We used RolX as an oracle estimator, providing it with the correct number of classes, the best-case scenario for RolX.\n\n#7: Experiments were not conducted on transfer learning tasks.\n\nWhile we mention transfer learning as a potential application of this work, we had originally left the formal analysis of such methods to future work, as we discussed in the conclusion. However, due to the reviewer’s comments we ran a new transfer learning experiment (see the APPENDIX in our response to Reviewer 1). These results show that GraphWave outperforms several state of the art methods for the transfer learning task.\n\n#8: We did not compare with Gilpin et al., 2013.\n\nWe thank the reviewer for pointing out the reference to Gilpin et al., 2013, and their method, GLRD. We were not aware of it and will add it to the related work. While the code for the method was not published online by the authors, we implemented a simplified version of their method ourselves (incorporating sparsity constraints). In additional experiments described in our response to Reviewer 1 (APPENDIX), GraphWave outperformed GLRD by 260% in homogeneity, 430% in completeness, and by 500% in silhouette score. 
\n", "We thank the reviewer for pointers to these papers, which we have carefully reviewed. However, we would like to explicitly point out that the methods developed in the mentioned papers have a different goal to GraphWave’s. In particular, both Molecular Fingerprints and Structure2Vec are solving a “graph-level embedding” problem, which converts an entire graph into a single low-dimensional vector. In contrast, GraphWave is solving the “node-level embedding” problem, where it generates a low-dimensional vector for each node based on node’s structural role in the graph -- which is why we had not originally intended comparing GraphWave to these methods. \n\nHowever, in certain settings, both Molecular Fingerprints and Structure2Vec can be compared with GraphWave. Specifically, these two methods yield node embeddings as a by-product of their algorithm, but only in supervised settings across multiple graphs where the graphs have “ground truth” labels. Following the reviewer’s suggestion, we developed an additional experiment to compare GraphWave with these methods in a specific supervised setting (see APPENDIX below). We note that GraphWave is much more general, and can yield node embeddings on a single graph, or across multiple unlabeled graphs, something that Molecular Fingerprint and Structure2Vec are unable to do. As shown in the experiments (APPENDIX, results), GraphWave outperformed Molecular Fingerprints by 37% in homogeneity score, 11% in completeness, and 890% in silhouette score. Additionally, GraphWave outperformed Structure2Vec by 4% in homogeneity, though Structure2Vec had a 7% higher completeness and 54% higher silhouette score. \n\nOverall, GraphWave outperforms the state-of-the-art in unsupervised settings (see the experiments in the paper) and yields very strong performance in supervised settings, even when compared against supervised methods (as shown in the APPENDIX here). \n\nTo address these comments by the reviewer, we will add both references to the related work section.\n\n------------------------------------------------\nAPPENDIX: Additional experiments.\n\n*Experimental setup.* \nThe goal of these experiments is to assess the predictive power of embeddings. That is, we analyze how well we can recognize structural similarity of nodes across different graphs. Note that the setup is a slight adaptation of the experiments in the paper. This was required in order to work across multiple graphs --- which was necessary to evaluate Molecular Fingerprints and Structure2Vec methods --- rather than within a single graph. \n\nIn particular, we generate 200 graphs, with ground truth labels corresponding to the true structural classes of each node. Each graph was generated as follows: \nWe generate its basis (a cycle, as in Figure 3A) of different (random) length.\nWe plant a random number of different shapes (houses, fans or stars, as shown in Figure 3A) on this cycle. Our experiment is set up so that with 60% probability, the graph only comprises one type of shape repeated multiple times (20% house, 20% fan, 20% stars), and with 40% chance, the graph comprises all of these shapes in varied numbers.\nWe have fixed a priori the scale in GraphWave to s=3. We trained Neural Fingerprints and Structure2Vec by providing each graph with a label (1: house, 2: fan, 3: star, 4: varied). We note that in this setting, the graph labels highly correlated with the structural roles of the nodes. 
This gives the supervised methods (Molecular Fingerprints and Structure2Vec) an advantage over the unsupervised GraphWave approach. This is necessary because without these labels, the supervised methods cannot be applied.\n\nWe run each algorithm, then fit k-means on the embeddings of the first 150 graphs to try to recover the 15 different structural roles of this experiments. We evaluate the performance of the clustering on the remaining 50 graphs in the test set. \n\n*Results.* \nResults are shown in the following table.\n\nMethod\t\t\t\t\t | Homogeneity | Completeness | Silhouette\n-----------------------------------------------------------------------------------------------------------------------------\nRolX (Henderson et al., 2012)\t\t\t 0.688\t\t 0.352\t\t 0.466\nGLRD (Gilpin et al., 2013)\t\t\t\t 0.329\t\t 0.175\t\t 0.101\nStructure2Vec (Dai et al., 2016)\t\t\t 0.825\t\t 0.811\t\t 0.890\nMolecular Fingerprints (Duvenaud et al., 2015) 0.626\t 0.681\t\t 0.065\nGraphWave (this paper)\t\t\t\t 0.860\t\t 0.756\t\t 0.579\n" ]
[ 5, 5, 3, -1, -1, -1 ]
[ 5, 4, 4, -1, -1, -1 ]
[ "iclr_2018_rJR2ylbRb", "iclr_2018_rJR2ylbRb", "iclr_2018_rJR2ylbRb", "SJqU9kAWf", "BJnpN5cgf", "rk9wKvM-z" ]
iclr_2018_r1CE9GWR-
Understanding GANs: the LQG Setting
Generative Adversarial Networks (GANs) have become a popular method to learn a probability model from data. Many GAN architectures with different optimization metrics have been introduced recently. Instead of proposing yet another architecture, this paper aims to provide an understanding of some of the basic issues surrounding GANs. First, we propose a natural way of specifying the loss function for GANs by drawing a connection with supervised learning. Second, we shed light on the statistical performance of GANs through the analysis of a simple LQG setting: the generator is linear, the loss function is quadratic and the data is drawn from a Gaussian distribution. We show that in this setting: 1) the optimal GAN solution converges to population Principal Component Analysis (PCA) as the number of training samples increases; 2) the number of samples required scales exponentially with the dimension of the data; 3) the number of samples scales almost linearly if the discriminator is constrained to be quadratic. Moreover, under this quadratic constraint on the discriminator, the optimal finite-sample GAN performs simply empirical PCA.
rejected-papers
While the reviewers agree that this is an important topic, there are numerous concerns about novelty, correctness and limitations.
test
[ "H1PuapUef", "HyW8p6vxM", "r1WP0c2bz", "B1cmO6LQM", "B1UKOpIQG", "BJG4hf8lz", "SJFExL7eM", "BJ7s1MTyz", "Sks3SeNJf", "BJxqtFwAW", "B1zBD53A-", "H1oRM3cCZ", "HJkAme4AZ" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "public", "author", "public", "author", "author", "public", "public" ]
[ "*Paper summary*\n\nThe paper considers GANs from a theoretical point of view. The authors approach GANs from the 2-Wasserstein point of view and provide several insights for a very specific setting. In my point of view, the main novel contribution of the paper is to notice the following fact:\n\n(*) It is well known that the 2-Wasserstein distance W2(PY,QY) between multivariate Gaussian PY and its empirical version QY scales as $n^{-2/d}$, i.e. converges very slow as the dimensionality of the space $d$ increases. In other words, QY is not such a good way to estimate PY in this setting. A somewhat better way is use a Gaussian distribution PZ with covariance matrix S computed as a sample covariance of QY. In this case W2(PY, PZ) scales as $\\sqrt{d/n}$.\n\nThe paper introduces this observation in a very strange way within the context of GANs. Moreover, I think the final conclusion of the paper (Eq. 19) has a mistake, which makes it hard to see why (*) has any relation to GANs at all.\n\nThere are several other results presented in the paper regarding relation between PCA and the 2-Wasserstein minimization for Gaussian distributions (Lemma 1 & Theorem 1). This is indeed an interesting point, however the proof is almost trivial and I am not sure if this provides any significant contribution for the future research.\n\nOverall, I think the paper contains several novel ideas, but its structure requires a *significant* rework and in the current form it is not ready for being published. \n\n*Detailed comments*\n\nIn the first part of the paper (Section 2) the authors propose to use the optimal transport distance Wc(PY, g(PX)) between the data distribution PY (or its empirical version QY) and the model as the objective for GAN optimization. This idea is not novel: WGAN [1] proposed (and successfully implemented) to minimize the particular case of W1 distance by going through the dual form, [2] proposed to approach any Wc using auto-encoder reformulation of the primal (and also shoed that [5] is doing exactly W2 minimization), and [3] proposed the same using Sinkhorn algorithm. So this point does not seem to be novel.\n\nThe rest of the paper only considers 2-Wasserstein distance with Gaussian PY and Gaussian g(PX) (which I will abbreviate with R), which looks like an extremely limited scenario (and certainly has almost no connection to the applications of GANs).\n\nSection 3 first establishes a relation between PCA and minimizing 2-Wasserstein distance for Gaussian distributions (Lemma 1, Theorem 1). Then the authors show that if R minimizes W2(PY, R) and QR minimizes W2(QY, QR) then the excess loss W2(PY, QR) - W2(PY, R) approaches zero at the rate $n^{-2/d}$ (both for linear and unconstrained generators). This result basically provides an upper bound showing that GANs need exponentially many samples to minimize W2 distance. I don't find these results novel, as they already appeared in [4] with a matching lower bound for the case of Gaussians (Theorem B.1 in Appendix can be modified easily to show this). As the authors note in the conclusion of Section 3, these results have little to do with GANs, as GANs are known to learn quite quickly (which contradicts the theory of Section 3).\n\nFinally, in Section 4 the authors approach the same W2 problem from its dual form and notice that for the LQG model the optimal discriminator is quadratic. Based on this they reformulate the W2 minimization for LQG as the constrained optimization with respect to p.d. matrix A (Eq 16). 
The same conclusion does not work unfortunately for W2(QY, R), which is the real training objective of GANs. Theorem 3 shows that nevertheless, if we still constrain the discriminator in the dual form of W2(QY, R) to be quadratic, the resulting solution QR* performs the empirical PCA of Pn. \n\nThis leads to the final conclusion of the paper, which I think contains a mistake. In Eq. 19, the first equation, according to the definitions of the authors, reads\n\\[\nW2(PY, QR) = W2(PY, PZ), (**)\n\\]\nwhere QR is trained to minimize min_R W2(QY, R) and PZ is as defined in (*) in the beginning of these notes. \nHowever, PZ is not the solution of min_R W2(QY, R) as the authors notice in the 2nd paragraph of page 8. Thus (**) is not true (at least, it is not proved in the current version of the text). PZ is a solution of min_R W2(QY, R) *where the discriminator is constrained to be quadratic*. This mismatch is especially strange, given the authors emphasize in the introduction that they provide bounds on divergences which are the same as used during the training (see 2nd paragraph on page 2) --- here the bound is on W2, but the empirical GAN actually does a regularized training (with constrained discriminator).\n\nFinally, I don't think the experiments provide any convincing insights, because the authors use W1-minimization to illustrate properties of the W2. Essentially the authors say \"we don't have a way to perform W2 minimization, so we rather do the W1 minimization and assume that these two are kind of similar\".\n\n* Other comments *\n(1) Discussion in Section 2.1 seems to never play a role in the paper.\n(2) Page 4: in p-Wasserstein distance, ||.|| does not need to be a Euclidean metric. It can be any metric.\n(3) Lemma 2 seems to repeat the result from (Canas and Rosasco, 2012) as later cited by the authors on page 7?\n(4) It is not obvious how Theorem 2 translates to the excess loss. \n(5) Section 4. I am wondering how exactly the authors are going to compute the conjugate of the discriminator, given that the discriminator most likely is a deep neural network?\n\n\n[1] Arjovsky et al., Wasserstein GAN, 2017\n[2] Bousquet et al., From optimal transport to generative modeling: the VEGAN cookbook, 2017\n[3] Genevay et al., Learning Generative Models with Sinkhorn Divergences, 2017\n[4] Arora et al., Generalization and equilibrium in GANs, 2017\n[5] Makhzani et al., Adversarial Autoencoders, 2015", "First of all, let me state this upfront: despite the sexy acronym \"GAN\" in the title, this paper does not provide any genuine understanding of GANs. Conceptually, GANs are an algorithmic instantiation of a classic idea in statistics, namely minimum-distance estimation, originally introduced by Jacob Wolfowitz in 1957 (*). This provides the 'min' part. The 'max' part comes from considering distances that can be expressed as a supremum over a class of test functions. Again, this is not new -- for instance, empirical risk minimization, in both supervised and unsupervised learning, can be phrased as precisely such a minimax problem by casting the convergence analysis in terms of suprema of suitable empirical processes (see, e.g., \"Empirical Processes in M-Estimation\" by Sara Van De Geer). Moreover, even the minimax (and, more broadly, game-theoretic) criteria go back all the way to the foundational papers of Abraham Wald.\n\nNow, the conceptual innovation of GANs is that this minimax formulation can be turned into a zero-sum game played by two algorithmic architectures, the generator and the discriminator. 
The generator proposes a model (which is assumed to be easy to sample from) and generates a sample starting from a fixed instrumental distribution; the discriminator evaluates the current proposal against a class of test functions, which, again, are assumed to be easily computable, e.g., by a neural net. One can also argue that the essence of GANs is precisely the architectural constraints on both the generator and the discriminator that make their respective problems amenable to 'differentiable' approaches, e.g., gradient descent/ascent with backpropagation. Without such a constraint, the saddle point is either trivial or reduces to finding a worst-case Bayes estimate, as classical statistical theory would predict.\n\nThis paper essentially strips away the essence of GANs and considers a stylized minimum-distance estimation problem, where both the target and the instrumental distributions are Gaussian, and the 'distance' between statistical models is the quadratic Wasserstein distance induced by the Euclidean norm. This, essentially, stacks the deck in favor of linear strategies, and it is not surprising at all that PCA emerges as the solution. It is very hard to see how any of this helps our understanding of either strengths or shortcomings of GANs (such as mode collapse or stability issues). Moreover, the discussion of supervised and unsupervised paradigms is utterly unconvincing, especially in light of the above comment on minimum-distance estimation underlying both of these paradigms. In either setting, a learning algorithm is obtained from the population version of the problem by substituting the empirical distribution of the observed data for the unknown population law.\n\nAdditional minor comments on proper attribution and novelty of results:\n\n1) Lemma 3 (structural result for optimal transport with L_2 Wasserstein cost) is not due to Chernozhukov et al., it is a classic result in the theory of optimal transportation, in various forms due to Brenier, McCann, and others -- cf., e.g., Chapters 2 and 3 of C. Villani, \"Topics in Optimal Transportation.\"\n\n2) The rate-distortion formulation with fixed input and output marginal in Appendix A, while interesting, is also not new. Precise characterizations in terms of optimal transport are available, see, e.g., N. Saldi, T. Linder, and S. Yuksel, \"Randomized Quantization and Source Coding With Constrained Output Distribution,\" IEEE Transactions on Information Theory, vol. 61, no. 1., pp. 91-106, January 2015.\n\n(*) The method of Wolfowitz is not restricted to distance functions in the mathematical sense; it can work equally well with monotone functions of metrics -- e.g., the square of a metric.", "\nSummary:\nThis paper studies GANs in the following LQG setting: Input data distribution (P_Y) is Gaussian with zero mean and Identity covariance. Loss function is quadratic. Generator is also considered to be a Gaussian distribution (linear function of the input Gaussian noise). The paper considers two settings for discriminator: 1)Unconstrained and 2) Quadratic function. For these settings, the paper studies the generalization error rates, or the gap between Wasserstein loss of the population version (P_Y) and the finite sample version (Q_n(Y)). The paper shows that when the discriminator is unconstrained, even though the generator is constrained to be linear, the convergence rates are exponentially slow in the dimension. However constraining the discriminator improves the rates to 1/\\sqrt{#samples}. 
This is shown by establishing the equivalence of this setting to PCA.\n\n\nComments:\n\n\n1) This paper studies the statistical aspects of GANs, essentially the sample complexity required for small generalization error, for the simpler LQG setting. The LQG setting reduces the GAN optimization to essentially PCA and I believe is too simple to give insights into more complex GANs. \n\n2)The results show that using high capacity neural network architectures can result in having solutions with high variance/generalization error. However, it is known even for classification that neural networks used in practice have high capacity (https://arxiv.org/abs/1611.03530) yet generalize well on *real* tasks. So, having slow worst case convergence may not necessarily be an issue with higher capacity GANs, and this paper does not address this issue with the results.\n\n3) The discussion on what is natural loss is very confusing and doesn't add to the results. While least squares loss is the simplest to study and generally offers good insights, I don't think it is either natural or the right loss to consider for GANs.\n\n4) Also the connection to supervised learning seems very weak. In supervised learning generally Y is smaller dimensional compared to X, and generalization of g(X) depends on its ability to compress X, but still represent Y. On the contrary, in GANs, X is much smaller dimensional than Y.", "Some responses to specific comments by reviewers:\n\n- Connection between supervised and unsupervised learning: This is a key step in our work. One reviewer seems to be saying that the connection is not new. However we cannot find a previous result on this. Can the reviewer give us a citation to a specific result? Another reviewer says that the connection is very weak because in a supervised setting typically the feature vector X is of a higher dimension than the target variable Y where in the GAN setting it is the reverse. We disagree with this reasoning. Take for example neural networks. They are invented in the supervised setting where the input X is typically of a higher dimension than the output Y. But when used as generators for GANs and also for autoencoders, the input is low dimensional and the output is high dimensional. So this already gives a hint of the connection between supervised and unsupervised learning. What we are doing in our paper is to make this connection explicit and at the problem formulation level rather than at the implementation level.\n\n- The generalization distance Eq. (19): There seems a misunderstanding by Reviewer 2 who believes Eq. (19) contains a mistake. Actually the derivation is indeed correct although we think that the presentation can be improved. For clarification, let us elaborate the derivation as below. First of all, we emphasize that this derivation is w.r.t. the case where the discriminator is constrained to be quadratic (perhaps the misunderstanding on this misled the reviewer.) Since k=d, the optimal solution g*(X) for the population GAN matches the distribution of the real data Y, which yields: W_2^2(P_Y, P_{g*(X)}) = 0. This together with Eq. (10) then gives:\nd_G(P_Y, Q_Y^n) = W_2^2(P_Y, P_{\\hat{g}(X)}).\nHere P_{\\hat{g}(X)} indicates the optimal generated distribution w.r.t. “the case where the discriminator is constrained to be quadratic”. Hence, as per the derivation in Eqs. (17) & (18), P_{\\hat{g}(X)} should be the same as the distribution generated by the sample covariance matrix (denoted by P_Z). This gives the one claimed in Eq. 
(19).\n\n- “Neural networks have high capacity but can still generalize on real data, so worst case convergence results may not be relevant,” : Note that the poor generalization of GANs studied in this (and also in the paper by Arora et al) is due to the use of the Wasserstein distance measure in the inference. It is an orthogonal issue to the fact that neural networks have high capacity.\n\n- Connection with N. Saldi, T. Linder, and S. Yuksel, \"Randomized Quantization and Source Coding With Constrained Output Distribution,\" Thanks to the reviewer for the reference, which we didn't know of and will add in the revision. Theorem 7 in the reference is the result which is closest to Theorem 4 in our paper, if we set mu= psi= P_Y. The function D(R) is the same as both cases. However, W_2^2(P_Y, Q_Y^n) (with Q_Y_n = {y_1,.....y_n} the data points randomly drawn from P_Y) in our paper is different from L_n(P_Y,P_Y, R) in the reference. In the language of randomized quantization, W_2^2(P_Y,Q_Y^n) is the minimum quantization error achieved with the constraint that each quantization point is equally likely under each realization of the random quantizer, while L_n(P_Y,P_Y,R) is the minimum quantization error achieved with the constraint that the overall distribution of the random quantizer's output matches P_Y. Since the latter constraint is looser, it can be seen that W_2(P_Y,Q_Y^n) \\ge L_n(P_Y,P_Y,R). But it is not obvious that asymptotically they are equal. That's what we showed in our submission. However, we find that to explain carefully this subtle but important difference will take us too far from the main thrust of the paper. So we have decided to remove the appendix and develop the material elsewhere.\n\n", "Thank you to the reviewers for the detailed comments. We will try to state clearly what we believe is the main contribution of our submission and then use it to answer the main questions of the reviewers. \n\nThe driving question of our paper is that when the real data is Gaussian, what should be the natural GAN optimization problem? Our answer is\n\nmin_G max_{A psd} \\sum_i y_i^t (I-A) y_i - yy_i^t (A^\\dagger - I)yy_i ----------------(1)\n\nHere y_i is the real data, yy_i = Gx_i is the fake data generated from the randomness x_i’s, and A^\\dagger is the pseudo-inverse of the matrix A. \n\nTo make this contribution clear, we added a discussion section 6 in the revision. This optimization problem is highlighted in Figure 4.\n\n1) Is this result novel? We believe so. We have never seen it in the literature.\n\n2) Is this GAN? We believe so. It is a game played between the generator and a discriminator. The objective is a differentiable function of both G and A and so it is a specific algorithmic instantiation of the general abstract minimum distance estimation problem, tailored to Gaussian data. \n\n3) Why are there no neural networks? They are not necessary for Gaussian data. The absence of neural networks is a consequence of our derivation, not an assumption.\n\n4) Is the result too simple? We do not believe so. Even though there are no neural networks, the GAN objective function in (1) is far from trivial. While it is natural to have linear generators for a Gaussian problem, the specific form of the quadratic discriminator is not obvious and derived through a principled approach. 
In fact, there is a general belief in the field that the simplest discriminator is linear, but we show that even for a simple model like Gaussian data, linear discriminators do not suffice.\n\n5) Is the Gaussian model relevant? Even though real world data is definitely more complex than Gaussian, it is a common and useful practice across many fields to understand a problem first in the Gaussian context (eg. linear regression, the Kalman filter). Then one can build on this result to extend to more general data distributions, introducing neural networks in the process (think about the evolution from linear regression to the perceptron to deep learning in supervised learning) The optimization problem (1) can serve as a useful concrete baseline for studying phenomena that would be too complex to first study in the general setting. For example, stability of training GANs, an important problem in general, can be studied first in the context of (1). If we do alternate gradient descent steps for G and for A, will we converge to the Nash equilibrium of the game? This is one among several follow up questions we are currently studying.\n", "Hi Anna, \n\nThank you for your comment. Below are some clarifications: \n\n1) Our main results show that without constraining the discriminator, quadratic GAN has a poor generalization even in the simple LQG setup. In this setup, we show that the proper constraint on the discriminator is the quadratic constraint which makes its convergence exponentially fast. Note that this constraint does not change the population limit compared to the unconstrained case (see Fig 1 in the paper). Indeed implementation of the quadratic GAN with quadratic discriminator is also straightforward. \n\n2) In this work, we have not analyzed/implemented quadratic GAN with neural network generators/discriminators. We believe that understanding GANs in the simple setup of the LQG paves the way to understand it (and properly implement it) in a more general setup. We agree with you that implementing the quadratic GAN with neural nets is more challenging than that of the WGAN (Arjovskey et al.) due in part to the computation of the convex conjugate. Note that if you use the gradient descent to implement this part, the extra running time should be comparable with the running time of optimizing the generator and the discriminator. Thus, even though the overall running time may be larger than that of current GAN implementations, it should be in the same order.\n\n\n", "Dear Authors,\n\nThanks for your reply. \n\n1) I agree that computing the convex conjugate of a quadratic function \\psi is easy (a very well known result indeed). However, this is cannot be used for implementation of a neural network. Also, one could directly use least squares if you want to stick to the linear setting.\n\n2) Regarding my implementation, I was solving for \\psi^* by gradient descent. What method did you try?\n\nHere are my final empirical conclusions:\n\nThe only practical way to calculate psi^* is to perform many steps of gradient descent for *every* single step of GAN training and this completely destroys the computational efficiency of neural network. Thus unfortunately this method is not practical by any measure. I wonder if this is the reason why this was not mentioned in \"Wasserstein GAN\" by Arjovsky et. al.", "Hi Anna,\n\nThank you for your interest in our work. 
\n\nImplementation of quadratic GAN (which has a convex discriminator) using deep neural nets (DNNs) is challenging due in part to the computation of the convex \nconjugate of the discriminator function \\psi, which is a DNN. We are currently working on this part and haven’t extensively tested it. \n\nHowever, implementation of the method which we show that has a fast generalization rate in the LQG setup (quadratic GAN with convex quadratic discriminator) is easy since there is a closed-form solution for the convex conjugate of a convex quadratic function \\psi.\n\nWe are interested to know how are you implementing the quadratic GAN as the implementation details can affect the convergence. \n\nBest,\nOn behalf of authors", "Hi\n\nAs suggested to by paper's main result (but not shown experimentally), I tried training GANs with squared W_2 loss on LSUN-Bedrooms dataset over the last few days. However, my neural networks were not converging even after heavily tweaking hyper parameters (like learning rate, batch size etc). \n\nAlso, I was not able to find any suggestions in the longer version of this paper (from arxiv). I was wondering if you have any suggestions which could help me?\n\nI tried using the following generator neural network models:\n 1) 4 relu hidden layers and 512 units (as in Arjovsky's paper).\n 2) 6 relu hidden layers and 512 units.\n 3) 8 relu hidden layers and 256 units.\n\nThanks\nAnna\n\n", "Just two points of clarification about our approach to the problem:\n\n1) There are two key aspects to GANs: i) the novel game theoretic learning architecture, ii) the use of neural networks as function classes over which the game theoretic objective is optimized. The complexity of understanding GANs stems from the combination of these two aspects. Our approach allows us to focus on aspect (i) by assuming a Gaussian data distribution. In this case, the class of linear generators is natural. Still it is not obvious what should be the class of discriminators for fast learning. Our result shows that one should use a class of quadratic discriminators to balance against linear generators. Thus our results are non-trivial and shed light on appropriate architectures for GANs even without bringing DNNs into the picture. \n\n2) We want to emphasize we are NOT showing that Least-Square under the Gaussian setting is PCA. We start with proposing a general formulation for GANs with a general loss function and a general data distribution. Then we show what happens when specializing this formulation to quadratic loss. The resulting problem is not least squares; it's minimizing the quadratic Wasserstein distance from the true distribution. The connection with PCA is also more subtle. While in the population limit the solution is PCA, we show that without constraints on the discriminator the solution is NOT empirical PCA when there are finite many samples. Only when we put the quadratic constraint on the discriminator do we get back empirical PCA. Again, our results point to the importance of an appropriate constraining of the discriminator.", "Response to the 4 comments:\n\n\"Gaussian assumption on data might be reasonable but to analyze GANs linear generators are not natural assumption (as its the non-linearity that makes them powerful)\"\n\nOur result showed the opposite. Without any constraints on the generator and allowing full nonlinearity, lemma 2 shows that the generalization ability of GAN is very poor, needing an exponential number of samples (exponential with the number of samples). 
With linear generators and quadratic discriminators, the generalization ability is much better, only linear number of samples (Theorem 3)\n\n\"Simplifying GANs to linear (control) system with feedback removes the very essence GANs!\"\n\nWe do not agree. As can be seen from the paper, there are lots of complexity in the problem even under the LQG setting. \n\n\"\"We start with proposing a general formulation for GANs with a general loss function and a general data distribution\" Wasn't this proposed by Goodfellow et. al. ? Am I missing something?\"\n\nNo the general formulation is not proposed by Goodfellow. For example, Wasserstein GANs are not covered by Goodfellow's formulation. Our formulation is a generalization of Wasserstein GAN's formulation. (See Section 2)\n\n\"Under the LGQ model, the paper solves : min_g E[||Y-g*X||^2] for gaussian X, Y and linear operator g. Since the error term Y-g*X is gaussian , least square solution (obtained in the paper) is the MLE estimator (and MMSE). \"\n\nNo. min_g E[||Y - g*X||^2] is the supervised learning problem. The GAN problem is unsupervised and is given by\n\nmin_g min_{P_X,Y} E [||Y - g*X||^2] \n\nNote that for the supervised learning problem, the empirical joint distribution p_X,Y is already given from the data. On the other hand, for the unsupervised learning problem, the joint distribution is not given and is part of the optimization. Hence a more complex problem. See Section 2 for more discussions. \n\n\n", "\"(i) by assuming a Gaussian data distribution. In this case, the class of linear generators is natural \" - \n Gaussian assumption on data might be reasonable but to analyze GANs linear generators are not natural assumption (as its the non-linearity that makes them powerful)\n\n(This) \"shed light on appropriate architectures for GANs even without bringing DNNs into the picture\"\n Simplifying GANs to linear (control) system with feedback removes the very essence GANs!\n\n\"We start with proposing a general formulation for GANs with a general loss function and a general data distribution\"\n Wasn't this proposed by Goodfellow et. al. ? Am I missing something?\n\n\"The resulting problem is not least squares; it's minimizing the quadratic Wasserstein distance from the true distribution\"\n Under the LGQ model, the paper solves : min_g E[||Y-g*X||^2] for gaussian X, Y and linear operator g. Since the error term Y-g*X is gaussian , least square solution (obtained in the paper) is the MLE estimator (and MMSE). ", "I came across this paper while searching for submissions on GAN theory. After going through this, my overall impression is that this paper doesn't analyze GANs but instead it trivially shows the solution to Least-Square under the Gaussian setting is PCA (a well known fact for more than 50 years).\n\nQuick summary: This paper considers a GAN setting with a linear generator under squared error loss with Gaussian features. It shows that under the above assumptions, the optimal solution of the GAN is nothing but the PCA solution. The paper also has some basic simulations supporting Theorem 3 (the statement in the previous para). However, the paper has following major weakness:\n\n1) One of the key drivers of NNs is the non-linearity between different layers. However, this paper restricts the generator to be linear, hence missing the key ingredient of NNs.\n\n2) The main result says that GANs under second order Wasserstein loss is equivalent to PCA. However, in practice it is known that GANS are not doing PCA. 
In fact, GANs are able to achieve results superior to PCA on many datasets of interest. Isn't this a clear mismatch between practical observations and the main conclusion of the paper itself? This raises questions about the suitability of the 'LQG' model itself.\n\n3) The theory section of the paper motivates the l_2 (2nd-order Wasserstein) loss over the l_1 loss. However, ironically, the simulations use the l_1 loss to justify the use of the l_2 loss! Did I miss anything here?\n\n4) The paper doesn't use the fact that the generator and discriminator are NNs themselves. Thus the paper has nothing to do with GANs as commonly understood by the community.\n\n" ]
[ 4, 4, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 5, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_r1CE9GWR-", "iclr_2018_r1CE9GWR-", "iclr_2018_r1CE9GWR-", "iclr_2018_r1CE9GWR-", "iclr_2018_r1CE9GWR-", "SJFExL7eM", "BJ7s1MTyz", "Sks3SeNJf", "iclr_2018_r1CE9GWR-", "HJkAme4AZ", "H1oRM3cCZ", "BJxqtFwAW", "iclr_2018_r1CE9GWR-" ]
iclr_2018_HyY0Ff-AZ
Representing Entropy : A short proof of the equivalence between soft Q-learning and policy gradients
Two main families of reinforcement learning algorithms, Q-learning and policy gradients, have recently been proven to be equivalent when using a softmax relaxation on one part, and an entropic regularization on the other. We relate this result to the well-known convex duality of Shannon entropy and the softmax function. Such a result is also known as the Donsker-Varadhan formula. This provides a short proof of the equivalence. We then interpret this duality further, and use ideas of convex analysis to prove a new policy inequality relative to soft Q-learning.
rejected-papers
The reviewers point out that this is a well-known result and is not novel.
train
[ "r1FjCQKxG", "rJ9BTHFez", "B1_rmNyZz", "S1-FOw67M", "Sk1y8waXf", "SyDO4PpXf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "This paper uses a well-known variational representation of the relative entropy (the so-called Donsker-Varadhan formula) to derive an expression for the Bellman error with entropy regularization in terms of a certain log-partition function. This is stated in Equation (13) in the paper. However, this precise representation of the Bellman error (with costs instead of rewards and with minimization instead of maximization) has been known in the literature on risk-sensitive control, see, e.g., P. D. Pra, L. Meneghini, and W. J. Runggaldier, “Connections between stochastic control and dynamic games,” Math. Control Signals Systems, vol. 9, pp. 303–326, 1996. The same applies to contraction results for the \"softmax\" Bellman operator -- these results are not novel at all, see, e.g., D. Hernandez-Hernandez and S. I. Marcus, “Risk sensitive control of Markov processes in countable state space,” Systems and Control Letters, vol. 29, pp. 147–155, 1996.\n\nAlso, there are some errors in the paper: for example, the functional of $\\pi(a|s)$ in Eq. (2) is concave, not convex, since the expression for the Shannon entropy in Eq. (3) has the wrong sign.", "Summary\n*******\nThe paper provides a collection of existing results in statistics.\n\nComments\n********\nPage 1: references to Q-learning and Policy-gradients look awkwardly recent, given that these have been around for several decades.\n\nI don't get what the novelty in this paper is. There is no doubt that all the tools that are detailed here are extremely useful and powerful results in mathematical statistics. But they are all known.\n\nThe Gibbs variational principle is folklore, Propositions 1 and 2 are available in all good textbooks on the topic, \nand Proposition 4 is nothing but a transportation Lemma.\nNow, Proposition 3 is about soft-Bellman operators. This perhaps is less standard because the contraction property of the soft-Bellman operator in the infinity norm is more recent than for Bellman operators.\nBut as mentioned by the authors, this is not new either. \nAlso I don't really see the point of providing the proofs of these results in the main material, and not, for instance, in an appendix, as there is no novelty either in the proof techniques.\n\nI don't get the sentence \"we have restricted so far the proof in the bandit setting\": bandits are not even mentioned earlier.\n\nDecision\n********\nI am sorry but unless I missed something (that then should be clarified) this seems to be an empty paper: Strong reject.", "Clarity: The paper is easy to follow and presents the equivalence quite well. \n\nOriginality: The results presented are well known and there is no clear algorithmic contribution to the field of RL. The originality comes from the conciseness of the proof and how it relates to other works outside ML. Thus, this contribution seems minor and out of the scope of the conference, which focuses on representation learning for ML and RL.\n\nSuggestion: I strongly suggest that the authors work on a more detailed proof for the RL case, explaining for instance the minimal conditions (on the reward, on the ergodicity of the MDP) in which the equivalence holds, and submit it to a more theoretically oriented conference such as COLT or NIPS. \n", "We thank the reviewer for their evaluation, and acknowledge it. However, we are not in full agreement with the specific concern voiced here - our shorter proof indeed uses fairly well-known statistical tools, for instance in the Legendre transform of the log-Laplace. 
But of the several papers highlighting the equivalence of soft Q-learning and entropy regularized policy gradients published this year (at least three, see for instance Nachum et al.'s https://arxiv.org/abs/1702.08892 or Schulman et al.'s https://arxiv.org/abs/1704.06440, or the anonymous submission https://openreview.net/pdf?id=HJjvxl-Cb), none to our knowledge used this representation formula that expedites the proof singularly. The technique gives very intuitively soft Q-learning as a Cramer-Chernoff transform, and could be applied to other regularizers ; furthermore, the paper highlights a connection with large deviations that could be helpful in future work, for instance by applying relevant changes of measure.\n\nThe sentence 'we have restricted so far the proof in the bandit setting' refers to applying the representation formula in the one-step return case for clarity ; this is terminology used for instance in Schulman et al.'s article https://arxiv.org/abs/1704.06440.\n\nWe do agree - and state in the abstract - that the proof of equivalence of soft Q-learning and entropy regularization is not a novelty of our article.", "We take due note of the fact that these duality results are already known in reinforcement learning - we were not aware of either 1996-dated paper, and want to respectfully thank the reviewer for their mention, and sharing their extensive knowledge of the field.\n\nThe sign typo regarding the J functional is a valid point that has been corrected.", "We respectfully acknowledge the reviewer's comments, and will indeed endeavour to take a more theoretical angle on minimal conditions for the proof in further work. This is an extremely helpful suggestion. We are thankful for comments on clarity/writing style.\n\nWe want to sincerely thank the reviewer for their insight and their time." ]
[ 5, 2, 5, -1, -1, -1 ]
[ 5, 5, 4, -1, -1, -1 ]
[ "iclr_2018_HyY0Ff-AZ", "iclr_2018_HyY0Ff-AZ", "iclr_2018_HyY0Ff-AZ", "rJ9BTHFez", "r1FjCQKxG", "B1_rmNyZz" ]
iclr_2018_BJcAWaeCW
Graph Topological Features via GAN
Inspired by the success of generative adversarial networks (GANs) in image domains, we introduce a novel hierarchical architecture for learning characteristic topological features from a single arbitrary input graph via GANs. The hierarchical architecture consisting of multiple GANs preserves both local and global topological features, and automatically partitions the input graph into representative stages for feature learning. The stages facilitate reconstruction and can be used as indicators of the importance of the associated topological structures. Experiments show that our method produces subgraphs retaining a wide range of topological features, even in early reconstruction stages. This paper contains original research on combining the use of GANs and graph topological analysis.
rejected-papers
The reviewers present strong concerns regarding the presentation of the paper. The approach appears overly complex, some design choices are not clear, and the experiments are not conducted properly. I recommend that the authors carefully go through the reviews.
train
[ "rycdDEtxz", "HJukFfcxf", "BypRYU5xM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The authors try to combine the power of GANs with hierarchical community structure detections. While the idea is sound, many design choices of the system is questionable. The problem is particularly aggravated by the poor presentation of the paper, creating countless confusions for readers. I do not recommend the acceptance of this draft.\n\nCompared with GAN, traditional graph analytics is model-specific and non-adaptive to training data. This is also the case for hierarchical community structures. By building the whole architecture on the Louvain method, the proposed method is by no means truly model-agnostic. In fact, if the layers are fine enough, a significant portion of the network structure will be captured by the sum-up module instead of the GAN modules, rendering the overall behavior dominated by the community detection algorithm. \n\nThe evaluation remains superficial with minimal quantitative comparisons. Treating degree distribution and clustering coefficient (appeared as cluster coefficient in draft) as global features is problematic. They are merely global average of local topological features which is incapable of capturing true long-distance structures in graphs. \n\nThe writing of the draft leaves much to be desired. The description of the architecture is confusing with design choices never clearly explained. Multiple concepts needs better introduction, including the very name of their model GTI and the idea of stage identification. Not to mention numerous grammatical errors, I suggest the authors seek professional English writing services.", "Quality: The work has too many gaps for the reader to fill in. The generator (reconstructed matrix) is supposed to generate a 0-1 matrix (adjacency matrix) and allow backpropagation of the gradients to the generator. I am not sure how this is achieved in this work. The matrix is not isomorphic invariant and the different clusters don’t share a common model. Even implicit models should be trained with some way to leverage graph isomorphisms and pattern similarities between clusters. How can such a limited technique be generalizing? There is no metric in the results showing how the model generalizes, it may be just overfitting the data.\n\nClarity: The paper organization needs work; there are also some missing pieces to put the NN training together. It is only in Section 2.3 that the nature of G_i^\\prime becomes clear, although it is used in Section 2.2. Equation (3) is rather vague for a mathematical equation. From what I understood from the text, equation (3) creates a binary matrix from the softmax output using an indicator function. If the output is binary, how can the gradients backpropagate? Is it backpropagating with a trick like the Gumbel-Softmax trick of Jang, Gu, and Poole 2017 or Bengio’s path derivative estimator? This is a key point not discussed in the manuscript. \nAnd if I misunderstood the sentence “turn re_G into a binary matrix” and the values are continuous, wouldn’t the discriminator have an easy time distinguishing the generated data from the real data. And wouldn’t the generator start working towards vanishing gradients in its quest to saturate the re_G output?\n\nOriginality: The work proposes an interesting approach: first cluster the network, then learning distinct GANs over each cluster. There are many such ideas now on ArXiv but it would be unfair to contrast this approach with unpublished work. There is no contribution in the GAN / neural network aspect. 
It is also unclear whether the model generalizes. I don’t think this is a good fit for ICLR.\n\nSignificance: Generating graphs is an important task in relational learning tasks, drug discovery, and in learning to generate new relationships from knowledge bases. The work itself, however, falls short of the goal. At best the generator seems to be working but I fear it is overfitting. The contribution for ICLR is rather minimal, unfortunately.\n\nMinor:\n\nGTI was not introduced before it is first mentioned in the intro.\n\nY. Bengio, N. Leonard, and A. Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv:1308.3432, 2013.\n\n", "The proposed approach, GTI, has many free parameters: number of layers L, number of communities in each layer, number of non-overlapping subgraphs M, number of nodes in each subgraph k, etc. No analysis is reported on how these affect the performance of GTI.\n\nGTI uses the Louvain hierarchical community detection method to identify the hierarchy in the graph and METIS to partition the communities. How important are these two methods to the success of GTI?\n\nWhy is it reasonable to restore a k-by-k adjacency matrix from the standard uniform distribution (as stated in Section 2.1)?\n\nWhy is the stride for the convolutional/deconvolutional layers set to 2 (as stated in Section 2.1)?\n\nEquation 1 has a symbol E in it. E is defined (in Section 2.2) to be \"all the inter-subgraph (community) edges identified by the Louvain method for each hierarchy.\" However, E can be intra-community because communities are partitioned by METIS. More discussion is needed about the role of edges in E. \n\nEquation 3 sparsifies (i.e. prunes the edges of) a graph -- namely $re_{G}$. However, it is not clear how one selects a $re^{i}_{G}$ from among the various i values. The symbol i is an index into $CV_{i}$, the cut-value of the i-th largest unique weight-value.\n\nWas the edge-importance reported in Section 2.3 checked against various measures of edge importance such as edge betweenness?\n\nTable 1 needs more discussion in terms of retained edge percentage for ordered stages. Should one expect a certain trend in these sequences?\n\nAlmost all of the experiments are qualitative and can be easily made quantitative by comparing PageRank or degree of nodes.\n\nThe discussion on graph sampling does not include how much of the graph was sampled. Thus, the comparisons in Tables 2 and 3 are not fair.\n\nThe most realistic graph generator is the BTER model. See http://www.sandia.gov/~tgkolda/bter_supplement/ and http://www.sandia.gov/~tgkolda/feastpack/doc_bter_match.html.\n\nA minor point: The acronym GTI is never defined." ]
[ 3, 4, 4 ]
[ 4, 4, 5 ]
[ "iclr_2018_BJcAWaeCW", "iclr_2018_BJcAWaeCW", "iclr_2018_BJcAWaeCW" ]
iclr_2018_HyEi7bWR-
Orthogonal Recurrent Neural Networks with Scaled Cayley Transform
Recurrent Neural Networks (RNNs) are designed to handle sequential data but suffer from vanishing or exploding gradients. Recent work on Unitary Recurrent Neural Networks (uRNNs) have been used to address this issue and in some cases, exceed the capabilities of Long Short-Term Memory networks (LSTMs). We propose a simpler and novel update scheme to maintain orthogonal recurrent weight matrices without using complex valued matrices. This is done by parametrizing with a skew-symmetric matrix using the Cayley transform. Such a parametrization is unable to represent matrices with negative one eigenvalues, but this limitation is overcome by scaling the recurrent weight matrix by a diagonal matrix consisting of ones and negative ones. The proposed training scheme involves a straightforward gradient calculation and update step. In several experiments, the proposed scaled Cayley orthogonal recurrent neural network (scoRNN) achieves superior results with fewer trainable parameters than other unitary RNNs.
rejected-papers
The authors use the Cayley transform representation of an orthogonal matrix to provide a parameterization of an RNN with orthogonal weights. The paper is clearly written and the formulation is simple and elegant. However, I share the concerns of reviewer 3 about the significance of another method for parameterizing orthogonal RNNs, as there has not been a lot of evidence that these have been useful on real problems (and indeed, on most of the toy tasks used to show the value of orthogonal RNNs, one can get good results just by orthogonal initialization, e.g. as in Henaff et al. as cited in this work). This work does not compare experimentally against many of the other methods, e.g. https://arxiv.org/pdf/1612.00188.pdf, the two Jing et al. works cited, simple projection methods (either full projections at each step or stochastic projections as in Henaff et al.). It does not cite or compare against the approach in https://arxiv.org/pdf/1607.04903.pdf.
train
[ "SkLk8W9lM", "rkQbrzqxM", "HyGpBPslM", "HkNx0LpXM", "SyFq28pQM", "HkhweWhGG", "HywrlZhzz", "HJGWlb2zf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author" ]
[ "This manuscript introduce a scheme for learning the recurrent parameter matrix in a neural network that uses the Cayley transform and a scaling weight matrix. This scheme leads to good performance on sequential data tasks and requires fewer parameters than other techniques\n\nComments:\n-- It’s not clear to me how D is determined for each test. Given the definition in Theorem 3.1 it seems like you would have to have some knowledge of how many eigenvalues in W you expect to be close to -1. \n-- For the copying and adding problem test cases, it might be useful to clarify or cite something clarifying that the failure mode RNNs run into with temporal ordering problems is an exploding gradient, rather than any other pathological training condition, just to make it clear why these experiments are relevant.\n-- The ylabel in Figure 1 is “Test Loss” which I didn’t see defined. Is this test loss the cross entropy? If so, I think it would be more effective to label the plot with that.\n-- The plots in figure 1 and 2 have different colors to represent the same set of techniques. I would suggest keeping a consistent color scheme\n-- It looks like in Figure 1 the scoRNN is outperformed by the uRNN in the long run in spite of the scoRNN convergence being smoother, which should be clarified.\n-- It looks like in Figure 2 the scoRNN is outperformed by the LSTM across the board, which should be clarified.\n-- How is test set accuracy defined in section 5.3? Classifying digits? Recreating digits? \n-- When discussing table 1, the manuscript mentions scoRNN and Restricted-capacity uRNN have similar performance for 16k parameters and then state that scoRNN has the best test accuracy at 96.2%. However, there is no example for restricted-capacity uRNN with 69k parameters to show that the performance of restricted-capacity uRNN doesn't also increase similarly with more parameters.\n-- Overall it’s unclear to me how to completely determine the benefit of this technique over the others because, for each of the tests, different techniques may have superior performance. For instance, LSTM performs best in 5.2 and in 5.3 for the MNIST test accuracy. scoRNN and Restricted-capacity uRNN perform similarly for permuted MNIST Test Accuracy in 5.3. Finally, scoRNN seems to far outperform the other techniques in table 2 on the TIMIT speech dataset. I don’t understand the significance of each test and why the relative performance of the techniques vary from one to the other.\n-- For example, the manuscript seems to be making the case that the scoRNN gradients are more stable than those of a uRNN, but all of the results are presented in terms of network accuracy and not gradient stability. You can sort of see that generally the convergence is more gradual for the scoRNN than the uRNN from the training graphs but it'd be nice if there was an actual comparison of the stability of the gradients during training (as in Figure 4 of the Arjovsky 2016 paper being compared to for instance) just to make it really clear.", "This paper suggests an RNN reparametrization of the recurrent weights with a skew-symmetric matrix using Cayley transform to keep the recurrent weight matrix orthogonal. They suggest that they reparametrization leads to superior performance compare to other forms of Unitary Recurrent Networks.\n\nI think the paper is well-written. 
The authors have discussed previous works adequately and provided enough insight and motivation about the proposed method.\n\nI have a few questions for the authors:\n\n1- What are the hyperparameters that you optimized in experiments?\n\n2- How sensitive is the results to the number of -1 in the diagonal matrix?\n\n3- Since the paper is not about compression, it might be unfair to limit the number of hidden units in LSTMs just to match the number of parameters to RNNs. In the MNIST experiment, for example, better numbers are reported for larger LSTMs. I think matching the number of hidden units could be helpful. Also, one might want to know if the scoRNN is still superior in the regime where the number of hidden units is about 1000. I would appreciate it if the authors could provide more results in these settings.\n\n", "The paper is clearly written, with a good coverage of previous relevant literature. \nThe contribution itself is slightly incremental, as several different parameterizations of orthogonal or almost-orthogonal weight matrices for RNNs have been introduced.\nTherefore, the paper must show that this new method performs better in some way compared with previous methods. They show that the proposed method is competitive on several datasets and a clear winner on one task: MSE on TIMIT.\n\nPros:\n1. New, relatively simple method for learning orthogonal weight matrices for RNN\n\n2. Clearly written\n\n3. Quite good results on several relevant tasks.\n\nCons:\n1. Technical novelty is somewhat limited\n\n2. Experiments do not evaluate run time, memory use, computational complexity, or stability. Therefore it is more difficult to make comparisons: perhaps restricted-capacity uRNN is 10 times faster than the proposed method?", "8.) “When discussing table 1, the manuscript mentions scoRNN and Restricted-capacity uRNN have similar performance for 16k parameters and then state that scoRNN has the best test accuracy at 96.2%. However, there is no example for restricted-capacity uRNN with 69k parameters to show that the performance of restricted-capacity uRNN doesn't also increase similarly with more parameters.”\n\nWe have completed MNIST experiments for the restricted-capacity uRNN with 69k parameters and have included the results in Section 5.3 in the most recent paper revision. This machine's performance is comparable to the n=360 scoRNN on unpermuted MNIST and slightly worse than the n=360 scoRNN on permuted MNIST. The runtime for a single epoch was 50 minutes, which was nearly 7 times slower than the n=360 scoRNN, which also had approximately 69k parameters. We have updated Appendix D to include this information.", "3.) “Since the paper is not about compression, it might be unfair to limit the number of hidden units in LSTMs just to match the number of parameters to RNNs.” \n\nAs noted in the previous comment, we have been running the n=1000 LSTM but it will not finish completely by the deadline. However, partial results indicate that the n=1000 LSTM will not improve over the n=256 or n=512 LSTM results. ", "Thank you for the comments. Please see below.\n1.) “It’s not clear to me how D is determined for each test.” \n- It is not known a priori the optimal number of negative ones that should be included in the scaling matrix. In this work, the percentage of negative ones in the diagonal matrix D is considered a hyperparameter and is tuned for each experiment. See Response to Reviewer 1 for details.\n\n2.) 
“For the copying and adding problem test cases,...make it clear why these experiments are relevant.”\n-This is a good point. We have clarified why these experiments are useful.\n\n3.) “The ylabel in Figure 1 is “Test Loss” which I didn’t see defined.”\n -The loss function is indeed cross-entropy. This was clarified in the updated submittal.\n\n4.) “The plots in figure 1 and 2 have different colors to represent the same set of techniques.”\n- The figures have been modified to reflect this in the updated submittal.\n\n5.) “It looks like in Figure 1 the scoRNN is outperformed by the uRNN in the long run in spite of the scoRNN convergence being smoother, which should be clarified.”\n -This has been noted in the updated submittal.\n\n6.) “It looks like in Figure 2 the scoRNN is outperformed by the LSTM across the board, which should be clarified.” \n-This has been emphasized more in the updated submittal.\n\n7.) “How is test set accuracy defined in section 5.3?”\n- For the MNIST experiment, the test set accuracy is the percentage of digits in the test set that were classified accurately. It has been clarified what is being tested in each experiment.\n\n8.) “When discussing table 1, the manuscript mentions scoRNN and Restricted-capacity uRNN have similar performance for 16k parameters and then state that scoRNN has the best test accuracy at 96.2%. However, there is no example for restricted-capacity uRNN with 69k parameters to show that the performance of restricted-capacity uRNN doesn't also increase similarly with more parameters.”\n -In order to match the same number of parameters of approx. 69k, the restricted-capacity uRNN will require a hidden size of 2,170 units. This is much larger than the hidden size of all the other models and we are not sure if we will be able to tune the machine in time but we are running a few experiments with this hidden size.\n \n\n9.) “Overall it’s unclear to me how to completely determine the benefit of this technique over the others...”\n-We agree that the results of each model on each task are not necessarily well-defined, and will add some description in Sections 5.1 and 5.2 to address this. Our intent was to show each model's performance in a variety of contexts where vanishing and exploding gradients occur. Although the scoRNN model does not always achieve the best results for each experiment, it is among the stronger performers and offers smoother convergence with smaller hidden states. Thus, the model is a competitive alternative to the LSTM model and the tested uRNNs.\n\n10.) “...stability of the gradients...”\n -We have carried out experiments to compare gradient and hidden state stability of the LSTM and scoRNN models, similar to the figure in the 2016 Arjovsky paper. Our results show the norm of scoRNN hidden state gradients staying near constant at 10^-4 over 500 timesteps, while the LSTM gradient norm decays to 0 after 200-300 timesteps. These results have been included in a figure in the revised submittal in Appendix C. We are having some difficulty in obtaining hidden state gradients from the full-capacity and restricted-capacity uRNNs, and will include these results if we are able.", "Thank you for the comments. Please see below.\n\n1.)\t“What are the hyperparameters that you optimized in experiments?”\n- For all methods, we tuned the optimizer type and learning rate. For scoRNN, we also tuned the percentage of negative 1's on the diagonal D. See No. 2 below.\n\n2.) 
“How sensitive is the results to the number of -1 in the diagonal matrix?”\n- We tuned the percentage of -1s on the diagonal matrix first by multiples of 25% (i.e. 0, 25%, 50%, 75%, 100%) and then by multiples of 5% or 10%. Tuning by multiplies of 5% and 10% did not usually affect the results significantly with very small differences when doing so.\n\n3.) “Since the paper is not about compression, it might be unfair to limit the number of hidden units in LSTMs just to match the number of parameters to RNNs.” \n-We have increased the number of hidden units for the scoRNN and LSTM models on the MNIST experiment to 512, and will include the new results in Section 5.3 in the revised submittal. The results indicate little to no improvements for both of these models from the paper results and we suspect that increasing the hidden sizes further to 1000 will have no significant improvement. We have also attempted to run LSTM with n=1000 but this turns out to be so slow in our system and uses up the entire GPU memory that we would not be able to tune the hyperparameters. If we get results in time, we will include them in the paper.\n", "Thank you for the comments. Please see below.\n\n1.) “Technical novelty is somewhat limited.”\n- We believe that although there are several orthogonal RNNs, the scoRNN architecture has a new and much simpler scheme and is numerically more stable by maintaining orthogonality in the recurrent matrix. \n\n2.) “Experiments do not evaluate run time, memory use, computational complexity, or stability.”\n-We have carried out additional experiments to examine run time and the following results will be included in the revision. The Full-Capacity uRNN and Restricted-Capacity uRNNs codes tested are from the Wisdom et al. 2016 paper and the LSTM is based on the builtin model in Tensorflow. The results indicate that scoRNN is faster than the Full-Capacity uRNN and Restricted-Capacity uRNNs but slightly slower than Tensorflow's builtin implementation of LSTM. We believe that the LSTM is faster because it is a builtin model that has been optimized within Tensorflow; there is virtually no change in runtime from hidden sizes 128 to 512. Please see the table below for the unpermuted MNIST experiment. \n\n\nModel: Hidden Size: Approx. # Parameters Time per Epoch(minutes):\nscoRNN 170 16k 5.3\nrestr.-uRNN 512 16k 8.2\nfull-uRNN 116 16k 10.8\nLSTM 128 68k 5.0\nscoRNN 360 69k 7.4\nscoRNN 512 137k 11.2\nfull-uRNN 360 137k 25.8\nLSTM 256 270k 5.2\nfull-uRNN 512 270k 27.9\nLSTM 512 1,058k 5.6\n \nFor memory usage and computational complexity, the scoRNN architecture is identical to a standard RNN except with the added storage for the n(n-1)/2 entries of the skew-symmetric matrix, and increased computational complexity from forming the recurrent weight matrix which is calculated once per training iteration. This computational cost is small compared to the cost of the forward and backward propagations through all time steps and for all examples in a training batch. Thus, scoRNN’s complexity is almost identical to a standard RNN. The basic RNNs should compare favorably with other methods in complexity although we couldn’t find a reference for this. \n\n As for stability, we assume the referee refers to the stability of the gradients with respect to hidden states during training (as in Figure 4 of the Arjovsky 2016) that referee 2 points out. 
We have carried out experiments to compare gradient stability of the LSTM and scoRNN models with results indicating the scoRNN model has significantly more stable gradients than LSTM. These results will be included in the revised submittal. We are having some difficulties in obtaining hidden state gradients from the full-capacity and restricted-capacity uRNNs codes. \n\nThese additions will be in two new appendices; stability will be addressed in Appendix C, and complexity & speed will be addressed in Appendix D." ]
[ 7, 6, 5, -1, -1, -1, -1, -1 ]
[ 3, 3, 4, -1, -1, -1, -1, -1 ]
[ "iclr_2018_HyEi7bWR-", "iclr_2018_HyEi7bWR-", "iclr_2018_HyEi7bWR-", "HkhweWhGG", "HywrlZhzz", "SkLk8W9lM", "rkQbrzqxM", "HyGpBPslM" ]
iclr_2018_B1EPYJ-C-
Federated Learning: Strategies for Improving Communication Efficiency
Federated Learning is a machine learning setting where the goal is to train a high-quality centralized model while training data remains distributed over a large number of clients each with unreliable and relatively slow network connections. We consider learning algorithms for this setting where on each round, each client independently computes an update to the current model based on its local data, and communicates this update to a central server, where the client-side updates are aggregated to compute a new global model. The typical clients in this setting are mobile phones, and communication efficiency is of the utmost importance. In this paper, we propose two ways to reduce the uplink communication costs: structured updates, where we directly learn an update from a restricted space parametrized using a smaller number of variables, e.g. either low-rank or a random mask; and sketched updates, where we learn a full model update and then compress it using a combination of quantization, random rotations, and subsampling before sending it to the server. Experiments on both convolutional and recurrent networks show that the proposed methods can reduce the communication cost by two orders of magnitude.
rejected-papers
The authors study the problem of reducing uplink communication costs in training an ML model where the training data is distributed over many clients. The reviewers consider the problem interesting, but have concerns about the extent of the novelty of the approach. As the reviewers and authors agree that the paper is an empirical study, and the authors agree that the novelty is in the problem studied and the combination of approaches used, a more thorough experimental analysis would benefit the paper.
train
[ "Hk_LRZ5gG", "BJhrvHcgf", "BkWhzt0ez", "ByrcIc-GM", "rkZHL9ZMz", "HkJcS5WGz", "H15iEq-MM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "This paper proposes several client-server neural network gradient update strategies aimed at reducing uplink usage while maintaining prediction performance. The main approaches fall into two categories: structured, where low-rank/sparse updates are learned, and sketched, where full updates are either sub-sampled or compressed before being sent to the central server. Experiments are based on the federated averaging algorithm. The work is valuable, but has room for improvement.\n\nThe paper is mainly an empirical comparison of several approaches, rather than a study of theoretically motivated algorithms. This is not a criticism; however, it is difficult to see the reason for including the structured low-rank experiments in the paper. As a reader, I found it difficult to understand the actual procedures used. For example, what is the difference between the random mask update and the subsampling update (why are there no random mask experiments after figure 1, even though they performed very well)? How is the structured update \"learned\"? It would be very helpful to include algorithms.\n\nIt seems like a good strategy is to subsample, perform Hadamard rotation, then quantise. For quantization, it appears that the HD rotation is essential for CIFAR, but less important for the Reddit data. It would be interesting to understand when HD works and why, and perhaps make the paper more focused on this winning strategy, rather than including the low-rank algo. \n\nIf convenient, could the authors comment on a similarly motivated paper under review at ICLR 2018:\nVARIANCE-BASED GRADIENT COMPRESSION FOR EFFICIENT DISTRIBUTED DEEP LEARNING\n\npros:\n\n- good use of intuition to guide algorithm choices\n- good compression with little loss of accuracy on best strategy\n- good problem for FA algorithm / well motivated\n\ncons:\n\n- some experiment choices do not appear well motivated / inclusion is not best choice\n- explanations of algos / lack of 'algorithms' adds to confusion\n\na useful reference:\n\nStrom, Nikko. \"Scalable distributed dnn training using commodity gpu cloud computing.\" Sixteenth Annual Conference of the International Speech Communication Association. 2015.\n\n", "\nThe authors examine several techniques that lead to low communication updates during distributed training in the context of Federated learning (FL). Under the setup of FL, it is assumed that training takes place over edge-device-like compute nodes that have access to subsets of data (potentially of different sizes), and each node can potentially be of different computational power. Most importantly, in the FL setup, communication is the bottleneck. E.g., a global model is to be trained by local updates that occur on mobile phones, and communication cost is high due to slow up-link.\n\nThe authors present techniques that are of similar flavor to quantized+sparsified updates. They divide their approaches into 1) structured updates and 2) sketched updates. For 1) they examine a low-rank version of distributed SGD where instead of communicating full-rank model updates, the updates are factored into two low rank components, and only one of them is optimized at each iteration, while the other can be randomly sampled.\nThey also examine random masking, e.g. a sparsification of the updates, that retains a random subset of the entries of the gradient update (e.g. by zeroing out a random subset of elements). 
This latter technique is similar to randomized coordinate descent.\n\nUnder the theme of sketched updates, they examine quantized and sparsified updates with the property that in expectation they are identical to the true updates. The authors specifically examine random subsampling (which is the same as random masking, with different weights) and probabilistic quantization, where each element of a gradient update is randomly quantized to b bits. \n\nThe major contribution of this paper is their experimental section, where the authors show the effects of training with structured, or sketched updates, in terms of reduced communication cost, and the effect on the training accuracy. They present experiments on several data sets, and observe that among all the techniques, random quantization can have a significant reduction of up to 32x in communication with minimal loss in accuracy.\n\nMy main concern about this paper is that although the presented techniques work well in practice, some of the algorithms tested are similar algorithms that have already been proven to work well in practice. For example, it is unclear how the performance of the presented quantization algorithms compares to say QSGD [1] and Terngrad [2]. Although the authors cite QSGD, they do not directly compare against it in experiments.\n\nAs a matter of fact, one of the issues of the presented quantized techniques (the fact that random rotations might be needed when the dynamic range of elements is large, or when the updates are nearly sparse) is easily resolved by algorithms like QSGD and Terngrad that respect (and promote) sparsity in the updates. \n\nA more minor comment is that it is unclear that averaging is the right way to combine locally trained models for nonconvex problems. Recently, it has been shown that averaging can be suboptimal for nonconvex problems, eg a better averaging scheme can be used in place [3]. However, I would not worry too much about that issue, as the same techniques presented in this paper apply to any weighted linear averaging algorithm.\n\nAnother minor comment: The legends in the figures are tiny, and really hard to read.\n\nOverall this paper examines interesting structured and randomized low communication updates for distributed FL, but lacks some important experimental comparisons.\n\n\n[1] QSGD: Communication-Optimal Stochastic Gradient Descent, with Applications to Training Neural Networks https://arxiv.org/abs/1610.02132\n[2] TernGrad: Ternary Gradients to Reduce Communication in Distributed Deep Learning\nhttps://arxiv.org/abs/1705.07878\n[3] Parallel SGD: When does averaging help? \nhttps://arxiv.org/abs/1606.07365\n\n", "This paper proposes a new learning method, called federated learning, to train a centralized model while training data remains distributed over a large number of clients each with unreliable and relatively slow network connections. Experiments on both convolutional and recurrent networks are used for evaluation. \n\nThe studied problem in this paper seems to be interesting, and with potential application in real settings like mobile phone-based learning. Furthermore, the paper is easy to read with good organization. \n\nHowever, there exist several major issues which are listed as follows:\n\nFirstly, in federated learning, each client independently computes an update to the current model based on its local data, and then communicates this update to a central server where the client-side updates are aggregated to compute a new global model. 
This learning procedure is heuristic, and there is no theoretical guarantee about the correctness (convergence) of this learning procedure. The authors do not provide any analysis about what can be learned from this learning procedure. \n\nSecondly, both structured update and sketched update methods adopted by this paper are some standard techniques which have been widely used in existing works. Hence, the novelty of this paper is limited. \n\nThirdly, experiments on larger datasets, such as ImageNet, will improve the convincingness. \n", "Thank you for your feedback, helping us see which parts are not communicated clearly enough. Please see also our response to all reviewers above.\n\nDifference between Random Mask and Subsampling - These are techniques presented in Sections 2 and 3, respectively. For Random Mask, we compute and apply the gradient only to pre-selected coordinates. For subsampling, we compute and apply gradients without constraint, and subsample at the end. If we were to run the local optimization for just a single gradient update, these updates/gradients would be identical; however, the subsequent gradients computed locally before communicating, would already be different as they would be computed in different points.\nWe did not continue with Random Mask experiments further, as it is not straightforward to use this jointly with the other techniques, such as structured random rotation. The most important gains were obtained as a combination of these multiple techniques. If we trained the Random Mask update in the rotated space, we would make the training procedure significantly more expensive, as applying the rotation would be necessary for every gradient computation. However, applying structured random rotation only once at the end is negligible compared to total cost of training.\nWe will make these points more clear in the submission.\n\nCIFAR vs. Reddit data\nWe don’t intend to emphasize the CIFAR data too much, as it is relatively small, and artificially partitioned by us to fit the setting of FL. The Reddit dataset comes with natural user partition and is much more reflective of actual application in practice. The HD rotation do actually improve performance significantly - this is perhaps more clearly visible in Figure 5 where we experiment with more clients per round, and can compress very aggressively - 1% subsampling and 1 bit quantization! We will stress the Reddit experiments more in final version.\n\nPointer to other ICLR submissions (Variance-based Gradient Compression for Efficient Distributed Deep Learning) - Note there is also another submission in similar spirit (Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training)\nIn both of these works, the central part of proposed techniques keeps track of compression/quantization error incurred in previous rounds, and adds this in the current round before applying compression/quantization. This is not applicable in the setting of Federated Learning, as we cannot remember such errors - think of the billions of eligible phones in the world, but selecting only thousands to participate in a given round.", "Thank you for your encouraging review. Below are remarks and responses to your highlighted concerns.\n\nYou remark that we achieve up to 32x reduction in communication. We would like to stress that we can achieve a lot more - with combining subsampling, rotations, and quantization without impacting convergence speed. 
See the extreme in Figure 5 where we subsample 1% of the elements and then quantize to 1bit (3200x on the compressed layers, although with a drop in performance).\n\n[See also response to all reviewers above for comparison with other methods]\nWe also experimented with various adaptive methods which overall provided slightly worse results, before we were aware of the mentioned works. Nevertheless, our very recent preliminary experiment suggests that performance of QSGD improves when we use it with the subsampling and structured random rotation proposed in our work, and is roughly on par with the experiments we present.\n\nSparsity: Note that if the updates are sparse, it is possible to use a sparse representation first, and then apply the presented techniques to compress list of nonzero values of the sparse representation. It is not ideal, but QSGD degrades in a similar way, as the gaps between non-zero values encoded using Elias coding are no longer necessarily small numbers, making the whole compression slightly weaker.\n\nWe agree that it should be possible to do better than averaging within the Federated Averaging of McMahan et al. However, this problem is clearly out of scope for this work, and probably worth a separate paper altogether.", "Thank you for your review, highlighting good motivation and organization of the work. Let us address the three specific issues you highlighted.\n\nThe remark on our proposed procedure being heuristic being an issue is in our opinion misplaced. \nThe learning procedure (Federated Averaging) is in the first place not the contribution of our paper - it was proposed in McMahan et al., it was shown to work well for large-scale problems and has been successfully deployed in production environment by Google (see McMahan and Ramage), and we build on top of it. This is in line with other optimization techniques for deep learning - they usually have very well understood parallels in the convex setting, but are not really understood in the landscape of deep learning - only empirically observed to typically still work. This procedure is also an extension of existing techniques which are properly analysed in the convex setting - see Ma et al. and Reddi et al. The central part of our contribution does have a proper theoretical justification - see Suresh et al.\n\nWhile individual building blocks have been used in various works, we are not aware of some of them being used in the context of reducing update size in deep learning. Please see also response to all reviewers above for why some of them are not practical in the standard data-center training.\n\nWe have tested our method on the large-scale Reddit dataset, which is highly representative of the types of problems suited to federated learning (unlike ImageNet). The CIFAR experiment can be seen as proof-of-concept but we had to artificially split the dataset into “clients”, and hence does not reflect the practical setting well. The same would be true for ImageNet. The Reddit dataset comes with natural user-based partitioning, and in terms of number of datapoints, is actually much larger than ImageNet.\n", "We would like to thank all reviewers for their feedback. The following is response relevant to all reviewers, and explains a particular point we will stress more clearly in the submission.\n\nIn the best technique we used (subsampling + rotation + quantization), the related recently proposed methods such as QSGD or TernGrad are an alternative to the quantization part, not for the whole procedure. 
If used separately, they yield a significantly weaker result. Note that in the results in the QSGD paper, the authors generally use more than 1 bit per element on average (see also Corollary 3.3, which promises ~2.8 bits per element asymptotically). In contrast, we reduced the communication to significantly below 1 bit per element.\n\nOur technique yields sparse objects whose sparsity pattern is independent of the objects we are trying to compress. This lets us communicate only the quantized values, and not the indices those values correspond to - those can be recovered from a shared random seed. Further, applying structured random rotation improves the performance of quantization. These are however more computationally expensive operations (especially rotation), which makes it impractical in the setting of the above-mentioned methods (MPI-based GPU-to-GPU communication on small minibatches). Nevertheless, this is a component that significantly improves our performance, and it could actually now become practical also in data-center training, together with the trend shifting to large-batch training (see for instance works on training ImageNet in 1 hour, 24 min, 15 mins...)" ]
[ 5, 7, 5, -1, -1, -1, -1 ]
[ 3, 5, 5, -1, -1, -1, -1 ]
[ "iclr_2018_B1EPYJ-C-", "iclr_2018_B1EPYJ-C-", "iclr_2018_B1EPYJ-C-", "Hk_LRZ5gG", "BJhrvHcgf", "BkWhzt0ez", "iclr_2018_B1EPYJ-C-" ]
iclr_2018_r1SuFjkRW
Discrete Sequential Prediction of Continuous Actions for Deep RL
It has long been assumed that high dimensional continuous control problems cannot be solved effectively by discretizing individual dimensions of the action space due to the exponentially large number of bins over which policies would have to be learned. In this paper, we draw inspiration from the recent success of sequence-to-sequence models for structured prediction problems to develop policies over discretized spaces. Central to this method is the realization that complex functions over high dimensional spaces can be modeled by neural networks that predict one dimension at a time. Specifically, we show how Q-values and policies over continuous spaces can be modeled using a next step prediction model over discretized dimensions. With this parameterization, it is possible to both leverage the compositional structure of action spaces during learning, as well as compute maxima over action spaces (approximately). On a simple example task we demonstrate empirically that our method can perform global search, which effectively gets around the local optimization issues that plague DDPG. We apply the technique to off-policy (Q-learning) methods and show that our method can achieve the state-of-the-art for off-policy methods on several continuous control tasks.
rejected-papers
The reviewers consider the paper to be promising, but raise issues with the increase in the complexity of the MDP caused by the authors' parameterization of the action space, and comparisons with earlier work (Pazis and Lagoudakis). While the authors cite this work, and say that they needed to make changes to PL to make it work in their setting (in addition to adding the deep networks), they do not explicitly show comparisons in the paper to any other discretization schemes.
train
[ "H13MV5tkM", "rkaH5MsgM", "S1m6NXjlM", "SJ0-Es9XG", "rkI8QoqXz", "S1fQmo5mz", "S15a-iqQG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "The paper presents Sequential Deep Q-Networks (SDQNs), which select actions from discretized high-dimensional action spaces. This is done by introducing another, undiscounted MDP in which each action dimension is chosen sequentially by an agent. By training a Q network to best choose these action dimensions, and loosely enforcing equality between the original and new MDPs at points where they are equivalent, the new MDP can be successfully navigated, resulting in good action selection for the original MDP. This is experimentally compared against DDPG in several domains. There are no theoretical results.\n\nThis work is correct and clearly written. Experiments do demonstrate improved effectiveness in the chosen domains, and the authors do a nice job of illustrating the range of performance by their approach (which has low variance in some domains, but high variance in others). Because of the clarity of the paper, the effectiveness of the approach, and the high quality experiments, I encourage acceptance.\n\nIt doesn't strike me as world-changing, however. The MDP-within-an-MDP approach is quite similar to the Pazis and Lagoudakis MDP decomposition for the same problem (work which is appropriately cited, but maybe too briefly compared against). In other words, it strikes me as merely being P&L plus networks, dampening my enthusiasm.\n\nMy one question for the authors is how much the order of action dimension selection matters. This seems probably quite important practically, but is undiscussed.", "Originality\n--------------\nWhen the action space is N-dimensional, computing argmax could be problematic. The paper proposes to address the problem by creating N MDPs with 1-D actions. \n\nClarity\n---------\n1) Explicitly writing down DDPG will be helpful\n2) The number of actions in each of the domains will also be useful\n\nQuality\n----------\n1) The paper reports experimental results on order of actions as well as binning, and the results confirm with what one would expect from intuition. \n2) It will be important to talk about the case when the action dimension N is very large, what happens in that case? Does the proposed method would work in such a scenario? A discussion is needed.\n3) Given that the ordering of actions does not matter, what is the real take away of looking at them as 'sequence' (which has not temporal structure because action order could be arbitrary)?\n\n\nSignificance\n----------------\nWhile the proposed method seems a reasonable approach to handle the argmax problem, it still requires training multiple networks for Q^i (i=1,..N) for Q^L, which is a limitation. Further, since the actions could be arbitrary, it is unclear where 'sequence' approach helps. These limit the understand and hence significance.\n", "The paper describes a new RL technique for high dimensional action spaces. It discretizes each dimension of the action space, but to avoid an exponential blowup, it selects the action for each dimension in sequence. This is an interesting approach. The paper reformulates the MDP with a high dimensional action space into an equivalent MDP with more time steps (one per dimension) that each selects the action in one dimension. This makes sense.\n\nWhile I do like very much the model, I am perplex about the training technique. The lower MDP is precisely the new proposed model with unidimensional actions and therefore it should be sufficient. However, the paper also describes an upper MDP that seems to be superfluous. 
The two MDPs are mathematically equivalent, but their Q-values are obtained differently (TD-0 for the upper MDP and Q-learning for the lower MDP) and yet the paper tries to minimize the Euclidean distance between them. This is really puzzling since the different training algorithms suggest that the Q-values should be different while minimizing the Euclidean distance between them tries to make them equal. The paper suggests that divergence occurs without the upper MDP. This is really suspicious. The approach feels like a band-aid solution to cover a problem that the authors could not identify. While the empirical results are good, I don't think the paper should be published until the authors figure out a principled way of training.\n\nThe proposed approach reformulates the MDP with high dimensional actions into an equivalent one with uni dimensional actions. There is a catch. This approach effectively hides the exponential action space into the state space which becomes exponential. Since u contains all the actions of the previous dimensions, we are effectively increasing the state space by an exponential factor. The paper should discuss this and explain what are the consequences in practice. In the end, the MDP does not become simpler.\n\nOverall, this is an interesting paper with a good idea, but the training technique is not mature enough for publication.", "We have updated the paper with the following:\n- added few sentences on the complexity of action space being shifted into the MDP\n- added equation for DDPG update\n- added more related work from Pazis and Lagoudakis\n- added action space dimensionality for each environment in experiments\n", "Thank you for your thoughtful review.\n\nWe were not actually aware of Pazis and Lagoudakis's work on this subject (we did cite them, but for another one of there papers, not the paper we believe you are referencing.). We have updated the text to include a section on this work. As per the differences, we are using neural network function approximators. Naively applying this decomposition increases the time dependences in the MDP, as such when using function approximators error accumulates. While attempting to train like this does work, it is incredibly unstable and hyperparameter sensitive. Our second contribution is thus a modified way of training these networks by training the hierarchy of MDP together -- using the upper to bootstrap the lower. This, unlike the original PL-like algorithm, is much more stable as it reduces function overestimation approximator error. With these improvements we are able to train on more complex tasks than originally explored.\n\nAs per your question on action ordering: we have an experiment (section 4.4). We found on that problem at least that there was little to no change in performance given different action orders.\n", "Thank you for your thoughtful review. We will try to address your concerns, as follows:\n\n# Clarity\nWe agreed on both fronts, and we have updated the text. \n\n# Quality\n2. Huge action dimensions would be an interesting application but are outside the scope of our focus: continuous control for robotics tasks. Theoretically, our algorithm scales linearly in terms of compute but exponentially in terms of the MDP (although in practice there is a lot of independence between actions which again makes it closer to linear scaling). We would expect that as N grows, learning the lower MDP will become harder and harder due to the increased temporal dependencies. 
For very large N, we would almost surely expect that one would need a logarithmic hierarchy or a technique similar to [1].\n3. As a baseline while developing this work (not included in this paper), we used an algorithm that was not sequence based. This algorithm predicted Q values independently for each action dimension. While the algorithm worked, the lack of action-to-action conditioning greatly restricted the functional form of our model and resulted in sub-par performance.\nThe sequence version allows these previously missing action-to-action interactions while keeping maximization tractable. By putting action dimensions in a sequence, we are able to easily condition results on the previous action dimensions. The fact that ordering does not matter is a good thing for us and allows this technique to work! It is possible to construct some set-based interaction scheme that has a similar ability to preserve conditioning, but we are not aware of any such constructions that support explicit maximization while retaining this action-to-action conditioning.\n\n# Significance\nWe do not see training multiple networks as a limitation as long as sample complexity does not suffer (as we have shown in regard to DDPG). In the robotics settings, the compute cost is often much, much smaller in comparison to the hardware / robot cost. The run time is no different than, say, running a single RNN over the action dimensions, and in terms of memory, these models are also quite small ~ order 0.1 - 1Mb per action dimension depending on network sizes. Additionally, not all of the components need to be separate. In tasks involving vision, for example, one could use a common feature extractor.\n\n[1] Dulac-Arnold, Gabriel, et al. \"Deep reinforcement learning in large discrete action spaces.\" arXiv preprint arXiv:1512.07679 (2015).\n", "Thank you for your thoughtful review. We will try to address your concerns below:\n\n# Hierarchy\nWe agree that, in theory, the lower MDP should be sufficient. In practice, as we pointed out in the paper, Q learning with (deep) function approximators is unstable and sensitive to hyperparameters. To our knowledge, this phenomenon is not thoroughly understood. There have been many papers, however, describing potential failure points, proposing a solution, and showing improvement -- examples include Double DQN [1], Dueling networks [2], all improvements from rainbow networks [3], and many more. We see our 2-layer training in a similar vein to these works. In particular, the training procedure described here is closely related to Double DQN in theory and implementation.\nThe issue we seek to address is the failure in the \"Bellman backup\" through time due to repeated function approximator error. When working with long MDPs, learning associations between states and action sequences has been shown to be hard [4]. [4] shows that this effect is so impactful that lowering the control frequency (increasing the frame-skip count) actually increased performance in some tasks. Additionally, in the policy gradient algorithms, increasing the frequency of states has been shown to increase gradient variance with the number of timesteps [5].\nInitially we explored just the lower MDP and achieved reasonable performance, but the resulting algorithm was incredibly sensitive to hyperparameters and quite unstable, partially due to Q value overestimation. Our hierarchy is a way to address this instability. 
It does so in a similar manner to that employed in Double DQN: the use of two networks to combat overestimation. Still, solving the root instability of Q-learning with function approximators is an open question and something that interests us greatly.\n\n# Exponential problems\nThank you for this observation. This is true and it is worth calling more attention to it, which we have done in the text (now updated). The exponential action space does turn into an exponential MDP. Luckily for us though, many problems do not actually require full search of this exponential space. Early in this work, we hypothesized that the space was largely independent between action dimensions. This fact is often exploited in policy gradient approaches as the policy distribution is often parameterized as a diagonal covariance normal [6, 7]. We tested this independence hypothesis in the Q learning settings (using a novel Q learning algorithm, not included in this paper, where the Q values were the sum of terms computed from each action dimension independently) and found that we were able to achieve reasonable performance, though not state of the art. In general, we don't expect to be able to search the full exponential space. Early in training, the interactions will mostly be linear / independent due to the nature of these neural networks at initialization. As training progresses, we do expect to be able to capture some interaction relationships. In our experiments, adding in this conditioning does increase the performance of the final algorithm.\n\n\n[1] Van Hasselt, Hado, Arthur Guez, and David Silver. \"Deep Reinforcement Learning with Double Q-Learning.\" AAAI. 2016.\n[2] Wang, Ziyu, et al. \"Dueling network architectures for deep reinforcement learning.\" arXiv preprint arXiv:1511.06581 (2015).\n[3] Hessel, Matteo, et al. \"Rainbow: Combining Improvements in Deep Reinforcement Learning.\" arXiv preprint arXiv:1710.02298 (2017).\n[4] Braylan, Alex, et al. \"Frame skip is a powerful parameter for learning to play atari.\" Space 1600 (2000): 1800.\n[5] Salimans, Tim, et al. \"Evolution strategies as a scalable alternative to reinforcement learning.\" arXiv preprint arXiv:1703.03864 (2017).\n[6] Schulman, John, et al. \"Trust region policy optimization.\" Proceedings of the 32nd International Conference on Machine Learning (ICML-15). 2015.\n[7] Mnih, Volodymyr, et al. \"Asynchronous methods for deep reinforcement learning.\" International Conference on Machine Learning. 2016.\n" ]
[ 7, 5, 4, -1, -1, -1, -1 ]
[ 5, 1, 5, -1, -1, -1, -1 ]
[ "iclr_2018_r1SuFjkRW", "iclr_2018_r1SuFjkRW", "iclr_2018_r1SuFjkRW", "iclr_2018_r1SuFjkRW", "H13MV5tkM", "rkaH5MsgM", "S1m6NXjlM" ]
iclr_2018_HyXBcYg0b
Residual Gated Graph ConvNets
Graph-structured data such as social networks, functional brain networks, gene regulatory networks, and communication networks have generated interest in generalizing deep learning techniques to graph domains. In this paper, we are interested in designing neural networks for graphs with variable length in order to solve learning problems such as vertex classification, graph classification, graph regression, and graph generative tasks. Most existing works have focused on recurrent neural networks (RNNs) to learn meaningful representations of graphs, and more recently new convolutional neural networks (ConvNets) have been introduced. In this work, we want to rigorously compare these two fundamental families of architectures to solve graph learning tasks. We review existing graph RNN and ConvNet architectures, and propose a natural extension of LSTMs and ConvNets to graphs with arbitrary size. Then, we design a set of analytically controlled experiments on two basic graph problems, i.e. subgraph matching and graph clustering, to test the different architectures. Numerical results show that the proposed graph ConvNets are 3-17% more accurate and 1.5-4x faster than graph RNNs. Graph ConvNets are also 36% more accurate than variational (non-learning) techniques. Finally, the most effective graph ConvNet architecture uses gated edges and residuality. Residuality plays an essential role in learning multi-layer architectures, as it provides a 10% gain of performance.
rejected-papers
The authors make an experimental study of the relative merits of RNN-type approaches and graph-neural-network approaches to solving node-labeling problems on graphs. They discuss various improvements in gnn constructions, such as residual connections. This is a borderline paper. On one hand, the reviewers feel that there is a place for this kind of empirical study, but on the other, there is agreement amongst the reviewers that the paper is not as well written as it could be. Furthermore, some reviewers are worried about the degree of novelty (of adding residual connections to X). I will recommend rejection, but urge the authors to clarify the writing and expand on the empirical study and resubmit.
train
[ "BJWd98x4z", "ryFFX8Yxf", "Sy7QPPYxM", "r1-tSfqeG", "HycHDznMM", "rJ_yvM3fz", "SyffDMnff", "HyshLz3fz" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "We would like to thank the referee for her/his time reviewing the revised paper and for improving her/his evaluation score. ", "The authors revised the paper according to all reviewers suggestions, I am satisfied with the current version.\n\nSummary: this works proposes to employ recurrent gated convnets to solve graph node labeling problems on arbitrary graphs. It build upon several previous works, successively introducing convolutional networks, gated edges convnets on graphs, and LSTMs on trees. The authors extend the tree LSTMs formulation to perform graph labeling on arbitrary graphs, merge convnets with residual connections and edge gating mechanisms. They apply the 2 proposed models to 3 baselines also based on graph neural networks on two problems: sub-graph matching (expressing the problem of sub-graph matching as a node classification problem), and semi supervised clustering. \n\nMain comments:\nIt would strengthen the paper to also compare all these network learning based approaches to variational ones. For instance, to a spectral clustering method for the semi supervised clustering, or\nsolving the combinatorial Dirichlet problem as in Grady: random walks for image segmentation, 2006.\n\nThe abstract and the conclusion should be revised, they are very vague.\n- The abstract should be self contained and should not contain citations.\n- The authors should clarify which problem they are dealing with.\n- instead of the \"numerical result show the performance of the new model\", give some numerical results here, otherwise, this sentence is useless.\n- we propose ... as propose -> unclear: what do you propose?\n \n\nMinor comments:\n- You should make sentences when using references with the author names format. Example: ... graph theory, Chung (1997) -> graph theory by Chung (1997)\n- As Eq 2 -> As the minimization of Eq 2 (same with eq 4)\n- Don't start sentences with And, or But\n\n", "The paper proposes an adaptation of existing Graph ConvNets and evaluates this formulation on a several existing benchmarks of the graph neural network community. In particular, a tree structured LSTM is taken and modified. The authors describe this as adapting it to general graphs, stacking, followed by adding edge gates and residuality.\n\nMy biggest concern is novelty, as the modifications are minor. In particular, the formulation can be seen in a different way. As I see it, instead of adapting Tree LSTMs to arbitary graphs, it can be seen as taking the original formulation by Scarselli and replacing the RNN by a gated version, i.e. adding the known LSTM gates (input, output, forget gate). This is a minor modification. Adding stacking and residuality are now standard operations in deep learning, and edge-gates have also already been introduced in the literature, as described in the paper.\n\nA second concern is the presentation of the paper, which can be confusing at some points. A major example is the mathematical description of the methods. When reading the description as given, one should actually infer that Graph ConvNets and Graph RNNs are the same thing, which can be seen by the fact that equations (1) and (6) are equivalent.\n\nAnother example, after (2), the important point to raise is the difference to classical (sequential) RNNs, namely the fact that the dependence graph of the model is not a DAG anymore, which introduces cyclic dependencies. \n\nGenerally, a clear introduction of the problem is also missing. 
What are the inputs, what are the outputs, what kind of problems should be solved? The update equations for the hidden states are given for all models, but how is the output calculated given the hidden states from variable numbers of nodes of an irregular graph?\n\nThe model has been evaluated on standard datasets with a performance which seems to be on par, or a slight edge, which could probably be due to the newly introduced residuality.\n\nA couple of details:\n\n- the length of a graph is not defined. The size of the set of nodes might be meant.\n\n- at the beginning of section 2.1 I do not understand the reference to word prediction and natural language processing. RNNs are not restricted to NLP and I think there is no need to introduce an application at this point.\n\n- It is unclear what the following sentence means: \"ConvNets are more pruned to deep networks than RNNs\".\n\n- What are \"heterogeneous graph domains\"?\n", "The paper proposes a new neural network model for learning graphs with arbitrary length, by extending previous models such as graph LSTM (Liang 2016), and graph ConvNets. There are several recent studies dealing with similar topics, using recurrent and/or convolutional architectures. The Related work part of this paper makes a good description of both topics. \n\nI would expect the paper to elaborate more (at least in a more explicit way) about the relationship between the two models (the proposed graph LSTM and the proposed Gated Graph ConvNets). The authors claim that the graph Residual ConvNets architecture is innovative, but the experiments and the model section do not clearly explain the merits of Gated Graph ConvNets over Graph LSTM. The presentation may raise some misunderstanding. A thorough analysis or explanation of the reasons why the ConvNet-like architecture is better than the RNN-like architecture would be interesting. \n\nIn the experiments section, they compare 5 different methods on two graph mining tasks. These two proposed neural network models seem to perform well empirically. \n\nIn my opinion, the two different graph neural network models are both suitable for learning graphs with arbitrary length, \nand both models are worth future studies for specific problems. ", "We are thankful to the reviewer for her/his comments and time. We hope our answers will clarify the importance of this work and the referee will be inclined to improve her/his evaluation score. \n\nQ: Compare learning based approaches to variational ones\nA: We solved the combinatorial Dirichlet problem with labeled and unlabelled data using [Grady’06, Random walks for image segmentation, Eq. 11, Section B]. The average accuracy (over 100 experiments) for this variational technique is 45.37% (we remind that only 1 label per class is used, and random choice is around 5-15%), while the performance of the best learning technique is 82%. Learning techniques produce better performances with a different paradigm as they use training data with ground truth, while variational techniques do not use such information. The downside is the need to see 2000 training graphs to get to 82%. However, when the training is done, the test complexity of these learning techniques is O(E), where E is the number of edges in the graph. This is an advantage over the variational Dirichlet model that solves a sparse linear system of equations with complexity O(E^1.5) [Lipton-Rose-Tarjan’79]. We thank the referee for this useful comment. We added this comment in the paper.
\n\nQ: Abstract, conclusion should be revised\nA: We revised the abstract and conclusion. \n\nQ: The authors should clarify which problem they are dealing with\nA: The general problem we want to solve is learning meaningful representations of graphs with variable length using either ConvNet or RNN architectures. These graph representations can be applied to different tasks such as vertex classification (in this paper for graph matching and graph clustering) and also graph classification, graph regression, graph visualization, graph generative model, etc. We added this comment in the paper.\n\nQ: Give some numerical results\nA: Here is the summary of the results:\n1. Sub-graph matching:\n (a) Accuracy of shallow graph NNs is 79% for RNNs and 67% for the proposed ConvNet.\n (b) Accuracy of deep graph NNs (L=10) is 87% for RNNs and 90% for the proposed ConvNet.\n2. Semi-supervised graph clustering:\n (a) Accuracy of shallow graph NNs is 69% for RNNs and 41% for the proposed ConvNet.\n (b) Accuracy of deep graph NNs (L=10) is 65% for RNNs and 82% for the proposed ConvNet.\n3. Computational times for graph RNNs is 1.5-4x slower than the proposed ConvNet.\nWe added these results in the abstract.\n\nQ: Minor comments\nA: Thank you. We revised the paper accordingly.\n", "We thank the reviewer for her/his time and comments. We provide below specific answers to the questions. We hope the reviewer will update positively her/his decision in view of our answers. \n\nQ: My biggest concern is novelty\nA: Several techniques for graph NNs have been published in the last two years. None of the existing works compare with rigorous numerical experiments which type of architectures (RNNs or ConvNets) should be used for graphs with variable length. The main contribution and novelty of this work is to answer this fundamental question, and give the reader the winning architecture. By running controlled numerical experiments on two basic graph analysis tasks, sub-graph matching and semi-supervised clustering, we reached the conclusion that ConvNets architectures should be used, and the best formulation of graph ConvNets uses edge gates and residuality. We believe such result to be important for future models in this domain (and also a bit controversial) as most graph NNs published in the literature focused on RNN architectures. \n\nQ: Adding stacking and residuality are now standard operations\nA: When we started this work, we doubted that stacking and residuality were helpful for the class of graphs with variable length. Graphs are different data than images: arbitrary graph structures are irregular (e.g. molecule graphs or gene networks), graph convolutional operations are not shift-invariant, and multi-scale structures depend on graph topology. Our original motivation was to numerically study the stacking and residuality properties for graph RNNs and ConvNets, and see if it would be useful. \nWe found out the important result that without residuality, *none* of the existing graph NNs can stack more than 2 layers. They simply do not work; they are not able to learn good representations to solve the matching and clustering tasks. Hence, although residuality is quite common in computer vision tasks, our experiments showed that this property is even *more* important for graphs than for images. Quantitatively, we got a boost by at least 10% of accuracy when we stacked more than 6 layers. 
So, it seemed to us and to other researchers to be a useful result for future research in this domain (that includes applications in chemistry, physics, neuroscience). \n\nQ: The model has been evaluated on standard datasets with a performance, which seems to be on par, or a slight edge, which could probably be due to the newly introduced residuality.\nA: Residuality plays indeed an essential role for graph learning tasks. Without residuality, the existing techniques such as Li-etal, Sukhbaatar-etal, Marcheggiani-Titov are far behind (more than 10% - they actually do not benefit much from multiple layers) than the proposed gated graph residual model. Note that in the experiments, we *did* upgrade the existing techniques with residuality. We could have simply reported the (lower) performances of the original methods, which would have been more impressive on the plots for our model but also not informative. \nThe proposed graph ConvNet actually offers a slight improvement compared to Sukhbaatar-etal and Marcheggiani-Titov *when* these models are upgraded with residuality. However, the paper is not an application paper (we do not claim any SOTA on any benchmark dataset), but rather an investigation paper where we want to convey the message that, after rigorous numerical experiments, graph ConvNet architectures should be preferred when we want to design deep learning techniques on arbitrary graphs such as for drugs design. \n\nQ: Graph ConvNets and Graph RNNs are the same thing, equations (1) and (6) are equivalent.\nA: We disagree with this comment - equations 1 and 6 are as different as standard RNNs are distinct from standard ConvNets. The purpose of the mathematical formulations 1 and 6 is to generalize standard ConvNets and RNNs not only to image domains but to arbitrary graph domains (1 and 6 reduce to original RNNs and ConvNets for regular grids). Figure 1 illustrates the fundamental difference between both graph architectures. \n\nQ: An introduction of the problem is missing. What kind of problems should be solved? What are the inputs, the outputs?\nA: The general problem we want to solve is learning meaningful representations of graphs with variable length using either ConvNet or RNN architectures. These graph representations can be applied to different tasks such as vertex classification (for graph matching and clustering in this work) and also graph classification, graph regression, graph visualization, and graph generative model. \nIn this work, inputs are graphs with variable size and outputs are vertex classification vectors of input graphs. We added this answer in the paper.", "Q: Taking original Scarselli and replacing the RNN by LSTM gates\nA: Yes, this is what we did and explained in Section 3. We do not claim any major contribution for graph LSTM. Our goal was to compare all graph RNN architectures (GRU and LSTM) vs. graph ConvNet architectures. As graph LSTM was not available in the literature, we simply used Scarselli-etal and Tai-etal to extend LSTM to arbitrary graphs. \n\nQ: A second concern is the presentation of the paper, which can be confusing at some points. \nA: We improved the abstract, conclusion and revised some parts of the paper in view of the reviewer’s questions.\n\nQ: After (2), the important point to raise is the fact that the dependence graph of the model is not a DAG anymore\nA: Agreed - we added this comment in the paper. 
\n\nQ: How is the output calculated given the hidden states from variable numbers of nodes of an irregular graph?\nA: The output is a simple fully connected layer from the convolutional graph features. We added this comment in the paper.\n\nQ: The length of a graph is not defined\nA: Beginning of Sections 4.1 and 4.2 explained how the graph size is designed for each experiment. For graph matching, the size varies randomly between 170 and 270 nodes, and for graph clustering the length is between 50 and 250. \n\nQ: At the beginning of section 2.1 I do not understand the reference to word prediction and NLP. \nA: Similar to the beginning of Section 2.2, the beginning of section 2.1 uses the most well-known example of RNN tasks (word prediction in NLP) and ConvNet task (feature extraction in computer vision) to define the notion of neighbourhood for these architectures. This is simply didactic - these examples are used as a first step to understand the extension of neighbourhood from regular grid (1D for NLP and 2D for computer vision) to arbitrary graphs (brain networks, social networks, etc) for RNNs and ConvNets. \n\nQ: ConvNets are more pruned to deep networks than RNNs\nA: It simply means that graph ConvNets performance (with residuality) scales better than graph RNNs (with residuality). \n\nQ: What are \"heterogeneous graph domains\"?\nA: Homogeneous graph domains refer to regular lattices and heterogeneous graph domains refer to graphs with complex variable structures like proteins, brain connectivity, gene regulatory network, etc. ", "We thank the reviewer for her/his time and valuable comments. We hope to clarify any misunderstanding below and show the importance of this work in the field of deep learning on graphs. \n\nQ: Relationship between the two models\nA: There is no direct relationship between the proposed graph LSTM and the proposed graph ConvNet. We simply wanted to compare the best possible graph RNN architectures vs. the best graph ConvNets to find out what type of graph NNs should be used when dealing with problems involving graphs of variable length. As graph LSTM was not available in the literature, we simply used Scarselli-etal and Tai-etal to extend LSTM to arbitrary graphs. For the proposed graph ConvNet, we merged Sukhbaatar-etal and Marcheggiani-Titov, and added residuality to define the most possible generic ConvNet architecture for arbitrary graphs. Then, we performed several numerical experiments on graph matching and graph clustering to reach the conclusion that graph ConvNets should be preferred over RNN models for the class of variable graphs (such as molecules in quantum chemistry, gene regulatory networks for genetic disorders and particle physics for jet constituents). \n \nQ: Experiments on the merits of Graph ConvNets over Graph LSTM\nA: The most important advantage of graph ConvNets over graph LSTM is the multi-scale property. Graph ConvNet architectures have a monotonous increase of performance/accuracy when the network gets deeper, unlike RNN architectures for which performance decreases for a large number of layers. This property is illustrated in both graph experiments, see Figures 3 and 5 middle row. This makes ConvNet architectures more robust w.r.t. network design than RNN systems: Hyper-parameters such as L (nb of layers) and T (nb of inner RNN iterations, Fig4) must be carefully selected for graph RNNs, unlike graph ConvNets. Besides, RNN architectures are 1.5-4x slower than ConvNets (right column Figs 3 and 5) and they converge slower, Fig6. 
\n\nQ: Analysis why the ConvNet-like architecture is better \nA: We do agree such analysis would be important and we would like to carry it out in a future work. However, it is a challenging analysis as the data domain does not hold a nice mathematical structure like Euclidean lattices for images. This will require time and new analysis tools to develop such theory (given also that the standard theory for regular grids/images is still open). \nIn the meantime, we hope the reviewer considers the rigorous numerical experiments - two fundamental graph experiments with controlled analytical settings (stochastic block models for the graph distributions) that offer a clear conclusion about graph ConvNets, which can be leveraged to build better NNs in the fast-emerging domain of deep learning on graphs. \n" ]
[ -1, 7, 3, 6, -1, -1, -1, -1 ]
[ -1, 4, 4, 3, -1, -1, -1, -1 ]
[ "HycHDznMM", "iclr_2018_HyXBcYg0b", "iclr_2018_HyXBcYg0b", "iclr_2018_HyXBcYg0b", "ryFFX8Yxf", "Sy7QPPYxM", "rJ_yvM3fz", "r1-tSfqeG" ]
iclr_2018_HyI5ro0pW
Neural Networks with Block Diagonal Inner Product Layers
Artificial neural networks have opened up a world of possibilities in data science and artificial intelligence, but neural networks are cumbersome tools that grow with the complexity of the learning problem. We make contributions to this issue by considering a modified version of the fully connected layer we call a block diagonal inner product layer. These modified layers have weight matrices that are block diagonal, turning a single fully connected layer into a set of densely connected neuron groups. This idea is a natural extension of group, or depthwise separable, convolutional layers applied to the fully connected layers. Block diagonal inner product layers can be achieved by either initializing a purely block diagonal weight matrix or by iteratively pruning off diagonal block entries. This method condenses network storage and speeds up the run time without significant adverse effect on the testing accuracy, thus offering a new approach to improve network computation efficiency.
rejected-papers
The authors propose a technique for weight pruning that leaves block diagonal weights, instead of unstructured sparse weights, leading to faster inference. However, the experiments demonstrating the quality of the pruned models are insufficient. The authors also discuss connections to random matrix theory; but these connections are not worked out in detail.
train
[ "rygiBN8Vz", "Bk9pmiuxM", "S1IUpXKgG", "H1hqfaKeM", "BkRhT_OMG", "HJabtddzG", "r1FWLOuzG", "HJWme931M", "r15TXzn1z" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "public" ]
[ "Thanks to the authors for responding (we have read it and taken it into account).", "This is a mostly experimental paper which evaluates the capabilities of neural networks with weight matrices that are block diagonal. The authors describe two methods to obtain this structure: (1) enforced during training, (2) enforced through regularization and pruning. As a second contribution, the authors show experimentally that the random matrix theory can provide a good model of the spectral behavior of the weight matrix when it is large. However, the authors only conjecture as to the potential of this method without describing clear ways of approaching this subject, which somewhat lessens the strength of their argument.\n\nQuality: this paper is of good quality\nClarity: this paper is clear, but would benefit from better figures and from tables to report the numerical results instead of inserting them into plain text.\nOriginality: this paper introduces block diagonal matrices to structure the weights of a neural network. The idea of structured matrices in this context is not new, but the diagonal block structure appears to be. \nSignificance: This paper is somewhat significant.\n\nPROS \n- A new approach to analyzing the behavior of weight matrices during learning\n- A new structure for weight matrices that provides good performance while reducing matrix storage requirements and speeding up forward and backward passes.\n\nCONS\n- Some of the figures are hard to read (in particular Fig 1 & 2 left) and would benefit from a better layout.\n- It would be valuable to see experiments on bigger datasets than only MNIST and CIFAR-10. \n- I understand that the main advantage of this method is the speedup; however, providing the final accuracy as a function of the nonzero entries for slower methods (e.g. the sparse pruning showed in Fig 1. a) would provide a more complete picture.\n\nMain questions:\n- Could you briefly comment on the training time in section 4.1? \n- Could you elaborate on the last sentence of section 4.1?\n- You state: \"singular values of an IP layer behave according to the MP distribution even after 1000s of training iterations.\" Is this a known fact, or something that you observed empirically? In practice, how large must the weight matrix be to observe this behavior?\n\nNitpicks:\n- I believe the term \"fully connected\" is more standard than \"inner product\" and would add clarity to the paper, but I may be mistaken. ", "The paper proposes to make the inner layers in a neural network be block diagonal, mainly as an alternative to pruning. The implementation of this seems straightforward, and can be done either via initialization or via pruning on the off-diagonals. There are a few ideas the paper discusses:\n\n(1) compared to pruning weight matrices and making them sparse, block diagonal matrices are more efficient since they utilize level 3 BLAS rather than sparse operations which have significant overhead and are not \"worth it\" until the matrix is extremely sparse. I think this case is well supported via their experiments, and I largely agree.\n\n(2) that therefore, block diagonal layers lead to more efficient networks. This point is murkier, because the paper doesn't discuss possible increases in *training time* (due to increased number of iterations) in much detail. At if we only care about running the net, then reducing the time from 0.4s to 0.2s doesn't seem to be that useful (maybe it is for real-time predictions? 
Please cite some work in that case)\n\n(3) to summarize points (1) and (2), block diagonal architectures are a nice alternative to pruned architectures, with similar accuracy, and more benefit to speed (mainly speed at run-time, or speed of a single iteration, not necessarily speed to train)\n\n[as I am not primarly a neural net researcher, I had always thought pruning was done to decrease over-fitting, not to increase computation speed, so this was a surprise to me; also note that the sparse matrix format can increase runtime if implemented as a sparse object, as demonstrated in this paper, but one could always pretend it is sparse, so you never ought to be slower with a sparse matrix]\n\n(4) there is some vague connection to random matrices, with some limited experiments that are consistent with this observation but far from establish it, and without any theoretical analysis (Martingale or Markov chain theory)\n\nThis is an experimental/methods paper that proposes a new algorithm, explained only in general details, and backs up it up with two reasonable experiments (that do a good job of convincing me of point (1) above). The authors seem to restrict themselves to convolutional networks in the first paragraph (and experiments) but don't discuss the implications or reasons of this assumption. The authors seem to understand the literature well, and not being an expert myself, I have the impression they are doing a fair job.\n\n\nThe paper could have gone farther experimentally (or theoretically) in my opinion. For example, with sparse and block diagonal matrices, reducing the size of the matrix to fit into the cache on the GPU must obviously make a difference, but this did not seem to be investigated. I was also wondering about when 2 or more layers are block sparse, do these blocks overlap? i.e., are they randomly permuted between layers so that the blocks mix? And even with a single block, does it matter what permutation you use? (or perhaps does it not matter due to the convolutional structure?)\n\nThe section on the variance of the weights is rather unclear mathematically, starting with the abstract and even continuing into the paper. We are talking about sample variance? What does DeltaVar mean in eq (2)? The Marchenko-Pastur theorem seemed to even be imprecise, since if y>1, then a < 0, implying that there is a nonzero chance that the positive semi-definite matrix XX' has a negative eigenvalue.\n\nI agree this relationship with random matrices could be interesting, but it seems too vague right now. Is there some central limit theorem explanation? Are you sure that you've run enough iterations to fully converge? (Fig 4 was still trending up for b1=64). Was it due to the convolutional net structure (you could test this)? Or, perhaps train a network on two datasets, one which is not learnable (iid random labels), and one which is very easily learnable (e.g., linearly separable). Would this affect the distributions?\n\nFurthermore, I think I misunderstood parts, because the scaling in MNIST and CIFAR was different and I didn't see why (for MNIST, it was proportional to block size, and for CIFAR it was independent of block size almost).\n\nMinor comment: last paragraph of 4.1, comparing with Sindhwani et al., was confusing to me. Why was this mentioned? And it doesn't seem to be comparable. I have no idea what \"Toeplitz (3)\" is.", "This paper proposes replacing fully connected layers with block-diagonal fully connected layers and proposes two methods for doing so. 
It also makes some connections to random matrix theory.\n\nThe parameter pruning angle in this paper is fairly weak. The networks it is demonstrated on are not particularly large (largeness usually being the motivation for pruning) and the need for making them smaller is not well motivated. Additionally, MNIST is a uniquely bad dataset for evaluating pruning methods, since they tend to work uncharacteristically well on MNIST (This can be seen in some of the references the paper cites).\n\nThe random matrix theory part of this paper is intriguing, but left me wondering \"and then what?\" It is presented as a collection of observations with no synthesis or context for why they are important. I'm usually quite happy to see connections being made to other fields, but it is not clear at all how this particular connection is more than a curiosity. This paper would be much stronger if it offered some way to exploit this connection.\n\nThere are two half-papers here, one on parameter pruning and one on applying insights from random matrix theory to neural networks, but I don't see a strong connection between them. Moreover, they are both missing their other half where the technique or insight they propose is exploited to achieve something. \n", "Thank you for your thorough review. \n\nAt another reviewer’s suggestion, we have chosen to split the random matrix theory and the block diagonal inner product layer work into two separate papers. We have decided to submit only the block diagonal inner product layer work for this review. \n\nRe: “Some of the figures are hard to read (in particular Fig 1 & 2 left) and would benefit from a better layout.”\n\nWe improved these figures by moving the labels indicating the values when the layer is fully connected (i.e. blocks=1).\n\nRe: It would be valuable to see experiments on bigger datasets than only MNIST and CIFAR-10. \n\nWe plan to run experiments on the ImageNet dataset using AlexNet to demonstrate our work in a ‘large’ setting. However, we are currently waiting on a larger memory allocation on the Bridges supercomputer to handle this task. This allocation is granted, but will take up to 5 days to be active. We will update the paper with these results when we have them.\n\nRe: I understand that the main advantage of this method is the speedup; however, providing the final accuracy as a function of the nonzero entries for slower methods (e.g. the sparse pruning showed in Fig 1. a) would provide a more complete picture.\n\nWe offered two data points here for each dataset: the accuracy when pruning that yields comparable speed and the accuracy for comparable pruning. Comparable pruning (for random entries) is consistently more accurate, but can have 8x slower execution time.\n\nRe: Could you briefly comment on the training time in section 4.1? \n\nWe added an additional figure that may be helpful here. Figure 1 shows how block diagonal inner product layers scale for large weight matrices. We discuss Figure 1 in Section 4: Experiments, Paragraph 2.\n\nRe: Could you elaborate on the last sentence of section 4.1?\n\nHere we just wanted to note that an error rate of 4.37% is impressive considering the serious constraint on the flow of information when the first layer is unable to get a full picture.\n\nRe: “I believe the term \"fully connected\" is more standard than \"inner product\" and would add clarity to the paper, but I may be mistaken.”\n\nThe term fully connected is more standard, but this name is descriptive.
A layer is fully connected if every node in that layer has a weight for every node in the previous layer; since this is not the case for our block inner product layer, we thought using the term “fully connected” would be misleading. We do offer this term in the beginning of the paper for context.\n\nWe have uploaded a revised (working) version of the paper focusing on the block diagonal inner product layer results.\n\nWe list the major changes here:\n\nSection 2: Related Work, Paragraph 3\nHere we focus on the related work we deem most similar to our own. We discuss Group Lasso, Group Convolution and Sindhwani et al.’s (2015) work with Toeplitz-like transforms as successful options to reduce network size in a structured manner. \n\nSection 4: Experiments, Paragraph 2\nWe discuss Figure 1. This new figure shows how block diagonal inner product layers scale for large weight matrices.\n\nSection 4: Experiments, Paragraph 3\nHere we examine methods to improve the flow of information in sparse architecture and compare them to block diagonal inner product layer method 2 with pruning. These include fixed sub-block shuffling inspired by channel shuffling in Zhang et al. (2017) and random block shuffling. \n\n4.2 CIFAR10, Paragraph 6\nHere we show that block diagonal inner product layer method 2 with pruning shows improved results over fixed sub-block shuffling inspired by channel shuffling in Zhang et al. (2017) and random block shuffling, which are other attempts to improve the flow of information in sparse architecture as discussed in Section 4: Experiments, Paragraph 3. ", "Thank you for your thorough review. \n\nAt another reviewer’s suggestion, we have chosen to split the random matrix theory and the block diagonal inner product layer work into two separate papers. We have decided to submit only the block diagonal inner product layer work for this review. \n\nWe plan to run experiments on the ImageNet dataset using AlexNet to demonstrate our work in a ‘large’ setting. However, we are currently waiting on a larger memory allocation on the Bridges supercomputer to handle this task. This allocation is granted, but will take up to 5 days to be active. We will update the paper with these results when we have them.\n\nRe: “[B]lock diagonal layers lead to more efficient networks. This point is murkier, because the paper doesn't discuss possible increases in *training time*.”\n\nAll MNIST Lenet-5 experiments were run over 10000 iterations and all Cifar10 experiments were run over 9000 iterations. When implementing block diagonal inner product layers using method 1 without pruning, we discussed the speedup of the weight matrix products for various matrix sizes. When implementing block diagonal inner product layers using method 2 with pruning, the speedup depends on the number of iterations it takes to fully prune to achieve the block structure. The pruning process itself only adds O(n/b) work to a layer with n weight parameters and b blocks in one iteration.
We left the number of pruning iterations open as a hyperparameter.\n\nRe: “I had always thought pruning was done to decrease over-fitting, not to [decrease] computation speed, so this was a surprise to me; also note that the sparse matrix format can increase runtime if implemented as a sparse object, as demonstrated in this paper, but one could always pretend it is sparse, so you never ought to be slower with a sparse matrix”\n\nMy understanding is that there are two primary reasons for pruning: to reduce overfitting and to reduce memory requirements. Reducing memory requirements makes storing networks on mobile devices more feasible, for example. An unfortunate side effect is that sparse formats can greatly slow down computation speed.\n\nRe: “The authors seem to restrict themselves to convolutional networks in the first paragraph (and experiments) but don't discuss the implications or reasons of this assumption.”\n\nThe focus on convolutional neural networks is simply because this is where the interest is. CNNs are more powerful and successful, so to convince readers I have focused on this kind of network. In 4.1 MNIST, Paragraph 4, I do touch on networks with only fully connected layers.\n\nRe: “I was also wondering about when 2 or more layers are block sparse, do these blocks overlap? i.e., are they randomly permuted between layers so that the blocks mix? And even with a single block, does it matter what permutation you use? (or perhaps does it not matter due to the convolutional structure?)”\n\nOne can implement consecutive block layers. In the newest version of our paper we discuss a few ways to address the flow of information when consecutive layers are block. A general discussion can be seen in Section 4: Experiments, Paragraph 3. Comparisons are discussed in the MNIST and Cifar10 experiment sections.\n\nRe: “Minor comment: last paragraph of 4.1, comparing with Sindhwani et al., was confusing to me. Why was this mentioned? And it doesn't seem to be comparable. I have no idea what \"Toeplitz (3)\" is.”\n\nWe mention this because we deemed this work similar to our own and in need of comparison. We first mentioned Sindhwani et al. in Section 2: Related Work, Paragraph 3.\n\nWe have uploaded a revised (working) version of the paper focusing on the block diagonal inner product layer results.\n\nWe list the major changes here:\n\nSection 2: Related Work, Paragraph 3\nHere we focus on the related work we deem most similar to our own. We discuss Group Lasso, Group Convolution and Sindhwani et al.’s (2015) work with Toeplitz-like transforms as successful options to reduce network size in a structured manner. \n\nSection 4: Experiments, Paragraph 2\nWe discuss Figure 1. This new figure shows how block diagonal inner product layers scale for large weight matrices.\n\nSection 4: Experiments, Paragraph 3\nHere we examine methods to improve the flow of information in sparse architecture and compare them to block diagonal inner product layer method 2 with pruning. These include fixed sub-block shuffling inspired by channel shuffling in Zhang et al. (2017) and random block shuffling. \n\n4.2 CIFAR10, Paragraph 6\nHere we show that block diagonal inner product layer method 2 with pruning shows improved results over fixed sub-block shuffling inspired by channel shuffling in Zhang et al. (2017) and random block shuffling, which are other attempts to improve the flow of information in sparse architecture as discussed in Section 4: Experiments, Paragraph 3.", "Thank you for your thorough review.
\n\nWe agree with your comment that the random matrix theory and the block diagonal inner product layer work should be split into two separate papers. We have split the paper along that line and decided to submit only the block diagonal inner product layer work for this review. \n\nRe: “The networks it is demonstrated on are not particularly large (largeness usually being the motivation for pruning) and the need for making them smaller is not well motivated.”\n\nWe plan to run experiments on the ImageNet dataset using AlexNet to demonstrate our work in a ‘large’ setting. However, we are currently waiting on a larger memory allocation on the Bridges supercomputer to handle this task. This allocation is granted, but will take up to 5 days to be active. We will update the paper with these results when we have them.\n\nRe: “The parameter pruning angle in this paper is fairly weak.”\n\nMethod 2 with pruning did perform better in our experiments on Cifar10. We also added some comparison to other methods that address limited information flow in sparse architecture. When Bridges allows us to run experiments on Imagenet we will be sure to focus on the differences between method 1 and method 2.\n\nRe: “[This is] missing [the] other half where the technique or insight they propose is exploited to achieve something.”\n\nWe believe that using block diagonal layers in place of fully connected layers achieves a lot. With a block diagonal implementation, larger architectures are possible on hardware with memory constraints. We have condensed the storage requirements of a network without sacrificing execution time.\n\nWe have uploaded a revised (working) version of the paper focusing on the block diagonal inner product layer results.\n\nWe list the major changes here:\n\nSection 2: Related Work, Paragraph 3\nHere we focus on the related work we deem most similar to our own. We discuss Group Lasso, Group Convolution and Sindhwani et al.’s (2015) work with Toeplitz-like transforms as successful options to reduce network size in a structured manner. \n\nSection 4: Experiments, Paragraph 2\nWe discuss Figure 1. This new figure shows how block diagonal inner product layers scale for large weight matrices.\n\nSection 4: Experiments, Paragraph 3\nHere we examine methods to improve the flow of information in sparse architecture and compare them to block diagonal inner product layer method 2 with pruning. These include fixed sub-block shuffling inspired by channel shuffling in Zhang et al. (2017) and random block shuffling. \n\n4.2 CIFAR10, Paragraph 6\nHere we show that block diagonal inner product layer method 2 with pruning shows improved results over fixed sub-block shuffling inspired by channel shuffling in Zhang et al. (2017) and random block shuffling, which are other attempts to improve the flow of information in sparse architecture as discussed in Section 4: Experiments, Paragraph 3. ", "We greatly appreciate your comment.\n\nWe agree that group convolution should be mentioned in the final version and we are happy to make this change. It is an important, related idea that supports our work. However, our understanding of group convolution is that the number of nonzero weights does not change, but rather it is the number of connections that changes. A particular set of filter weights does not see the output of every channel.\n\nDecoupling channels in images is natural because channels often carry very similar information.
When converting a fully connected layer to a block diagonal inner product layer, blocks may not even see a whole channel, as is the case with 100 ip1 blocks on the MNIST dataset using the lenet-5 framework with 50 filters in conv2.", "The block diagonal inner product layer is rather similar to the group convolution in recent CNN architectures like ShuffleNet/Xception/MobileNet/ResNeXt/Inception... . In my understanding, a group convolution just turns the parameter matrix of each filter from a dense matrix into a block diagonal one. They share similar advantages, like the speedup and memory savings you mentioned in your paper. I think it would be better to include a discussion of these works in your paper." ]
[ -1, 5, 6, 4, -1, -1, -1, -1, -1 ]
[ -1, 4, 4, 3, -1, -1, -1, -1, -1 ]
[ "HJabtddzG", "iclr_2018_HyI5ro0pW", "iclr_2018_HyI5ro0pW", "iclr_2018_HyI5ro0pW", "Bk9pmiuxM", "S1IUpXKgG", "H1hqfaKeM", "r15TXzn1z", "iclr_2018_HyI5ro0pW" ]
iclr_2018_SJ1fQYlCZ
Training with Growing Sets: A Simple Alternative to Curriculum Learning and Self Paced Learning
Curriculum learning and Self paced learning are popular topics in machine learning that suggest putting the training samples in order by considering their difficulty levels. Studies in these topics show that starting with a small training set and adding new samples according to difficulty levels improves the learning performance. In this paper we show experimentally that we can also obtain good results by adding the samples randomly, without a meaningful order. We compared our method with classical training, Curriculum learning, Self paced learning and their reverse ordered versions. Results of the statistical tests show that the proposed method is better than the classical method and similar to the others. These results point to a new training regime that removes the process of difficulty level determination in Curriculum and Self paced learning and is as successful as these methods.
rejected-papers
The authors give evidence that in certain cases, the ordering of sample inclusion in a curriculum is not important. However, the reviewers believe the experiments are inconclusive, both in the sense that, as reported, they do not demonstrate the authors' hypothesis, and that they may leave out many relevant factors of variation (such as hyper-parameter tuning).
test
[ "BJL_WOKgz", "Hy4VEO9gM", "rkh23GClz", "r1UuG5AWz", "Sy-ml5RbG", "BJCdg50bG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "This paper addresses an interesting problem of curriculum/self-paced versus random order of samples for faster learning. Specifically, the authors argue that adding samples in random order is as beneficial as adding them with some curriculum strategy, i.e. from easiest to hardest, or reverse. \nThe main learning strategy considered in this work is learning with growing sets, i.e. at each next stage a new portion of samples is added to the current available training set. At the last stage, all training samples are considered. The classifier is re-learned on each stage, where optimized weights in the previous stage are given as initial weights in the next stage. \n\nThe work has several flaws. \n-First of all, it is not surprising that learning with more training samples at each next stage (growing sets) gets better - this is the basic principle of learning. The question is how fast the current classifier converges to the optimal Bayes level when using Curriculum strategy versus Random strategy. The empirical evaluations do not show evidence/disprove regarding this matter. For example, it could happen that the classifier converges to the optimal on the first stage already, so there is no difference when training in random versus curriculum order with growing sets. \n-Secondly, easyness/hardness of the samples are defined w.r.t. some pre-trained (external) ensemble method. It is not clear how this definition of easiness/hardness translates when training the 3-layer neural network (final classifier). For example, it could well happen that all the samples are equally easy for training the final classifier, so the curriculum order would be the same as random order. In the original work on self-paced learning, Kumar et al (2010), easiness of the samples is re-computed on each stage of the classifier learning. \n-The empirical evaluations are not clear. Just showing the wins across datasets without actual performance is not convincing (Table 2). \n-I wonder whether the section with theoretical explanation is needed. What is the main advantage of learning with growing sets (when re-training the classifier) and (traditional) learning when using the whole training dataset (last stage, in this work)? \n\n", "The paper proposes to study the influence of ordering in the Curriculum and Self paced learning. The paper is mainly based on empirical justification and observation. The results on 36 data sets show that to some extent the ordering of the training instances in the Curriculum and Self paced learning is not important. The paper involves some interesting ideas and experimental results. I still have some comments.\n\n1.\tThe empirical results show that different orderings still have different impact for data sets. How to adaptively select an appropriate ordering for given data set?\n2.\tThe empirical results show that some ordering has negative impact. How to avoid the negative impact? This question is not answered in the paper.\n3.\tThe ROGS is still clearly inferior to SPLI. It seems that such an observation does not strongly support the claim that ‘random is good enough’. \n", "Summary: \nThe paper proposes an algorithm to do incremental learning, by successively growing the training set in phases. However as opposed to training using curriculum learning or self paced learning, the authors propose to simply add training samples without any order to their \"complexity\". 
The authors claim that their approach, which is called ROGS, is better than the classical method and comparable to curriculum/self paced learning. The experiments are conducted on the UCI datasets with mixed results. \n\nReview: \nMy overall assessment of the paper is that it is extremely weak, both in terms of the novelty of the method proposed, its impact, and the results of the experiments. Successively increasing the training set size in an arbitrary order is the first thing that one would try when learning incrementally. Furthermore, the paper does not clearly explain what it means for a method to \"win\" or \"lose\". Is some training algorithm A a winner over some training algorithm B, if A reaches the same accuracy as B in a smaller number of epochs? In such a case, how do we decide on what accuracy is the upper bound? Also, do we tune the hyper-parameters of the model along the way? There are so many variables to account for here, which the paper completely ignores. \n\nFurthermore, even under the limited set of experiments the authors conducted, the results are highly inconclusive. While the authors test their proposed methodology on 36 UCI datasets, there is no clear indication whether the proposed approach has any superiority over the previously proposed ones, such as CL and SPLI. \n\nGiven the above weaknesses of the paper, I think the impact of this research is extremely marginal. \n\nThe paper is generally well written and easy to understand. There are some minor issues though. For example, I think Assumption (a) is quite strong and may not necessarily hold in many cases. ", "Thank you for your review, we have some comments:\n\n-Firstly, our paper does not include a speed test of the methods to find the faster one. We compared the error rates of the methods and obtained better results than standard incremental training in many cases. We looked for the reason why CL, SPL and their reverse ordered variants have better performance and found that their common property is training with growing sets. So we grew the sets without requiring a difficulty level determination process and also obtained better results. \n\n-In CL, we ordered the training set according to the prediction confidence of the ensemble (all samples can't have the same confidence degree) and divided the ordered training set into n (=25) parts. The first part includes 1/n of the training set. In ROGS, the randomly ordered training set is divided into n parts and 1/n is taken in the first stage. So it is not very likely to have the same set on the first stage of CL and ROGS. When we are growing the sets in both methods we logically get lower error rates in the following stages. We could consider adding to our paper how the test set error changes during the stages. Additionally, we implemented the original work of SPL (Kumar et al., 2010), determined the difficulty levels at the end of each stage and took the samples from this ordering.\n\n-We thought a table with actual performances would not be easy to read, but we are considering giving the table of errors in the Appendix. \n\n-Finally, we point to Section 3 of our paper for the theoretical perspective. We explain there that training with small sets in the previous stages provides a better starting point for the bigger ones.\n", "Thank you for your review, we want to explain some points:\n\nWhen learning incrementally in the standard method, we give the whole training set sample-by-sample and it continues learning the same samples in the following epochs until convergence.
When we use growing sets, first we give only one part of the training set sample-by-sample, find the minimum of this part, then continue with the first and second parts of the training set in the second stage. We conclude that the minimum of the previous part is a better starting point for the next part.\n\nAs we explain in our paper, we get 20 error rates (MSE) with 4x5 fold cross validation for each data set in all compared methods and performed a 0.95 significance level paired T-test. If one method has statistically significantly better results according to the T-test it wins; if it has worse results it loses.\n\nWhen we are working with 36 datasets it is difficult to find the best hyper-parameters of the neural network for each dataset. We use the same model for all datasets and show that the ROGS method works in many cases whether the model is proper for the data or not.\n\nWe have seen that reverse order versions of CL and SPL are good in related works and thought that it may not be necessary to order the samples. The superiority is obtaining comparable results without ordering, and thus we can drop the complexity (easiness/hardness) determination process.\n\nAbout Assumption (a): the average of simple functions may be more complex or the average of complex functions may be simpler. It is difficult to say which of these is more likely without making a presupposition about the functions. Assumption (a) indicates the condition when most of the function derivatives (ℓi’(θB)) are smaller than 0 at θB. This means the probability of being less than 0 is higher for the sum of the derivatives of a randomly selected part. Even if Assumption (a) is not true, there is still a possibility of being less than 0 for the sum of the derivatives of a randomly chosen part. So, it is possible to overcome the local minimum at θB with a random selection. The assumption indicates a situation where this possibility is higher. If the sum of the derivatives of a randomly selected part is greater than 0, this selection may lead the optimization in the wrong direction. But subsequent steps can still orient the optimization in the correct direction, as we have explained in Theorem 2. \n", "Thank you, we have some comments about your reviews.\n\n1. There are many studies about finding the optimum curriculum for specific datasets. Here we propose a general idea that obtains good results in many cases with growing sets. Finding rules about 'when to use which ordering' is a different, exciting research area. As a result of our research we have seen that our method and the reverse order versions of CL and SPL can obtain better results specifically in multiclass (more than 3 class) datasets.\n\n2. According to our results, we see a negative impact on datasets which have 3 or fewer classes and high error rates. Noisy inputs of these datasets may be chosen early, and the minimum of these data is not a proper starting point for the rest of the data. \n\n3. The ROGS method removes the problem of difficulty level determination and obtains good results in many cases. Maybe each dataset has a proper ordering, but there is also a common point of all successful methods, and this comes from growing the training sets stage-by-stage and starting each stage from the end point of the previous stage.\n" ]
[ 4, 6, 4, -1, -1, -1 ]
[ 4, 3, 4, -1, -1, -1 ]
[ "iclr_2018_SJ1fQYlCZ", "iclr_2018_SJ1fQYlCZ", "iclr_2018_SJ1fQYlCZ", "BJL_WOKgz", "rkh23GClz", "Hy4VEO9gM" ]
iclr_2018_rJaE2alRW
Autoregressive Convolutional Neural Networks for Asynchronous Time Series
We propose Significance-Offset Convolutional Neural Network, a deep convolutional network architecture for regression of multivariate asynchronous time series. The model is inspired by standard autoregressive (AR) models and gating mechanisms used in recurrent neural networks. It involves an AR-like weighting system, where the final predictor is obtained as a weighted sum of adjusted regressors, while the weights are data-dependent functions learnt through a convolutional network. The architecture was designed for applications on asynchronous time series and is evaluated on such datasets: a hedge fund proprietary dataset of over 2 million quotes for a credit derivative index, an artificially generated noisy autoregressive series and a household electricity consumption dataset. The proposed architecture achieves promising results as compared to convolutional and recurrent neural networks. The code for the numerical experiments and the architecture implementation will be shared online to make the research reproducible.
rejected-papers
The reviewers feel that the novelties in the model are not significant. Furthermore, they suggest that empirical results could be improved by (1) analyses showing how the significance network functions and directly measuring its impact; (2) more reproducible experiments (in particular, this is really an applications paper, and the experiments on the main application are not reproducible because the data is proprietary); and (3) baselines that make assumptions more in line with the authors' problem setup.
test
[ "Hkl6sRTyf", "H1JQgkAJG", "BkaINb9xz", "r1UnLEpmz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author" ]
[ "To begin with, the authors seem to be missing some recent developments in the field of deep learning which are closely related to the proposed approach; e.g.:\n\nSotirios P. Chatzis, “Recurrent Latent Variable Conditional Heteroscedasticity,” Proc. 42nd IEEE International Conference on Acoustics, Speech and Signal Processing (IEEE ICASSP), pp. 2711-2715, March 2017.\n\nIn addition, the authors claim that Gaussian process-based models are not appropriate for handling asynchronous data, since the assumed Gaussianity is inappropriate for financial datasets, which often follow fat-tailed distributions. However, they seem to be unaware of several developments in the field, where mixtures of Gaussian processes are postulated, so as to allow for capturing long tails in the data distribution; for instance:\n\nEmmanouil A. Platanios and Sotirios P. Chatzis, “Gaussian Process-Mixture Conditional Heteroscedasticity,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 36, no. 5, pp. 888-900, May 2014.\n\nHence, the provided experimental comparisons are essentially performed against non-rivals of the method. It is more than easy to understand that a method not designed for modeling observations with the specific characteristics of financial data should definitely perform worse than a method designed to cope with such artifacts. That is why the sole purpose of a (convincing) experimental evaluation regime should be to compare between methods that are designed with the same data properties in mind. The paper does not satisfy this requirement.\n\nTurning to the method itself, the derivations are clear and straightforward; the method could have been motivated a somewhat better, though.", "The author proposed:\n1. A data augmentation technique for asynchronous time series data.\n2. A convolutional 'Significance' weighting neural network that assigns normalised weights to the outputs of a fully-connected autoregressive 'Offset' neural network, such that the output is a weighted average of the 'Offset' neural net.\n3. An 'auxiliary' loss function.\n\nThe experiments showed that:\n1. The proposed method beat VAR/CNN/ResNet/LSTM 2 synthetic asynchronous data sets, 1 real electricity meter data set and 1 real financial bid/ask data set. It's not immediately clear how hyper-parameters for the benchmark models were chosen.\n2. The author observed from the experiments that the depth of the offset network has negligible effect, and concluded that the 'Significance' network has crucial impact. (I don't see how this conclusion can be made.)\n3. The proposed auxiliary loss is not useful.\n4. The proposed architecture is more robust to noise in the synthetic data set compared to the benchmarks, and together with LSTM, are least prone to overfitting.\n\nPros\n- Proposed a useful way of augmenting asynchronous multivariate time series for fitting autoregressive models\n- The convolutional Significance/weighting networks appears to reduce test errors (not entirely clear)\n\nCons\n- The novelties aren't very well-justified. The 'Significance' network was described as critical to the performance, but there is no experimental result to show the sensitivity of the model's performance with respect to the architecture of the 'Significance' network. At the very least, I'd like to see what happens if the weighting was forced to be uniform while keeping the 'Offset' network and loss unchanged.\n- It's entirely unclear how the train and test data was split. 
This may be quite important in the case of the financial data set.\n- It's also unclear if model training was done on a rolling basis, which is common for time series forecasting.\n- The auxiliary loss function does not appear to be very helpful, but was described as a key component in the paper.\n\nQuality: The quality of the paper was okay. More details of the experiments should be included in the main text to help interpret the significance of the experimental results. The experiment also did not really probe the significance of the 'Significance' network even though it's claimed to be important.\nClarity: Above average. \nOriginality: Mediocre. Nothing really shines. Weighted average-type architecture has been proposed many times in neural networks (e.g., attention mechanisms). \nSignificance: Low. It's unclear how useful the architecture really is.", "The authors propose an extension to CNN using an autoregressive weighting for asynchronous time series applications. The method is applied to a proprietary dataset as well as a couple UCI problems and a synthetic dataset, showing improved performance over baselines in the asynchronous setting.\n\nThis paper is mostly an applications paper. The method itself seems like a fairly simple extension for a particular application, although perhaps the authors have not clearly highlighted details of methodological innovation. I liked that the method was motivated to solve a real problem, and that it does seem to do so well compared to reasonable baselines. However, as an an applications paper, the bread of experiments are a little bit lacking -- with only that one potentially interesting dataset, which happens to proprietary. Given the fairly empirical nature of the paper in general, it feels like a strong argument should be made, which includes experiments, that this work will be generally significant and impactful. \n\nThe writing of the paper is a bit loose with comments like:\n“Besides these and claims of secretive hedge funds (it can be marketing surfing on the deep learning hype), no promising results or innovative architectures were publicly published so far, to the best of our knowledge.”\n\nParts of the also appear rush written, with some sentences half finished:\n“\"ues of x might be heterogenous, hence On the other hand, significance network provides data-dependent weights for all regressors and sums them up in autoregressive manner.””\n\nAs a minor comment, the statement\n“however, due to assumed Gaussianity they are inappropriate for financial datasets, which often follow fat-tailed distributions (Cont, 2001).”\nIs a bit too broad. It depends where the Gaussianity appears. If the likelihood is non-Gaussian, then it often doesn’t matter if there are latent Gaussian variables.\n", "First of all, we would like to thank all reviewers for their insightful comments.\n\nAs the ratings quite consistently put the paper below the acceptance threshold, the authors decided not to modify the submission, but instead to continue the work and possibly re-submit the paper in the future. We agree that the obtained results lack comparison with other significant models as well as the proposed model without certain components (e.g. significance network). New experiments will be carried out to reinforce the results." ]
[ 4, 5, 5, -1 ]
[ 5, 3, 4, -1 ]
[ "iclr_2018_rJaE2alRW", "iclr_2018_rJaE2alRW", "iclr_2018_rJaE2alRW", "iclr_2018_rJaE2alRW" ]
iclr_2018_B1KFAGWAZ
Revisiting The Master-Slave Architecture In Multi-Agent Deep Reinforcement Learning
Many tasks in artificial intelligence require the collaboration of multiple agents. We examine deep reinforcement learning for multi-agent domains. Recent research efforts often take the form of two seemingly conflicting perspectives: the decentralized perspective, where each agent is supposed to have its own controller; and the centralized perspective, where one assumes there is a larger model controlling all agents. In this regard, we revisit the idea of the master-slave architecture by incorporating both perspectives within one framework. Such a hierarchical structure naturally leverages the advantages of both. The idea of combining both perspectives is intuitive and can be well motivated from many real world systems; however, out of a variety of possible realizations, we highlight three key ingredients, i.e. composed action representation, learnable communication and independent reasoning. With network designs to facilitate these explicitly, our proposal consistently outperforms the latest competing methods both in synthetic experiments and when applied to challenging StarCraft micromanagement tasks.
rejected-papers
The authors present a centralized neural controller for multi-agent reinforcement learning. The reviewers are not convinced that there is sufficient novelty, considering the authors' setup as essentially a special case of other recent works, with added adjustments to the neural networks that are standard in the literature. I personally am more bullish about this paper than the reviewers, as I think engineering an architecture to perform well in interesting scenarios is worth reporting. However, the reviewers are mostly in agreement, and their reviews were neither sloppy nor factually incorrect. So I will recommend rejection, following their judgement. Nevertheless, I encourage the authors to continue strengthening the results and the presentation and resubmit.
train
[ "BkbZ3Ip7f", "ByhCSC3EM", "S141qTNVG", "HyuecsNxf", "HynlSGDxM", "B1qTwXcgz", "H1aoqUTmz", "S1vBOLaQG", "BkSjwU6Xz" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "We summarize the updates made to the paper as follow:\n\n1. Add one more StarCraft micromanagement task (both task description and new evaluation results etc.) \"Dragoons vs. Zealots\" where heterogeneous agents are involved.\n\n2. A new experiment is included in section 4.4 to compare models of different centralization level (both the curves in Figure 6 (c) and the related analysis).\n\n3. Add a visualization of the master agent's hidden state to demonstrate the effectiveness of occupancy map and GCM module in our proposal (and the related analysis).\n\n4. Update table 1 and table 2 with standard errors, follow the results of GMEZO reported in the original paper.\n\n5. Fix some typos and details according to reviewers' suggestions.\n\n6. Statement about CommNet has been revised in Section 2.\n\n7. Results of a new baseline \"CommNet + Occupancy Map\" are provided in Table 1.\n\n8. Softmax/Gaussian action layers have been added to Figure 4.", "Thanks for the updates. \n\nWe have updated the argument about differences from CommNet as follows, emphasizing the independently and recurrently reasoning property of the master agent. \n\n\"This design represents an initial version of the proposed master-slave framework, however it does not facilitate an independently reasoning master agent which takes in messages from all agents step by step and processes such information in an recurrent manner.\"\n\n\nTaking the advice regarding the master's global information, we add another baseline \"CommNet + OccupancyMap\" which takes its original state as well as the occupancy map explicitly. The results show that CommNet performs better with this additional global information. However, with an explicit global planner (the \"master\" agent) designed in our model, such information seems better utilized to facilitate learning of more powerful collaborative polices. More details please refer to Table 1 and the comments in the updated paper. Regardless, note that, all information revealed from the extra occupancy map is by definition already included in all agents' state. To be specific, the occupancy map only includes the positions and IDs of our units without any of the enemies. And it is mentioned in the original CommNet paper* section 4.3.1 and 4.3.3 that the absolute location of each agent is already included in its input state features. Thus the occupancy map adds no extra information to the total information of all agents, but organizes them in an explicit global manner.\n\nSoftmax/Gaussian action layers are added to Figure 4.\n\nWe sincerely appreciate the constructive suggestions. \n\n\n*Sainbayar Sukhbaatar, Rob Fergus, et al. Learning multiagent communication with backpropagation. In Advances in Neural Information Processing Systems, pp. 2244–2252, 2016.", "I am not sure if this really clarifies much.\n\n\"CommNet pass back the sum of all agents’ hidden state directly to each agent at the next time step (this direct pass is what we meant by handcrafted information); whereas our applied an LSTM to process such information from time to time and therefore formulates an independently thinking master agent\"\n\n-I am not sure if I understand correctly, but I think this says \"we use LSTMs rather than multilayer feedforward networks for the components f_i\", is that correct? (Fig. 1 clearly shows that you do communicate at every timestep, I am not sure why this is less handcrafted)\n\n\"we applied a GCM\"\n-Yes, I did recognize the GCM as a novel feature in my review. 
Perhaps its merit could be investigated by comparing \"with CommNet with 1 extra agent that takes in the same information as your 'master'.\" ?\n\nI don't understand why the Gaussian/softmax layers are not clarified in the updated paper. \"on the top layer\" is just not so clear.\n\nThe standard errors look good, as such the evaluation does imply that there are merits to the overall approach. However, I still think that in the current form, it is not quite clear exactly what explains this difference in performance.\n\n\n\n", "This paper investigates multiagent reinforcement learning making used of a \"master slave\" architecture (MSA). On the positive side, the paper is mostly well-written, seems technically correct, and there are some results that indicate that the MSA is working quite well on relatively complex tasks. On the negative side, there seems to be relatively limited novelty: we can think of MSA as one particular communication (i.e, star) configuration one could use is a multiagent system. One aspect does does strike me as novel is the \"gated composition module\", which allows differentiation of messages to other agents based on the receivers internal state. (So, the *interpretation* of the message is learned). I like this idea, however, the results are mixed, and the explanation given is plausible, but far from a clearly demonstrated answer.\n\nThere are some important issues that need clarification:\n\n* \"Sukhbaatar et al. (2016) proposed the “CommNet”, where broadcasting communication channel among all agents were set up to share a global information which is the summation of all individual agents. [...] however the summed global signal is hand crafted information and does not facilitate an independently reasoning master agent.\"\n-Please explain what is meant here by 'hand crafted information', my understanding is that the f^i in figure 1 of that paper are learned modules?\n-Please explain what would be the differences with CommNet with 1 extra agent that takes in the same information as your 'master'.\n\n\n*This relates also to this: \n\n\"Later we empirically verify that, even when the overall in-\nformation revealed does not increase per se, an independent master agent tend to absorb the same\ninformation within a big picture and effectively helps to make decision in a global manner. Therefore\ncompared with pure in-between-agent communications, MS-MARL is more efficient in reasoning\nand planning once trained. [...] \nSpecifically, we compare the performance among the CommNet model, our\nMS-MARL model without explicit master state (e.g. the occupancy map of controlled agents in this\ncase), and our full model with an explicit occupancy map as a state to the master agent. As shown in\nFigure 7 (a)(b), by only allowed an independently thinking master agent and communication among\nagents, our model already outperforms the plain CommNet model which only supports broadcast-\ning communication of the sum of the signals.\"\n\n-Minor: I think that the statement \"which only supports broadcast-ing communication of the sum of the signals\" is not quite fair: surely they have used a 1-channel communication structure, but it would be easy to generalize that.\n\n-Major: When I look at figure 4D, I see that the proposed approach *also* only provides the master with the sum (or really mean) with of the individual messages...? So it is not quite clear to me what explains the difference.\n\n\n*In 4.4, it is not quite clear exactly how the figure of master and slave actions is created. 
This seems to suggest that the only thing that the master can communicate is action information? It this the case?\n\n* In table 2, it is not clear how significant these differences are. What are the standard errors?\n\n* The section 3.2 explains standard things (policy gradient), but the details are a bit unclear. In particular, I do not see how the Gaussian/softmax layers are integrated; they do not seem to appear in figure 4?\n\n* I cannot understand figure 7 without more explanation. (The background is all black - did something go wrong with the pdf?)\n\n\n\n\nDetails:\n* references are wrongly formatted throughout. \n\n* \"In this regard, we are among the first to combine both the centralized perspective and the decentralized perspective\"\nThis is a weak statement (E.g., I suppose that in the greater scheme of things all of us will be amongst the first people that have walked this earth...)\n\n\n* \"Therefore they tend to work more like a commentator analyzing and criticizing the play, rather than\na coach coaching the game.\"\n-This sounds somewhat vague. Can it be made crisper?\n\n* \"Note here that, although we explicitly input an occupancy map to the master agent, the actual infor-\nmation of the whole system remains the same.\"\nThis is a somewhat peculiar statement. Clearly, the distribution of information over the agents is crucial. For more insights on this one could refer to the literature on decentralized POMDPs.\n\n\n\n\n", "The paper proposes a neural network architecture for centralized and decentralized settings in multi-agent reinforcement learning (MARL) which is trainable with policy gradients.\nAuthors experiment with the proposed architecture on a set of synthetic toy tasks and a few Starcraft combat levels, where they find their approach to perform better than baselines.\n\nOverall, I had a very confusing feeling when reading the paper.
First, authors do not formulate what exactly is the problem statement for MARL. Is it an MDP or poMDP? How do different agents perceive their time, is it synchronized or not? Do they (partially) share the incentive or may have completely arbitrary rewards?\nWhat is exactly the communication protocol?\n\nI find this question especially important for MARL, because the assumption on synchronous and noise-free communication, including gradients is too strong to be useful in many practical tasks.\n\nSecond, even though the proposed architecture proved to perform empirically better that the considered baselines, the extent to which it advances RL research is unclear to me.\nCurrently, it looks \n\nBased on that, I can’t recommend acceptance of the paper.\n\nTo make the paper stronger and justify importance of the proposed architecture, I suggest authors to consider relaxing assumptions on the communication protocol to allow delayed and/or noisy communication (including gradients).\nIt would be also interesting to see if the network somehow learns an implicit global state representation used for planning and how is the developed plan changed when new information from one of the slave agents arrives.", "The paper presents results across a range of cooperative multi-agent tasks, including a simple traffic simulation and StarCraft micro-management. The architecture used is a fully centralized actor (Master) which observes the central state in combination with agents that receive local observation, MS-MARL. \nA gating mechanism is used in order to produce the contribution from the hidden state of the master to the logits of each agent. This contribution is added to the logits coming from each agent. \n\nPros: \n-The results on StarCraft are encouraging and present state of the art performance if reproducible.\n\nCons:\n-The experimental evaluation is not very thorough:\nNo uncertainty of the mean is stated for any of the results. 100 evaluation runs is very low. It is furthermore not clear whether training was carried out on multiple seeds or whether these are individual runs. \n\n-BiCNet and CommNet are both aiming to learn communication protocols which allow decentralized execution. Thus they represent weak baselines for a fully centralized method such as MS-MARL. \nThe only fully centralized baseline in the paper is GMEZO, however results stated are much lower than what is reported in the original paper (eg. 63% vs 79% for M15v16). The paper is missing further centralized baselines. \n\n-It is unclear to what extends the novelty of the paper (specific architecture choices) are required. For example, the gating mechanism for producing the action logits is rather complex and seems to only help in a subset of settings (if at all).\n\nDetailed comments:\n\"For all tasks, the number of batch per training epoch is set to 100.\"\nWhat does this mean?\n\nFigure 1: \nThis figure is very helpful, however the colour for M->S is wrong in the legend. \n\nTable 2:\nGMEZO win rates are low compared to the original publication. \nWhat many independent seeds where used for training? What are the confidence intervals? How many runs for evaluation? \n\n\nFigure 4:\nB) What does it mean to feed two vectors into a Tanh? This figure currently very unclear. What was the rational for choosing a vanilla RNN for the slave modules?\n\nFigure 5:\na) What was the rational for stopping training of CommNet after 100 epochs? The plot looks like CommNet is still improving. \nc) This plot is disconcerting. 
Training in this plot is very unstable. The final performance of the method ('ours') does not match what is stated in 'Table 2'. I wonder if this is due to the very small batch size used (\"a small batch size of 4 \").\n\n\n", "There are two key differences from CommNet: 1) CommNet pass back the sum of all agents’ hidden state directly to each agent at the next time step (this direct pass is what we meant by handcrafted information); whereas our applied an LSTM to process such information from time to time and therefore formulates an independently thinking master agent; 2) when passing back the processed information, we applied a GCM unit to effectively merge information/“thoughts” from both the master agent and the slave agents. Both independent thinking LSTM and the GCM merging module are learnable and their effectiveness has been shown when compared to the CommNet model. We highly recognize CommNet and our proposal was directly motivated from CommNet. We have refined some of the statement to avoid misleading.\n\n* This seems to suggest that the only thing that the master can communicate is action information? It this the case?\n\nThis is not the case. The master will pass along its hidden states to the GCM module which will process such information and that from the slave agents to finally output actions.\n\n* What are the standard errors?\n\nStandard errors are provided in the updated tables.\n\n* The section 3.2 explains standard things (policy gradient), but the details are a bit unclear. In particular, I do not see how the Gaussian/softmax layers are integrated; they do not seem to appear in figure 4?\n\nThe Gaussian/softmax layers are actually after the hidden state output of the GCM module (to generate master's action) and each slave agent's RNN module (to generate slave's action).\n\n* I cannot understand figure 7 without more explanation. (The background is all black - did something go wrong with the pdf?)\n\nThanks for your reminding. The black background is due to the command line output of the combat task environment. As a simple game environment based on MazeBase (Sukhbaatar et al. 2015), the visualization of this task is printed to command line as it was originally implemented. Thus the two examples are obtained by directly cropping command line output. The yellow and green dashes/arrows are added by us to illusrate the behaviours of agents. We will try to replace this demonstration to a more reader-friendly version in the camera-ready version if accepted.\n\nAnd the details suggestions are very helpful, we fixed them in the updated version. Thanks.", "The overall problem statement of the MARL problem we consider here should be formulated as an MDP (supposed we do not take “fog of war” etc. into account). \n\nAlthough each agent can only observe the environment partially, when considering the global optimal planning, we should view these problems from a centralized perspective and formulate it as one global MDP where the action of such MDP is defined as the concatenation of all agents’ actions. However since such an action space can be very large when the number of agents increases, therefore people tend to apply decentralized perspective, where each agent can be viewed as having its own MDP and than try to implement certain communication protocols between agents to facilitate collaborations in-between. 
In this regard, our work proposed to apply a master-slave communication architecture and to realize such architecture with learnable neural networks so that the final communication protocols are learned from many times of RL try and errors in the same way as the parameters of any RL models.\n\nTechnically speaking, we have no assumption of synchronous and noise-free communication. Regarding communication, as mentioned above, since the final protocols will be driven by training data, we almost do not have any assumptions except for pre-defined the certain hyper-parameters such as the dimensionality etc. As for synchronousness, our model does not make such an assumption, e.g. the master agent can take empty or zero values if any of the slave agents have no output at certain time steps. \n\nRegarding advances to RL research, we provide a novel practical MARL solution which encodes the idea of master-slave communication architecture. There are currently few theoretical results in this paper, especially on properties of the global MDP statements. However, motivated from the concept of combining both the centralized and decentralized perspectives of MARL, we are working on another draft explaining how effective communications between agents helps finding better optima of the global MDP. \n\nAs for implicit global state representations, this is a very good suggestion, we have added some analysis where we tried to visualize and understand the hidden state of the master agent. More details can be found in sec 4.5.", "We appreciate the positive comments and address the concerns as follows. \n\nRegarding experimental evaluation, thanks to the suggestion, we have added uncertainty results aka std in the latest tables. The training was carried out on multiple seeds and we chose the best 5 runs and computed the statistics (which is somehow standard since training of RL is not always stable). \n\nThe suggestion to compare with more centralized baselines is enlightening, we manage to realized a simple centralized baseline based on one modification from our own model. More detailed results are provided in the newly section 4.4. In a nutshell, this method performs rather poorly probably due to scalability issues, but we agree there are still room to explore in this regard. Regarding the performance of GMEZO, we have originally reported the reproduced results from the BicNet paper for consistency considerations, in the updated version we clarify this point and have now reported the highest ones from both. Still ours are consistently superior.\n\nIt is slightly subjective when discussing how novel a proposal is. What we can confirm is that we are the first to explicitly explore the master-slave communication architecture for MARL and effectively show its superiority to existing communication proposals such as that in BicNet and CommNet. The proposed GCM is a novel communication processor. Although the initial experimental results didn’t strongly justify its advantages, we have tried our more challenging settings such as heterogeneous starcraft tasks etc. We included one such case and related analysis in the updated version. We agree and has never denied that master-salve and LSTM cells are existing wisdoms, our major contribution is the novel and effective instantiation in the MARL settings, out of which we reported very promising performance on challenging tasks such as starcraft micromanagements.\n\nAll detailed comments are very helpful, we have properly addressed them in the updated version. " ]
[ -1, -1, -1, 5, 5, 4, -1, -1, -1 ]
[ -1, -1, -1, 5, 4, 3, -1, -1, -1 ]
[ "iclr_2018_B1KFAGWAZ", "S141qTNVG", "H1aoqUTmz", "iclr_2018_B1KFAGWAZ", "iclr_2018_B1KFAGWAZ", "iclr_2018_B1KFAGWAZ", "HyuecsNxf", "HynlSGDxM", "B1qTwXcgz" ]
iclr_2018_S1q_Cz-Cb
Training Neural Machines with Partial Traces
We present a novel approach for training neural abstract architectures which incorporates (partial) supervision over the machine’s interpretable components. To cleanly capture the set of neural architectures to which our method applies, we introduce the concept of a differential neural computational machine (∂NCM) and show that several existing architectures (e.g., NTMs, NRAMs) can be instantiated as a ∂NCM and can thus benefit from any amount of additional supervision over their interpretable components. Based on our method, we performed a detailed experimental evaluation with both the NTM and NRAM architectures, and showed that the approach leads to significantly better convergence and generalization capabilities of the learning phase than when training using only input-output examples.
rejected-papers
While the reviewers considered the basic idea of adding intermediate supervision to differentiable-programming-style architectures to be interesting and worthy of effort, they were unsure whether (1) the proposed abstractions for discussing NTM and NRAM are well motivated or more generally applicable, and (2) the methods used in this work to give intermediate supervision are more generally applicable.
train
[ "SkhI_TPez", "rkRHKcFgG", "SJdsxW5xG", "r1f5rvp7z", "HJaUrD6Qz", "Sy9-HwpQf", "SJ8n4P6Qf", "Skp04D6QG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author" ]
[ "The authors introduce the general concept of a differential neural computational machine, dNCM. It can apply to any fully differentiable neural programming machine, such as the Neural Turing Machine (NTM) or NRAM or the Neural GPU, but not to non-fully-differentiable architecture such as NPI. The author show how partial traces can be used to improve training of any dNCM with results on instantiations of dNCM for NTM and NRAM.\n\nOn the positive side, the paper is well-written (though too many results require looking into the Appendix) and dNCM is elegant. Also, while it's hard to call the idea of using partial traces original, it's not been studied in this extent and setting before. On the negative side, the authors have chosen weak baselines and too few and easy tasks to be sure if their results will actually hold in general. For example, for NTM the authors consider only 5 tasks, such as Copy, RepeatCopyTwice, Flip3rd and so on (Appendix E) and define \"a model to generalize if relative to the training size limit n, it achieves perfect accuracy on all of tests of size ≤ 1.5n and perfect accuracy on 90% of the tests of size ≤ 2n\". While the use of subtraces here shows improvements, it is not convincing since other architectures, e.g., the Improved Neural GPU (https://arxiv.org/abs/1702.08727), would achieve 100% on this score without any need for subtraces or hints. The tasks for the NRAM are more demanding, but the results are also more mixed. For one, it is worrysome that the baseline has >90% error on each task (Appendix J, Figure 12) and that Merge even with full traces has still almost 80% errors. Neural programmers are notoriously hard to tune, so it is hard to be sure if this difference could be eliminated with more tuning effort. In conclusion, while we find this paper valuable, to be good enough for acceptance it should be improved with more experimentation, adding baselines like the (Improved) Neural GPU and more tasks and runs.", "Summary\n\nThis paper presents differentiable Neural Computational Machines (∂NCM), an abstraction of existing neural abstract machines such as Neural Turing Machines (NTMs) and Neural Random Access Machines (NRAMs). Using this abstraction, the paper proposes loss terms for incorporating supervision on execution traces. Adding supervision on execution traces in ∂NCM improves performance over NTM and NRAM which are trained end-to-end from input/output examples only. The observation that adding additional forms of supervision through execution traces improves generalization may be unsurprising, but from what I understand the main contribution of this paper lies in the abstraction of existing neural abstract machines to ∂NCM. However, this abstraction does not seem to be particularly useful for defining additional losses based on trace information. Despite the generic subtrace loss (Eq 8), there is no shared interface between ∂NCM versions of NTM and NRAM that would allow one to reuse the same subtrace loss in both cases. The different subtrace losses used for NTM and NRAM (Eq 9-11) require detailed knowledge of the underlying components of NTM and NRAM (write vector, tape, register etc.), which questions the value of ∂NCM as an abstraction.\n\nWeaknesses\n\nAs explained in the summary, it is not clear to me why the abstraction to NCM is useful if one still needs to define specific subtrace losses for different neural abstract machines.\nThe approach seems to be very susceptible to the weight of the subtrace loss λ, at least when training NTMs. 
In my understanding each of the trace supervision information (hints, e.g. the ones listed in Appendix F) provides a sensible inductive bias we would the NTM to incorporate. Are there instances where these biases are noisy, and if not, could we incorporate all of them at the same time despite the susceptibility w.r.t λ?\nNTMs and other recent neural abstract machines are often tested on rather toyish algorithmic tasks. I have the impression providing extra supervision in form of execution traces makes these tasks even more toyish. For instance, when providing input-output examples as well as the auxiliary loss in Eq6, what exactly is left to learn? What I like about Neural-Programmer Interpreters and Neural Programmer [1] is that they are tested on less toyish tasks (a computer vision and a question answering task respectively), and I believe the presented method would be more convincing for a more realistic downstream task where hints are noisy (as mentioned on page 5).\n\nMinor Comments\n\np1: Why is Grefenstette et al. (2015) an extension of NTMs or NRAMs? While they took inspiration from NTMs, their Neural Stack has not much resemblance with this architecture.\np2: What is B exactly? It would be good to give a concrete example at this point. I have the feeling it might even be better to explain NCMs in terms of the communication between κ, π and M first, so starting with what I, O, C, B, Q are before explaining what κ and π are (this is done well for NTM as ∂NCM in the table on page 4). In addition, I think it might be better to explain the Controller before the Processor. Furthermore, Figure 2a should be referenced in the text here.\np4 Eq3: There are two things confusing in these equations. First, w is used as the write vector here, whereas on page 3 this is a weight of the neural network. Secondly, π and κ are defined on page 2 as having an element from W as first argument, which are suddenly omitted on page 4.\np4: The table for NRAM as ∂NCM needs a bit more explanation. Where does {1}=I come from? This is not obvious from Appendix B either.\np3 Fig2/p4 Eq4: Related to the concern regarding the usefulness of the ∂NCM abstraction: While I see how NTMs fit into the NCM abstraction, this is not obvious at all for NRAMs, particularly since in Fig 2c modules are introduced that do not follow the color scheme of κ and π in Fig 2a (ct, at, bt and the registers).\np5: There is related work for incorporating trace supervision into a neural abstract machine that is otherwise trained end-to-end from input-output examples [2].\np5: \"loss on example of difficulties\" -> \"loss on examples of the same difficulty\"\np5: Do you have an example for a task and hints from a noisy source?\nCitation style: sometimes citation should be in brackets, for example \"(Graves et al. 2016)\" instead of \"Graves et al. (2016)\" in the first paragraph of the introduction.\n\n[1] Neelakantan et al. Neural programmer: Inducing latent programs with gradient descent. ICLR. 2015. \n[2] Bosnjak et al. Programming with a Differentiable Forth Interpreter. ICML. 2017.", "Much of the work on neural computation has focused on learning from input/output samples. This paper is a study of the effect of adding additional supervision to this process through the use of loss terms which encourage the interpretable parts of the architecture to follow certain expected patterns.\n\nThe paper focuses on two topics:\n1. 
Developing a general formalism for neural computers which includes both the Neural Turing Machine (NTM) and the Neural Random Access Machine (NRAM), as well as a model for providing partial supervision to this general architecture.\n\n2. An experimental study of providing various types of additional supervision to both the NTM and the NRAM architecture.\n\nI found quite compelling the idea of exploring the use of additional supervision in neural architectures since oftentimes a user will know more about the problem at hand than just input-output examples. However, the paper is focused on very low-level forms of additional supervision, which requires the user to deeply understand the neural architecture as well as the way in which a given algorithm might be implemented on this architecture. So practically speaking I don't think it's reasonable to assume that users would actually provide additional supervision in this form.\n\nThis would have been fine, if the experimental results provided some insights into how to extend and/or improve existing architectures. Unfortunately, the results were simply a very straight-forward presentation of a lot of numbers, and so I was unable to draw any useful insights. I would have liked the paper to have been more clear about the insights provided by each of the tables/graphs. In general we can see that providing additional supervision improves the results, but this is not so surprising.\n\nFinally, much of the body of the paper is focused on topic (1) from above, but I did not feel as though this part of the paper was well motivated, and it was not made clear what insights arose from this generalization. I would have liked the paper to make clear up front the insights created by the generalization, along with an intuitive explanation. Instead much of the paper is dedicated to introduction of extensive notation, with little clear benefit. The notation did help make clear the later discussion of the experiments, but it was not clear to me that it was required in order to explain the experimental results.\n\nSo in summary I think the general premise of the paper is interesting, but in it's current state I feel like the paper does not have enough sufficiently clear insights for acceptance.\n", "\n\n→ More tasks and runs are required.\n\nWe performed additional experiments with noise in the supervision and also with other architectures (NGPU) showing benefits. We updated the paper with these results.", "\n\n→ What is left to learn if I/O examples and auxiliary loss (Eq 6) are already provided?\n\nEven for full supervision, the actual values in the registers are not provided, and the computation halting time step is not provided.\n\n→ The approach would be more impressive if it was trained on less toyish tasks.\n\nWe agree that more complex tasks would be even better. However, even the current tasks (also studied by NRAM and dForth) are challenging and show the benefits of our supervision method. \n\n→ It is unclear how NRAMs fit into ∂NCMs.\n\nThe NRAM factorizes into a neural controller that communicates in rounds with a non-neural circuitry as pictured in Figure 2. This is the factorization needed by a ∂NCM, and the particular instantiation is given in equation (4). We will update the text to elaborate a little more here.", "\n→ It is unclear what the experimental insights of the paper are.\n\nWe believe a key result is that providing relatively simple hints on the interpretable part of the architecture leads to significantly improved results. 
This is particularly important with neural programmers which are notoriously difficult to tune (as pointed by Reviewer 3). To a degree, ability to provide additional supervision eliminates some of the complex and time consuming tuning process and makes the architecture more robust. We also have demonstrated that the supervision with NRAM can lead to better results than state-of-the-art architectures such as nGPU.", "\n\n→ Supervision requires the user to know the architecture and how the algorithm would be implemented on that architecture. The generalization is not well motivated and its insights are unclear. Why is the ∂NCM abstraction useful if supervision requires detailed knowledge of the architecture?\n\nTo provide extra supervision, the user needs to be aware of the general architecture, but they need only know in detail what the interpretable portion is. We demonstrate that this is reasonable trade-off as a little bit of extra knowledge can enable substantially better results. Architectures are often heavily tuned to particular classes of tasks, which already requires deep knowledge of the machinery. \n\nWe see ∂NCM as a mechanism for explaining our form of supervision and to which architectures it applies and how supervision works. It is not meant as a general purpose model for specifying only supervision at the ∂NCM level, without being aware of the underlying architecture. Thus, we believe the ∂NCM abstraction is useful for understanding purposes.\n\n→ How does your work compare to Differential Forth (dForth) (ICML’17) ? This work already provides a form of supervision.\n\nThe main conceptual difference between our work and dForth is the kind of supervision provided. In dForth, the teacher provides a static sketch program and leaves only the holes in the sketch to be learned. In our work, there is no sketch with holes: the teacher provides hints (soft constraints) for some time steps as a sequence of instructions to be executed. The controller then has to learn to issue the correct sequence at every time step.\n\nBoth are interesting and different ways of providing supervision and both have been studied in traditional non-neural programming by example synthesis, referred to as sketch vs. trace-based supervision.\n\n→ Why your NRAM baseline results do not match the NRAM results in the original paper?\n\nGenerally, our metric of success is harder to satisfy. In more detail:\n\n(i) Our metric of success tolerates less errors than the metric in the NRAM paper, meaning they can report success where we report failure. We believe our metric is a more natural one (see below).\n\nThe NRAM paper says: “The final error for a test case is computed as c / m, where c is the number of correctly written cells, and m represents the total number of cells that should be modified”. While not explained in that paper, a conservative interpretation is that cells that should be modified are specified by the task. Merge, for example, could specify the last half of the input string as to be modified. Errors in spots that shouldn’t be modified are not counted.\n\nOn the other hand, in our paper, we consider correctness of the entire string at once. We do not tolerate a single wrong entry. \n\n(ii) We show graphs of the error up to examples of size 50 while their graphs are for samples of size 30. 
Further, points on the horizontal axis in their graph represent not just the generalization on examples of that difficulty, but the average generalization of examples of up to that difficulty, computed by averaging an example of uniformly random difficulty of at most that point.\n\n(iii) our graphs represent the average error over multiple trainings, while it is not specified if theirs are the result of multiple trainings or just one. \n\n→ Why are subtraces useful if the improved Neural GPU (NGPU) can solve some of this tasks without supervision?\n\nGood point. Based on the reviewer’s suggestion we investigated this question, and updated the Appendix to include the results for the harder Merge task (see Figure 18). It turns out that while the NGPU can provide better baseline results than NRAM without supervision, the NRAM with supervision can have substantially better results than NGPU. Such results are possible only because the NRAM is more interpretable than the NGPU, allowing extra supervision and thus, better results.\n\nFor the simpler tasks, the NGPU sometimes generalizes perfectly (flip-3rd, repeat-flip-third, increment, swap), but often it generalizes worse (permute, list-k, dyck-words) than our supervision method.\n\n→ How do you provide supervision with NRAM exactly?\n\nWe added an Appendix describing the NRAM programming language used to provide supervision and also provided examples of supervision at different time steps in that language to be executed at time steps determined from a pre-condition on the state of the memory.\n\n→ The approach would be more convincing if it considered noisy hints. \n\nBased on the reviewers’ suggestion we now updated the paper with an experiment which shows that the presence of noisy and corrupted hints still significantly outperforms unsupervised training in the case of the increment task. The line “NoisyFull” in Figure 14 demonstrates that even with corrupted hints, supervision substantially helps training. This experiment is explained in more detail in “Robustness to Noise” in Section 5.\n", "We thank the reviewers for their comments. We addressed many of the major comments by performing additional experiments, and updated the Appendix in the paper to reflect that:\n\n- A comparison to the state-of-the-art Neural GPU (NGPU) showing NRAM with trace supervision can produce better results than NGPU. \n\n- Additional experiments with noise in the supervision, showing that even with noise, we can produce better results than the no supervision NRAM baseline.\n\n- A description of the supervision language for NRAM together with supervision examples.\n" ]
[ 4, 5, 4, -1, -1, -1, -1, -1 ]
[ 5, 4, 3, -1, -1, -1, -1, -1 ]
[ "iclr_2018_S1q_Cz-Cb", "iclr_2018_S1q_Cz-Cb", "iclr_2018_S1q_Cz-Cb", "SkhI_TPez", "rkRHKcFgG", "SJdsxW5xG", "iclr_2018_S1q_Cz-Cb", "iclr_2018_S1q_Cz-Cb" ]
iclr_2018_HkMCybx0-
Improving Deep Learning by Inverse Square Root Linear Units (ISRLUs)
We introduce the “inverse square root linear unit” (ISRLU) to speed up learning in deep neural networks. ISRLU has better performance than ELU but has many of the same benefits. ISRLU and ELU have similar curves and characteristics. Both have negative values, allowing them to push mean unit activation closer to zero, and bring the normal gradient closer to the unit natural gradient, ensuring a noise-robust deactivation state, lessening the overfitting risk. The significant performance advantage of ISRLU on traditional CPUs also carries over to more efficient HW implementations on HW/SW codesign for CNNs/RNNs. In experiments with TensorFlow, ISRLU leads to faster learning and better generalization than ReLU on CNNs. This work also suggests a computationally efficient variant called the “inverse square root unit” (ISRU) which can be used for RNNs. Many RNNs use either long short-term memory (LSTM) or gated recurrent units (GRU), which are implemented with tanh and sigmoid activation functions. ISRU has less computational complexity but still has a similar curve to tanh and sigmoid.
rejected-papers
The authors introduce a new activation function which is similar in shape to ELU, but is faster to compute. The reviewers consider this to not be a significant innovation because the amount of time spent in computing the activation function is small compared to other neural network operations.
test
[ "rk3rjfYgG", "rkDaCCtlz", "SJNibmcgz", "rk5N6Gm-M", "HJ4fWYDC-", "rJ8gr8kyM", "B1zM2bHAb" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "public" ]
[ "This paper introduces a new nonlinear activation function for neural networks, i.e., Inverse Square Root Linear Units (ISRLU). Experiments show that ISRLU is promising compared to competitors like ReLU and ELU.\n\nPros:\n(1) The paper is clearly written.\n\n(2) The proposed ISRLU function has similar curves with ELU and has a learnable parameter \\alpha (although only fixed value is used in the experiments) to control the negative saturation zone. \n\nCons:\n(1) Authors claim that ISRLU is faster than ELU, while still achieves ELU’s performance. However, they only show the reduction of computation complexity for convolution, and speed comparison between ReLU, ISRLU and ELU on high-end CPU. As far as I know, even though modern CNNs have reduced convolution’s computation complexity, the computation cost of activation function is still only a very small part (less than 1%) in the overall running time of training/inference. \n\n(2) Authors only experimented with two very simple CNN architectures and with three different nonlinear activation functions, i.e., ISRLU/ELU/ReLU and showed their accuracies on MNIST. They did not provide the comparison of running time which I believe is important here as the efficiency is emphasized a lot throughout the paper.\n\n(3) For ISRLU of CNN, experiments on larger scale dataset such as CIFAR or ImageNet would be more convincing. Moreover, authors also propose ISRU which is similar to tanh for RNN, but do not provide any experimental results.\n\nOverall, I think the current version of the paper is not ready for ICLR conference. As I suggested above, authors need more experiments to show the effectiveness of their approach.\n", "\nSummary:\n- The paper proposes a new activation function that looks similar to ELU but much cheaper by using the inverse square root function.\n\nContributions:\n- The paper proposes a cheaper activation and validates it with an MNIST experiment. The paper also shows major speedup compared to ELU and TANH (unit-wise speedup).\n\nPros:\n- The proposed function has similar behavior as ELU but 4x cheaper.\n- The authors also refer us to faster ways to compute square root functions numerically, which can be of general interests to the community for efficient network designs in the future.\n- The paper is clearly written and key contributions are well present.\n\nCons:\n- Clearly, the proposed function is not faster than ReLU. In the introduction, the authors explain the motivation that ReLU needs centered activation (such as BN). But the authors also need to justify that ISRLU (or ELU) doesn’t need BN. In fact, in a recent study of ELU-ResNet (Shah et al., 2016) finds that ELU without BN leads to gradient explosion. To my knowledge, BN (at least in training time) is much more expensive than the activation function itself, so the speedup get from ISRLU may be killed by using BN in deeper networks on larger benchmarks. At inference time, all of ReLU, ELU, and ISRLU can fuse BN weights into convolution weights, so again ISRLU will not be faster than ReLU. The core question here is, whether the smoothness and centered zero property of ELU can buy us any win, compared to ReLU? I couldn’t find it based on the results presented here.\n- The authors need to validate on larger datasets (e.g. CIFAR, if not ImageNet) so that their proposed methods can be widely adopted.\n- The speedup is only measured on CPU. 
For practical usage, especially in computer vision, GPU speedup is needed to show an impact.\n\nConclusion:\n- Based on the comments above, I recommend weak reject.\n\nReferences:\n- Shah, A., Shinde, S., Kadam, E., Shah, H., Shingade, S.. Deep Residual Networks with Exponential Linear Unit. In Proceedings of the Third International Symposium on Computer Vision and the Internet (VisionNet'16).", "Summary:\nThe contribution of this paper is an alternative activation function which is faster to compute than the Exponential Linear Unit, yet has similar characteristics.\nThe paper first presents the mathematical form of the proposed activation function (ISRLU), and then shows the similarities to ELU graphically. It then argues that speeding up the activation function may be important since the convolution operations in CNNs are becoming heavily optimized and may form a lesser fraction of the overall computation. The ISRLU is then reported to be 2.6x faster compared to ELU using AVX2 instructions. The possibility of computing a faster approximation of ISRLU is also mentioned.\nPreliminary experimental results are reported which demonstrate that ISRLU can perform similar to ELU.\n\nQuality and significance:\nThe paper proposes an interesting direction for optimizing the computational cost of training and inference using neural networks. However, on one hand the contribution is rather narrow, and on the other the results presented do not clearly show that the contribution is of significance in practice.\nThe paper does not present clear benchmarks showing a) what is the fraction of CPU cycles spent in evaluating the activation function in any reasonably practical neural network, b) and what is the percentage of cycles saved by employing the ISRLU.\nThe presented results using small networks on the MNIST dataset only show that networks with ISRLU can perform similar to those with other activation functions, but not the speed advantages of ISRLU.\nThe effect of using the faster approximation on performance also remains to be investigated.\n\nClarity:\nThe content of the paper is unclear in certain areas.\n- It is not clear what Table 2 is showing. What is \"performance\" measured in? In general the Table captions need to be clearer and more descriptive. The acronym pkeep in later Tables should be clarified.\n- Why is the final Cross-Entropy Loss so high even though the accuracy is >99% for the MNIST experiments? It looks like the loss at initialization was reported instead?", "Many thanks for your comments & observations on our paper.\n\nWe referenced the Shah et al \"Deep Residual Networks with Exponential Linear Unit\" paper in our paper.\n\nYou mention, \"ELU without BN leads to gradient explosion\" But the paper you referenced seems to state they use ELU _without_ batch normalization (BN) and compared it favorably to ReLU+BN.\n\nShah et al in the intro: \"In this paper, we propose the use of exponential linear unit instead of the combination of ReLU and Batch Normalization in Residual Networks. We show that this not only speeds up learning in Residual Networks but also improves the accuracy as the depth increases.\"\n\nWe're a bit confused about your statement... can you clarify?\n\nBTW, all of our experiments with Mnist didn't use BN. In addition we are finishing up what look like favorable results for ISRLU (without BN) on CIFAR, GANs, and CapsuleNets. 
We would like to add these experiments to our paper to broaden our test cases.", "Yes, we have seen this method that shares the ideas of the \"K method\" that were decades years ago for inverse square root. These can improve performance for Exp. The implementations you pointed at were 64-bit approximations. While most DNN is done in 32-bit, 16-bit, etc. They took a few other optimization that you pointed to such as no bounds checking (which is probably ok for a well-designed AI implementation. Also these were scalar implementations) and they not vector implementations which depending on the hardware may bring up issues.\n\nAnother way to get faster intrinsic performance is various lower-precision implementations. In fact. we've even coded up some of our low lower-precisions intrinsics ourselves. For some background we'd suggest the classic: J.Hart, E.W. Cheney, et al, Computer Approximations, Publisher: Krieger Pub Co (July 1, 1978), ISBN-10: 0882756427 Side note: you should \"reshoot\" your own Chebyshev coefficients using Remez (not using the co-efficents in the book)\n\nBut the bottom line is that inverse-square root, as mentioned in the paper has been faster than exp. \n\nWith an approximation you can then basically double the number of bits of accuracy with each Newton-Raphson iteration, which is easy to vectorize and does not increase the flop count too much. \n\nThe other point we made in the paper is that ISRLU has a very natural way to introduce an alpha that is smooth and continuous for 1st/2nd derivatives. Yes there are other ways of introducing alpha into ELU functions (https://arxiv.org/abs/1704.07483) but we still find ISRLU more natural.\n\nThanks for you comment, we hope that ISRLU & ISRU be considered especially for purpose-built AI DNN hardware.", "With our experience on a wide variety of architectures and implementations of instrinsics, if inverse square root is not faster than exp one should look closely at the hardware/software implementation as this is a clue that inverse square root can be better implemented. ", "Schraudolph introduced [1,2] and Cawley later revised [3] the \"Fast, and compact approximation of the exponential function\".\nI'm wondering how the speed comparisons would change if the exponentials in the sigmoid and ELU activations are replaced accordingly. \n\n[1] http://www.mitpressjournals.org/doi/10.1162/089976699300016467\n[2] https://nic.schraudolph.org/pubs/Schraudolph99.pdf\n[3] http://www.mitpressjournals.org/doi/abs/10.1162/089976600300015033\n[4} https://martin.ankerl.com/2007/02/11/optimized-exponential-functions-for-java/" ]
[ 4, 5, 3, -1, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, -1 ]
[ "iclr_2018_HkMCybx0-", "iclr_2018_HkMCybx0-", "iclr_2018_HkMCybx0-", "rkDaCCtlz", "B1zM2bHAb", "HJ4fWYDC-", "iclr_2018_HkMCybx0-" ]
iclr_2018_SJxE3jlA-
Now I Remember! Episodic Memory For Reinforcement Learning
Humans rely on episodic memory constantly, in remembering the name of someone they met 10 minutes ago, the plot of a movie as it unfolds, or where they parked the car. Endowing reinforcement learning agents with episodic memory is a key step on the path toward replicating human-like general intelligence. We analyze why standard RL agents lack episodic memory today, and why existing RL tasks don't require it. We design a new form of external memory called Masked Experience Memory, or MEM, modeled after key features of human episodic memory. To evaluate episodic memory we define an RL task based on the common children's game of Concentration. We find that a MEM RL agent leverages episodic memory effectively to master Concentration, unlike the baseline agents we tested.
rejected-papers
The authors show evidence that an RL agent with a new neural architecture with an external memory is superior on a version of the concentration game to a baseline. However, other works have proposed neural architectures with episodic memories, and the reviewers feel that the proposed model was not adequately compared to these. Furthermore, there are concerns about the novelty of the proposed model.
train
[ "r1gpVv_gM", "SyG_vjueM", "r1o2Xfcgf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "There are a number of attempts to add episodic memory to RL agents. A common approach is to use some sort of recurrent model with a model-free agent. This work follows this approach using what could be considered a memory network with a identity embedding function and tests on 'Concentration', a game which requires matching pairs of cards. They find their model outperforms a DNC and LSTM baselines.\n\nThe primary novelty is the use of an explicitly masked similarity function (with learned mask) and the concentration task, which requires more memory than, for example, common tasks adapted from the psychology literature such as the Morris watermaze or T-maze (although in the supervised setting tasks such as Omniglot are quite similar).\n\nThis work is well-communicated and cites relevant prior work. The author's should also be commended for agreeing to release their code on publication.\n\nThe primary weakness of this work its lack of novelty and lack of evidence of generalization of the approach, which limits its significance. The model introduced is a slight variant of memory networks. Additionally, the single task the model is tested on appears custom-designed to favor the model (see next paragraph). While the analysis of the weakness of cosine similarity is interesting, memory networks which compute separate embeddings for the 'label' (content-based label for retrieval) and memory content don't appear to suffer from the same issue as the DNC. They can store only retrieval-relevant content in the label and thus avoid issues with normalization.\n\nThe observation vector is stored directly in memory without passing through an embedding function, which in general seems quite limiting. However, in the constructed task the labels are low-dimensional, random vectors and there is no noise in the labels (i.e. two cards with the same label are labelled identically, rather the similarly). The author's mention avoiding naturalistic labels such as omniglot characters (closer to the real version of concentration) due to the possibility the agent might memorise the finite set of labels, however by choosing a large dataset and using a non-overlapping set of examples for the test set this probably could be avoided and would provide a more naturalistic test set.\n\nThe comparison with the DNC also seems designed to favor their model. DNC has write-gates, which might be relevant in a task with many irrelevant observations, but in this task are clearly going to impair learning. A memory network seems the more appropriate comparison. Its not clear why the DNC model used two different DNCs for computing the policy and value.\n\nTo demonstrate their model is of more general interest it would be necessary to try on a wider range of more naturalistic tasks and a comparison with model-free agents augmented with memory networks. Simply showing that a customized model can outperform on a single custom, synthetic task is insufficient to demonstrate that these changes are of wider interest.\n\nMinor issues:\n- colorblind seems an odd description for agents which cannot perceive the card face. Why not just 'blind'? 
colorblind would seem to imply partial perception of the card face.\n\n- the observations of the environment are defined explicitly, but not the action space.", "The paper addresses an important problem of how ML systems can learn episodic memory.\nAuthors, first, criticize the existing approaches and benchmarks for episodic memory, arguing that the latter do not necessarily test episodic memory to the full extent of human-level intelligence.\nThen, a new external memory augmented network (MEM) is proposed which is similar in the spirit to content-based retrieval architectures such as DNC and memory networks, but allows to explicitly exclude certain dimensions of memory vectors from matching. \nAuthors evaluate the proposed MEM together with DNC and simple LSTM baselines on the game of Concentration where they find MEM to outperform concurrent approaches.\n\nUnfortunately, I did not find enough of novelty, clarity or at least rigorous and interesting experiments in the paper to recommend acceptance.\n\nDetailed comments:\n1) When a new architecture is proposed, it is good to describe in detail, at least in the appendix. Currently, it is introduced only implicitly and a reader should infer the details from fig. 2.\n2) It looks like the main difference between DNC and MEM is the way of addressing memories that allow explicit masking. If so, then to me this is a rather minor novelty and to justify it's importance authors should run a control experiment with the exact same architecture as in DNC, but with a masked similarity kernel. Besides that, an analysis of that is learned to be masked should be provided, how \"hard\" (i.e. strictly 0 and 1) are the masks, what influences them etc.\n3) While the game of concentration clearly requires episodic memory to some extent, this only task is not enough for testing EM approaches, because there is always a risk that one of the evaluated systems somehow overfitted to this task by design. Especially to reason about human-level intelligence we need a variety of tasks.\n4) To continue the previous point, humans would not perform well in the proposed task with random card labels, because it is very likely that familiar objects on cards help building associations and remembering them. Thus it is impossible to make a human baseline for this task and decide on how far are we below the human level. ", "# Summary\nThis paper proposes an external memory architecture for dealing with partial observability. The proposed method is similar to Memory Q-Network [Oh et al.], but the paper proposes a masked Euclidean distance as a similarity measure for content-based memory retrieval. The results on \"Concentration\" task show that the proposed method outperforms DNC and LSTM.\n\n[Pros]\n- Presents a new memory-related task.\n\n[Cons]\n- No comparison to proper baselines and existing methods.\n- Demonstrated in a single artificial task.\n\n# Novelty and Significance\nThe proposed external memory architecture is very similar to MQN [Oh et al.]. The proposed masked Euclidean distance for similarity measure is quite straightforward. More crucially, there is no proper comparison between the proposed masked Euclidean distance and cosine similarity (MQN). \n \n# Quality\n- The paper does not compare their method against proper baselines (e.g., MQN or the same memory architecture with cosine similarity). DNC is quite a different architecture that has flexible writing/erasing with complex addressing mechanisms. 
Comparison to DNC does not show the effect of the proposed idea (masked Euclidean distance). \n- The paper shows empirical results only on \"Concentration\" task, which is a bit artificial. In addition, the paper only shows a learning curve without any analysis of the learned model or qualitative results. \n\n# Clarity\n- Is the masked weight (w) a parameter or an activation of the network? \n- The description of concentration task is a bit lengthy. It would be better to move some details to the appendix.\n- I did not understand the paper's claim that \"no existing RL benchmark task could unambiguously evaluate episodic memory in RL agents\" and \"In contrast, an episodic memory task like Concentration presents many previously unseen observations which must be handled correctly without prior exposure\". In the Pattern Matching task from [Oh et al.], the agent is also required to compare two unseen visual patterns during evaluation. " ]
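Editor's note: several of the reviews above contrast cosine similarity (as in DNC/MQN) with the paper's masked similarity for content-based memory retrieval. The sketch below is only an illustration of what such a masked-distance read could look like next to a cosine-similarity read; the sigmoid mask parameterization, the temperature, and the softmax-over-negative-distance readout are assumptions, not the paper's exact MEM equations (which are not reproduced in these reviews).

# Illustrative sketch only: content-based memory reads with (a) a learned
# per-dimension mask over squared Euclidean distance and (b) cosine similarity.
import numpy as np

def softmax(z):
    z = z - z.max()          # numerical stability
    e = np.exp(z)
    return e / e.sum()

def masked_read(query, memory, mask_logits, temperature=1.0):
    # memory: (n_slots, d) stored observation vectors; query: (d,)
    mask = 1.0 / (1.0 + np.exp(-mask_logits))      # per-dimension weights in (0, 1)
    diffs = memory - query                          # (n_slots, d)
    dists = np.sum(mask * diffs * diffs, axis=1)    # masked squared Euclidean distance
    weights = softmax(-dists / temperature)         # attention over memory slots
    return weights @ memory                         # weighted retrieval, shape (d,)

def cosine_read(query, memory, eps=1e-8):
    sims = memory @ query / (np.linalg.norm(memory, axis=1) * np.linalg.norm(query) + eps)
    weights = softmax(sims)
    return weights @ memory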
[ 4, 4, 4 ]
[ 5, 4, 5 ]
[ "iclr_2018_SJxE3jlA-", "iclr_2018_SJxE3jlA-", "iclr_2018_SJxE3jlA-" ]
iclr_2018_Sk1NTfZAb
Key Protected Classification for GAN Attack Resilient Collaborative Learning
Large-scale publicly available datasets play a fundamental role in training deep learning models. However, large-scale datasets are difficult to collect in problems that involve processing of sensitive information. Collaborative learning techniques provide a privacy-preserving solution in such cases, by enabling training over a number of private datasets that are not shared by their owners. Existing collaborative learning techniques, combined with differential privacy, are shown to be resilient against a passive adversary which tries to infer the training data only from the model parameters. However, recently, it has been shown that the existing collaborative learning techniques are vulnerable to an active adversary that runs a GAN attack during the learning phase. In this work, we propose a novel key-based collaborative learning technique that is resilient against such GAN attacks. For this purpose, we present a collaborative learning formulation in which class scores are protected by class-specific keys, and therefore, prevents a GAN attack. We also show that very high dimensional class-specific keys can be utilized to improve robustness against attacks, without increasing the model complexity. Our experimental results on two popular datasets, MNIST and AT&T Olivetti Faces, demonstrate the effectiveness of the proposed technique against the GAN attack. To the best of our knowledge, the proposed approach is the first collaborative learning formulation that effectively tackles an active adversary, and, unlike model corruption or differential privacy formulations, our approach does not inherently feature a trade-off between model accuracy and data privacy.
rejected-papers
While the reviewers feel there might be some merit to this work, they find enough ambiguities and inaccuracies that I think this paper would be better served by a resubmission.
train
[ "r1qWlNtlM", "SJyQ2wqlf", "SkS7qR3-M", "SkMMwxszG", "ry0o8eofG", "BJMyLesGG", "SyOgBeizf", "SJ43M9ZMz", "S1ZKptezz", "S1xr_5abG", "Hyku30n-G", "SJ227bqbM", "B197fWcbf", "S1B5euOZz", "B1xjZwmbz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "public", "public", "public", "official_reviewer", "public", "public", "public", "public" ]
[ "This paper is a follow-up work to the CCS'2017 paper on the GAN-based attack on collaborative learning system where multiple users contribute their private and sensitive data to joint learning tasks. In order to avoid the potential risk of adversary's mimic based on information flow among distributed users, the authors propose to embed the class label into a multi-dimensional space, such that the joint learning is conducted over the embedding space without knowing the accurate representation of the classes. Under the assumption that the adversary can only generate fake and random class representations, they show their scheme is capable of hiding information from individual samples, especially over image data.\n\nThe paper is clearly written and easy to understand. The experiments show interesting results, which are particularly impressive with the face data. However, the reviewer feels the assumption on the adversary is generally too weak, such that slightly smarter adversary could circumvent the protection scheme and remain effective on sample recovery.\n\nBasically, instead of randomly guessing the representations of the classes from other innocent users, the adversary could apply GAN to learn the representation based on the feedback from these users. This can be easily done by including the representations in the embedding space in the parameters in GAN for learning.\n\nThis paper could be an interesting work, if the authors address such enhanced attacks from the adversary and present protection results over their existing experimental settings.", "Collaborative learning has been proposed as a way to learn over federated data while preserving privacy. However collaborative learning has been shown to be suscepti\nble to active attacks in which one of the participants uses a GAN to reveal information about another participant.\n\nThis paper proposes a collaborative learning framework (CLF) that mitigates the GAN attack. The framework involves using the neural net to learn a mapping of the inp\nut to a high-dimensional vector and computing the inner product of this vector to a random class-specific key (the final class prediction is the argmax of this inner product). The class-specific key can be chosen randomly by each participant. By choosing sufficiently long random keys, the probability of an attacker guessing the key can be reduced. Experiments on two datasets show that this scheme successfully avoids the GAN attack.\n \n1. Some of the details of key sharing are not clear and would appear to be important for the scheme to work. For example, if participants have instances associated with the same class, then they would need to share the key. This would require a central key distribution scheme which would then allow the attacker to also get access to the key.\n\n2. I would have liked to see how the method works with an increasing fraction of adversarial participants (I could only see experiments with one adversary). Similarly, I would have liked to see experiments with and without the fixed dense layer to see its contribution to effective learning. ", "In this paper, the authors proposed a counter measure to protect collaborative training of DNN against the GAN attack in (Hitaj et al. 2017). The motivation of the paper is clear and so is the literature review. But for me the algorithm is not clearly defined and it is difficult to evaluate how the proposed procedure works. I am not saying that this is not the solution. 
I am just saying that the paper is not clear enough to say that it is (or it is not). From my perspective this makes the paper a clear reject. \n\nI think the authors should explain a few things more clearly in order to make the paper foolproof. The first one seems to me the clearest problem with the approach proposed in the paper:\n\n1. $\\psi(c)$ defines the mapping from each class to a high-dimensional vector that allows protection against the GAN attack. $\\psi(c)$ is supposed to be private for each class (or user, if each class belongs only to one user). This is the key aspect of the paper. But if more than one user has the same class, they will need to share this key. Furthermore, at test time, these keys need to be known by everyone, because the output of the neural network needs to be correlated against all keys to see which is the true label. Of course the keys can only be released after the training is completed. But the adversary can also claim to have examples from the class it is trying to attack, and hence the legitimate user that generated the key will have to give the attacker the key from the training phase. For example, let us assume the legitimate user only has ones from MNIST and declares that it only has one class. The attacker says it has two classes: the same one as the legitimate user and some other label. In this case the legitimate user needs to share $\\psi(c)$ with the attacker. Of course this sounds “fishy” and might be a way of finding who the attacker is, but there might be many cases in which it makes sense that two or more users share the same labels, and in a big system it might be complicated to decide who has access to which key.\n\n2. I do not understand the definition of $\\phi(x)$. Is this embedding fixed for each user? Is this embedding the DNN? In Eq. 4 I would assume that $\\phi(x)$ is the DNN and that it should be $\\phi_\\theta(x)$, because otherwise the equation does not make sense. But this is not clearly explained in the paper, and Eq. 4 makes no sense at all. In a way the solution to the maximization in Eq. 4 is $\\theta=\\infty$. Also, the term $\\phi(x)$ is not mentioned in the paper after page 5. My take is that the authors want to maximize the inner product, but then the regularizer should go the other way around. \n\n3. On page 5 of the paper we can read: “Here, we emphasize the first reason why it is important to use l2-normalized class keys and embedding outputs: in this manner, the resulting classification score is by definition restricted to the range [-1; +1],” If I understand correctly, the authors are dividing the inner product by $\\|\\psi(c)\\| \\|\\phi(x)\\|$. I can see that we can easily divide by $\\|\\psi(c)\\|$, but I cannot see how we can divide by $\\|\\phi(x)\\|$ if this term depends on $\\theta$. If this term does not depend on $\\theta$, then Eq. 4 does not make sense.\n\nTo summarize, I have the impression that there are many elements in the paper that do not make sense in the way that they are explained, and that the authors need to present the paper in a way that can be easily understood and replicated. I recommend the authors to run the paper by someone in their circle who could help them rewrite it in a way that is more accessible. \n", "We address the problem of sharing samples for a common class in the revised version of the paper. We have added a new section (Section 5.5) where we discuss and empirically verify that participants may have training examples of overlapping classes without sharing their private keys. 
(Taken partially from our answer to AnonReviwer2.)\n\nThank you very much for pointing out the ambiguity in the formulation. It has been corrected now.\n\nSince \\phi_{\\theta}(.) is a deterministic mapping that outputs a vector, we just compute the L2 norm of the output vector, simply as a function of the output vector. ", "In our approach, we protect participants by hiding class scores from any other participant in CLF. For this purpose, we let participants to create private keys for its local training classes. Please note that private keys are completely randomly distributed, and participants do not share any information about their keys throughout training. (The revised paper, we believe, explains the procedure much more clearly.)\n\nTherefore, we do not see how a GAN attack without a guidance score or feedback signal can be executed to reconstruct the private class keys. \n\nWe will be more than happy to discuss if you can elaborate this objection.\n", "We address the problem of sharing samples for a common class, in the revised version of the paper. We have added a new section (Section 5.5) where we discuss and empirically verify that participants may have training examples of overlapping classes without sharing their private keys. \n\nWe have also added new attacking results for MNIST showing that there can be multiple attackers in CLF (indeed every participant can be an attacker) in Figure-6. For such cases, the GAN attacks still fail without damaging the learning process. The reconstructions show that generators trained by attackers can capture likelihood of data given the guessed key. However these likelihoods are far from data distributions of the handwritten digits. Which is the expected outcome of our methodology and reflects our success.\n\nFurthermore we speak of how we benefit from the fixed layer in Section 5.4. By using a fixed layer, we are able to control complexity of local models, which is crucial in preventing participants to overfit their local datasets in one epoch of local training.", "We thank for the interesting comments and suggestions in this thread. We have just published the comprehensively revised paper where we have removed all of the controversial arguments regarding differential privacy (DP), as suggested by the reviewers.\n \nOur paper, however, is not (directly) about DP: we show that our proposed approach allows privacy-preserving collaborative training without introducing DP or other techniques that corrupt model parameters / parameter updates with noise injection. More importantly, our CLF formulation is resilient against active GAN attacks (Hitaj et.al. 2017).\n \nIn more detail, there are two main reasons why we think our approach is of significance:\n \n(1) DP typically requires making a difficult trade-off decision between model accuracy and privacy. In particular, the privacy budget per parameter plots in Shokri et al. (2015) show that in order to reach an acceptable (90%) level of test-set accuracy on MNIST, one may need to use very high \"epsilon\" values (ie. very low noise), which may significantly reduce the effectiveness of DP in terms of privacy preservation. Our approach does not necessarily involve such a trade-off between privacy and accuracy (except that using excessively high-dimensional class keys may lead to issues during training).\n \n(2) Our approach prevents CLF against GAN attacks (Hitaj et al. 2017), which can be difficult to avoid using DP, without (significantly) sacrificing the classification accuracy. 
\n \nTherefore, in summary, what we propose is not built upon DP, instead, it can be seen as a new and alternative approach for privacy preserving collaborative training that builds upon participant-specific keys, as opposed to hiding information through mixing models updates/parameters with noise.", "The above statement in this paper is false or, at best, misleading. The fact that it is attributed to someone else, doesn't change that.", "But that's the point, OTHERS have used RSA with 16-bit keys and Hitaj et al. CCS'17 show this is ill-considered. It's an attack paper, no new scheme is proposed. It is reported that properly set DP will thwart these attacks (but at the cost of utility, see the conclusions).", "This submission makes a false statement. It is mathematically impossible to reconstruct training examples while satisfying differential privacy. That statement needs to be corrected. And it is relevant to the motivation for this work.\n\nI did not mean to start a debate about the Hitaj et al. paper. My comment is only about the false statement in this submission, which is justified by citing the Hitaj et al. paper.", "This reviewer does not have a problem with the paper under study, but believes that Hitaj et al. paper is wrong. \n\nMy take is that this review should be removed, because it is only concern with the validity of a already publish work and they should talk to CCS'17 committee about it. \n\nAlso, the code for Hitaj et al. 2017 is available if the reviewer thinks the parameters are incorrectly set, they should work with the code to show that the authors maliciously played with the parameters and publish a paper or a blog showing why it does not work. The blog link above does not do that. I think this is the best way to show that Hitaj et al. is not valid. But trashing other conferences with grievances is an old technique that some people use all too frequently and it is becoming really tiring. ", "Differential privacy is tangential to the work in this submission and the flaws of the Hitaj et al. paper should not be held against it. \n\nI am commenting because the quote about the related work needs to be clarified. Both Shokri & Smatikov and Hitaj et al. use differential privacy with extremely large parameters, which render it meaningless.", "The Hitaj et al. CCS'17 paper is misleading; only upon close scrutiny does one realize that, when they refer to differential privacy, they mean with crazy parameters.\n\nIt's analogous to claiming that RSA cryptography is broken and then only on page 3 clarifying that what you really mean is that RSA with 16-bit keys is susceptible to a brute force factoring attack. \n\nIn particular, the above quote from this submission does not clarify this issue. It says \"differential privacy fails to prevent the attack\" without providing details. This is on its face false, as the default interpretation is \"differential privacy with reasonable parameters.\" \n", "The CCS’17 (Hitaj et al.) paper mentions several times they don't \"break\" DP or use DP in any way, but they show that DP is inadequate when epsilon is large (as used and implemented by others) or at the record level. See throughout the paper (https://acmccs.github.io/papers/p603-hitajA.pdf) and the conclusions in particular. 
\n\nSo the blog misses several crucial points and this paper (\"Key Protected Classification… “) also provides clear evidence of the privacy risks of CLFs.", "This paper states (page 2, second paragraph):\n\nHowever, it has recently been shown that [collaborative learning frameworks (CLFs)] can be vulnerable to not only passive attacks, but also much more powerful active attacks, i.e., training-time attacks, for which the CLF with differential privacy fails to prevent the attack and there is no known prevention technique in general (Hitaj et al., 2017). More specifically, a training participant can construct a generative adversarial network (GAN) (Goodfellow et al., 2014) such that its GAN model learns to reconstruct training examples of one of the other participants over the training iterations. \n\nThis is given as the motivation for this work, but this statement is very flawed. Hitaj et al. do not \"break\" differential privacy. The problem is that they use differential privacy with extremely large parameter values, which yields a meaningless privacy guarantee.\n\nFrank McSherry has posted a detailed critique of the Hitaj et al. paper here:\n\nhttps://github.com/frankmcsherry/blog/blob/master/posts/2017-10-27.md" ]
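Editor's note: for readers following the exchange above about $\psi(c)$, $\phi(x)$ and the l2-normalized scores, here is a minimal sketch of the scoring scheme as the reviews describe it: an embedding vector is matched against per-class random keys by a normalized inner product (so scores lie in [-1, +1]) and the predicted class is the argmax. The key dimensionality, the key distribution, and the absence of any training loss here are placeholders for illustration; this is not the paper's exact Eq. 4.

# Illustrative sketch only: per-class random keys psi(c) and an embedding
# phi(x) scored by l2-normalized inner product. Key dimension and key
# distribution are assumptions, not the paper's specification.
import numpy as np

rng = np.random.default_rng(0)

def make_keys(num_classes, key_dim):
    keys = rng.standard_normal((num_classes, key_dim))        # private psi(c) per class
    return keys / np.linalg.norm(keys, axis=1, keepdims=True)  # l2-normalized keys

def class_scores(phi_x, keys, eps=1e-8):
    phi_x = phi_x / (np.linalg.norm(phi_x) + eps)  # l2-normalize the embedding phi_theta(x)
    return keys @ phi_x                             # cosine-style scores in [-1, +1]

def predict(phi_x, keys):
    return int(np.argmax(class_scores(phi_x, keys)))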
[ 4, 5, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 2, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_Sk1NTfZAb", "iclr_2018_Sk1NTfZAb", "iclr_2018_Sk1NTfZAb", "SkS7qR3-M", "r1qWlNtlM", "SJyQ2wqlf", "iclr_2018_Sk1NTfZAb", "S1ZKptezz", "B197fWcbf", "Hyku30n-G", "B1xjZwmbz", "B1xjZwmbz", "S1B5euOZz", "B1xjZwmbz", "iclr_2018_Sk1NTfZAb" ]
iclr_2018_Bk_fs6gA-
Long Term Memory Network for Combinatorial Optimization Problems
This paper introduces a framework for solving combinatorial optimization problems by learning from input-output examples of optimization problems. We introduce a new memory-augmented neural model in which the memory is not resettable (i.e., the information stored in the memory after processing an input example is kept for the next seen examples). We use deep reinforcement learning to train a memory controller agent to store useful memories. Our model was able to outperform a hand-crafted solver on Binary Linear Programming (Binary LP). The proposed model is tested on different Binary LP instances with a large number of variables (up to 1000 variables) and constraints (up to 700 constraints).
rejected-papers
The authors use a memory-augmented neural architecture to learn solve combinatorial optimization problems. The reviewers consider the approach worth studying, but find the authors' experimental protocol and baselines flawed.
train
[ "rJpWE-cgM", "ByEoHz5ez", "rJT9cfqlz", "Hy5pspL1f", "rJjGPGcAZ", "Skx-mCeAZ" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "public", "public", "public" ]
[ "Learning to solve combinatorial optimization problems using recurrent networks is a very interesting research topic. However, I had a very hard time understanding the paper. It certainly doesn’t help that I’m not familiar with the architectures the model is based on, nor with state-of-the-art integer programming solvers.\n\nThe architecture was described but not really motivated. The authors chose to study only random instances which are known to be bad representatives of real-world problmes, instead of picking a standard benchmark problem. Furthermore, the insights on how the network is actually solving the problems and how the proposed components contribute to the solution are minimal, if any.\n\nThe experimental issues (especially regarding the baseline) raised by the anonymous comments below were rather troubling; it’s a pity they were left unanswered.\n\nHopefully other expert reviewers will be able to provide constructive feedback.", "# Summary\nThis paper proposes a neural network framework for solving binary linear programs (Binary LP). The idea is to present a sequence of input-output examples to the network and train the network to remember input-output examples to solve a new example (binary LP). In order to store such information, the paper proposes an external memory with non-differentiable reading/writing operations. This network is trained through supervised learning for the output and reinforcement learning for discrete operations. The results show that the proposed network outperforms the baseline (handcrafted) solver and the seq-to-seq network baseline.\n\n[Pros]\n- The idea of approximating a binary linear program solver using neural network is new.\n\n[Cons]\n- The paper is not clearly written (e.g., problem statement, notations, architecture description). So, it is hard to understand the core idea of this paper.\n- The proposed method and problem setting are not well-justified. \n- The results are not very convincing.\n\n# Novelty and Significance\n- The problem considered in this paper is new, but it is unclear why the problem should be formulated in such a way. To my understanding, the network is given a set of input (problem) and output (solution) pairs and should predict the solution given a new problem. I do not see why this should be formulated as a \"sequential\" decision problem. Instead, we can just give access to all input/output examples (in a non-sequential way) and allow the network to predict the solution given the new input like Q&A tasks. This does not require any \"memory\" because all necessary information is available to the network.\n- The proposed method seems to require a set of input/output examples even during evaluation (if my understanding is correct), which has limited practical applications. \n\n# Quality\n- The proposed reward function for training the memory controller sounds a bit arbitrary. The entire problem is a supervised learning problem, and the memory controller is just a non-differentiable decision within the neural network. In this case, the reward function is usually defined as the sum of log-likelihood of the future predictions (see [Kelvin Xu et al.] for training hard-attention) because this matches the supervised learning objective. It would be good to justify (empirically) the proposed reward function. \n- The results are not fully-convincing. If my understanding is correct, the LTMN is trained to predict the baseline solver's output. But, the LTMN significantly outperforms the baseline solver even in the training set. 
Can you explain why this is possible?\n\n# Clarity\n- The problem statement and model description are not described well. \n1) Is the network given a sequence of program/solution input? If yes, is it given during evaluation as well?\n2) Many notations are not formally defined. What is the output (o_t) of the network? Is it the optimal solution (x_t)? \n3) There is no mathematical definition of the memory addressing mechanism used in this paper.\n- The overall objective function is missing. \n\n[Reference]\n- Kelvin Xu et al., Show, Attend and Tell: Neural Image Caption Generation with Visual Attention", "This paper proposes using long term memory to solve combinatorial optimization problems with binary variables. The authors do not exhibit much knowledge of the combinatorial optimization literature (as has been pointed out by other readers) and ignore a lot of previous work by the combinatorial optimization community. In particular, evaluating on random instances is not a good measure of performance, as has already been pointed out. The other issue is with the baseline solver, which also seems to be broken since their solution quality seems extremely poor. In light of these issues, I recommend reject.", "Looking forward to a response from the authors.\n\nAnother signal that the baseline is broken - with 80 and 150 variables, the average cost is zero, which means that for all of the 1000 problems at these two sizes, their baseline returned the trivial all-zeros solution.\n\nThe baseline is clearly broken (in addition to the many troubling concerns already pointed out in other comments).", "The authors propose to use a long-term memory network to solve combinatorial optimization problems, namely binary linear programs. \n\nUsing a long-term memory is definitely interesting, as it may capture a deeper understanding of the combinatorial problems at hand. This knowledge may in turn help improve the resolution process.\n\nThat being said, this work suffers from a seemingly clear lack of knowledge of optimization-related literature and practices.\n\n* As was mentioned in a previous comment, there is no mention of existing literature on combinatorial optimization and integer linear optimization. The claim that \"the application of neural networks to combinatorial optimization has a long and distinguished history\" is only supported by references to works that are 30 years apart (mid-80s and mid-2010s).\n\n* The authors state that (sec. 3, 1st paragraph) \"a naive linear solver constructs the set of feasible solutions, [...] then iterates over [it] using the cost function to find the optimal solution\". This wrongly suggests that linear solvers go through explicit enumeration, which is not the case. Cutting planes and branch-and-bound techniques should be mentioned, or at least referred to (any textbook on linear programming would have a chapter on this).\n\n* The experimental procedure (section 5) goes against most good practices from the OR community:\n - A set of randomly-generated instances is NOT representative of any real-life problem (see MIPlib for a well-known and broadly-used benchmark), and such instances are likely to be infeasible. This latter concern is not mentioned.\n - In the generated dataset, \"at most 33% of coefficients are non-zeros\". Practical instances are much more sparse (see MIPlib instances)\n - The baseline appears to be extremely poor, as was pointed out by a previous comment. 10 variables means at most 1024 solutions. Any solver that does not achieve optimality on such instances should not be considered as a baseline.\n - The performance of the algorithms is evaluated using the average cost as a metric (with no mention of variance in the results). This is not a good metric, for it is too sensitive to extreme values. Performance profiles (see Dolan and Moré 2002) are a more comprehensive tool for benchmarking optimization algorithms.\n\n\nAll in all, the lack of relevant literature and poor methodology raise serious concerns over the contribution of the proposed approach.", "If you're going to compare against combinatorial optimization problems, you should cite work on combinatorial optimization - this is an active area of research, and the Google paper was rejected last year for ignoring previous work and overstating contributions. \n\nIs the COIN-OR package anywhere close to SOTA? I'd expect the industrial solvers like CPLEX and Gurobi to be far better, and they have free academic licenses available. You're showing an improvement in 60% of cases with only ten variables in Figure 2, but there are only 2^10 = 1024 possible variable assignments, meaning it's possible to brute-force search through all the solutions and achieve the optimal value. The fact that your baseline solver doesn't do that suggests it's not a very strong baseline." ]
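Editor's note: the comments above rest on the observation that a 10-variable binary LP has only 2^10 = 1024 candidate assignments, so exhaustive enumeration already yields the optimum at that size. Below is a minimal sketch of such a brute-force check, useful mainly for validating a baseline on tiny instances; the Ax <= b constraint convention and the minimization objective are assumptions about the instance format, which the paper does not fully specify.

# Illustrative sketch only: exhaustive search over all binary assignments of a
# small binary LP:  min c^T x  s.t.  A x <= b,  x in {0,1}^n.  The <= convention
# and the minimization direction are assumed, not taken from the paper.
import itertools
import numpy as np

def brute_force_binary_lp(c, A, b):
    c, A, b = np.asarray(c), np.asarray(A), np.asarray(b)
    n = len(c)
    best_x, best_cost = None, np.inf
    for bits in itertools.product([0, 1], repeat=n):   # 2^n assignments; fine for n ~ 10
        x = np.array(bits)
        if np.all(A @ x <= b) and c @ x < best_cost:   # feasible and better than best so far
            best_x, best_cost = x, float(c @ x)
    return best_x, best_cost                            # (None, inf) if infeasible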
[ 4, 4, 3, -1, -1, -1 ]
[ 1, 2, 4, -1, -1, -1 ]
[ "iclr_2018_Bk_fs6gA-", "iclr_2018_Bk_fs6gA-", "iclr_2018_Bk_fs6gA-", "rJjGPGcAZ", "iclr_2018_Bk_fs6gA-", "iclr_2018_Bk_fs6gA-" ]
iclr_2018_r17Q6WWA-
Multi-Task Learning by Deep Collaboration and Application in Facial Landmark Detection
Convolutional neural networks (CNN) have become the most successful and popular approach in many vision-related domains. While CNNs are particularly well-suited for capturing a proper hierarchy of concepts from real-world images, they are limited to domains where data is abundant. Recent attempts have looked into mitigating this data scarcity problem by casting their original single-task problem into a new multi-task learning (MTL) problem. The main goal of this inductive transfer mechanism is to leverage domain-specific information from related tasks, in order to improve generalization on the main task. While recent results in the deep learning (DL) community have shown the promising potential of training task-specific CNNs in a soft parameter sharing framework, integrating the recent DL advances for improving knowledge sharing is still an open problem. In this paper, we propose the Deep Collaboration Network (DCNet), a novel approach for connecting task-specific CNNs in a MTL framework. We define connectivity in terms of two distinct non-linear transformation blocks. One aggregates task-specific features into global features, while the other merges back the global features with each task-specific network. Based on the observation that task relevance depends on depth, our transformation blocks use skip connections as suggested by residual network approaches, to more easily deactivate unrelated task-dependent features. To validate our approach, we employed facial landmark detection (FLD) datasets as they are readily amenable to MTL, given the number of tasks they include. Experimental results show that we can achieve up to 24.31% relative improvement in landmark failure rate over other state-of-the-art MTL approaches. We finally perform an ablation study showing that our approach effectively allows knowledge sharing, by leveraging domain-specific features at particular depths from tasks that we know are related.
rejected-papers
The experimental work in this paper leaves it just short of being suitable for acceptance. The work needs more comparisons with prior work and other approaches. The numerical ratings of the work by reviewers are just too low.
train
[ "Bylfi7tez", "ry0lbHclz", "SyuPmP3lM", "SytmUAi7M", "Hy-UWWjff", "HJDRBqufM", "ryXDrc_fG", "HyyB0FuGf", "S1MZTYuzG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author" ]
[ "Pros:\n1. This paper proposed a new block which can aggregate features from different tasks. By doing this, it can take advantage of common information between related tasks and improve the generalization of target tasks.\n\n2. The achievement in this paper seems good, which is 24.31%.\n\nCons:\n1. The novelty of this submission seems a little limited.\n\n2. The target task utilized in this paper is too simple, which only detects 5 facial landmarks. It is hard to say this proposed work can still work when facing more challenging tasks, for example, 60+ facial landmarks prediction.\n\n3. \" Also, one drawback of HyperFace is that the proposed feature fusion is specific to AlexNet,\" In the original submission, HyperFace is based on AlexNet, but does this mean it can only work on AlexNet?", "\n\nThis paper proposes a multi-pathway neural network for facial landmark detection with multitask learning. In particular, each pathway corresponds to one task, and the intermediate features are fused at multiple layers. The fused features are added to the task-specific pathway using a residual connection (the input of the residual connection are the concatenation of the task-specific features and the fuse features). The residual connection allows each pathway to selectively use the information from other pathways and focus on its own task.\n\nThis paper is well written. The proposed neural network architectures are reasonable. \n\nThe residual connection can help each pathway to focus on its own task (suggested by Figure 8). This phenomenon is not guaranteed by the training objective but happens automatically due to the architecture, which is interesting. \n\nThe proposed model outperforms several baseline models. On MTFL, when using the AlexNet, the improvement is significant; when using the ResNet18, the improvement is encouraging but not so significant. On AFLW (trained on MTFL), the improvements are significant in both cases. \n\nWhat is missing is the comparison with other methods (besides the baseline). For examples, it will be helpful to compare with existing non-multitask learning methods, like TCDCN (Zhang et al., 2014) (it seems to achieve 25% failure rate on AFLW, which is lower than the numbers in Figure 5), and multi-task learning method, like MTCNN (Zhang et al., 2016). It is important to show that proposed multitask learning method is useful in practice. \nIn addition, many papers take the average error as the performance metric. Providing results in the average error can make the experiments more comprehensive.\n\nThe proposed architecture is a bit huge. It scales linearly with the number of tasks, which is not quite preferable. It is also not straightforward to add new tasks to finetune a trained model. \n\nIn Figure 5 (left), it is a bit weird that the pretrained model underperforms the nonpretrained one. \n\nI am likely to change the rating based on the comparison with other methods.\n\n\n", "The collaborative block that authors propose is a generalized module that can be inserted in deep architectures for better multi-task learning. The problem is relevant as we are pushing deep networks to learn representation for multiple tasks. The proposed method while simple is novel. The few places where the paper needs improvement are:\n\n1. The authors should test their collaborative block on multiple tasks where the tasks are less related. Ex: Scene and object classification. The current datasets where the model is evaluated is limited to Faces which is a constrained setting. 
It would be great if Authors provide more experiments beyond Faces to test the universality of the proposed approach.\n2. The Face datasets are rather small. I wonder if the accuracy improvements hold on larger datasets and if authors can comment on any large scale experiments they have done using the proposed architecture. \n\nIn it's current form I would say the experiment section and large scale experiments are two places where the paper falls short. ", "Here are some preliminary results of our approach on MTCNN. Note that due to the time allotted, we did not fully explored all variations of our training process. We are currently performing more experiments, especially on the structure of the underlying networks PNet, RNet and ONet, and on the hard negative data generation process (we used the provided hyper-parameters from the authors of the MTCNN-Tensorflow github project, which may have been fine-tuned for the original MTCNN).\n\nWe used as training datasets Sun’s et al. dataset (LFW+Net) for landmarks and the Wider face dataset for face recognition. We test on the Celeba test set, which contains 19962 images. Here are the results that we obtained:\n\nMTCNN original (ran ourselves)\n\n Number of face detection failures: 44\n Mean dists: 8.1124\n Median dists: 3.9281\n Mean landmark failure rate: 0.2201\n\nOurs\n\n Number of face detection failures: 16\n Mean dists: 8.5217\n Median dists: 2.8076\n Mean landmark failure rate: 0.1258\n\nWith our approach, MTCNN has fewer face detection failures (16 vs 44) and a lower landmark failure rate (0.1258 vs 0.2201). The mean distance is however larger (8.5217 vs 8.1124), but the median is lower (2.8076 vs 3.9281). We are currently running experiments on the MTFL dataset to see if we obtain similar improvements. ", "Thank you for the detailed clarification. \n\nFor the test settings, thank you for the clarification. It is clear now. \nI agree that it is hard to fully replicate previous methods and retrain the models with an up-to-date neural network. However, without that results, the experiments seem a bit not solid enough. \nThe explanations of the other limitations are reasonable, but they do not resolve the actual limitation. I would like to just take these limitations. \n\nSome of my concerns are addressed, and there is a chance for the authors to provide an updated MTCNN results. I would like to slightly increase the rating.", "These are the values that we obtained (in %):\n\n1. MTFL\n\n1.1 Underlying Network: AlexNet\n\n1.1.1 Not pre-trained\nAN-S: 9.474, AN: 9.435, ANx: 9.356, HF: 9.548, TCDCN: unknown, XS: 9.379, Ours: 8.423\n\n1.1.2 Pre-trained\nAN-S: 9.473, AN: 9.395, HF: 9.426, TCDCN: unknown, XS: 9.377, Ours: 8.500\n\n1.2 Underlying Network: ResNet\n\n1.2.1 Not pre-trained\nRN-S: 8.692, RN: 8.571, RNx: 8.236, XS: 8.456, Ours: 8.007\n\n1.2.2 Pre-trained\nRN-S: 8.262, RN: 8.170, XS: 7.953, Ours: 7.845\n\n2. AFW\n\n2.1 Underlying Network: AlexNet\n\n2.1.1 Not pre-trained\nAN-S: 16.04, AN: 16.06, ANx: 16.68, HF: 16.34, XS: 18.24, Ours: 15.14\n\n2.1.2 Pre-trained\nAN-S: 17.24, AN: 17.15, HF: 16.34, XS: 17.78, Ours: 16.95\n\n2.2 Underlying Network: ResNet\n\n2.2.1 Not pre-trained\nRN-S: 16.15, RN: 15.71, RNx: 14.81, XS: 15.78, Ours: 14.14\n\n2.2.2 Pre-trained\nRN-S: 14.73, RN: 14.97, XS: 15.70, Ours: 14.27\n", "1. What is missing is the comparison with other methods...\n\nWe would like to first clarify some elements that seem to be confusing. 
The authors of the TCDCN approach, Zhang et al., 2014, first introduced the MTFL dataset in their paper ( http://mmlab.ie.cuhk.edu.hk/projects/TCDCN.html). This dataset was created by merging two datasets together, one for training and one for testing. The training is Sun’s et al. dataset, which itself is constituted of two datasets: LFW dataset and their proposed Net dataset. As for the test set, Zhang et al., 2014 randomly selected 3000 images from the AFLW dataset.\n\nThe performance of 25% by TCDCN therefore correspond to the accuracy on the test set of MTFL. As can be seen in Figure 3, we already compared ourselves to TCDCN.\n\nThe results in Figure 5 are for a different test set. We still train on the training dataset of MTFL (LFW + Net), but this time we test on the AFW dataset from Zhu et al. 2012.\n\nWe would have liked to compare ourselves to TCDCN on AlexNet (initialized at random) and ResNet, but the authors did not yet provide the training code. There is an open issue in their Github project: https://github.com/zhzhanp/TCDCN-face-alignment/issues/7 \n\nRegarding MTCNN, we are currently running experiments using the MTCNN-Tensorflow github project. We implemented our collaborative block and incorporated it to the network. The results should be available in the following 2 weeks or so, but already look promising.\n\n2. In addition, many papers take the average error as the performance metric.\n\nWe did not include the average error since we thought the metric error was sufficient. However, we agree that it would be more comprehensive to show them. We will put them in a following comment.\n\n3. The proposed architecture is a bit huge...\n\nOur main contribution is the collaborative block, which connects task-specific networks in a soft parameter sharing MTL setting. The linear increase with the number of tasks is a limitation of this setting, which is well-known in the MTL community. The effective increase of our collaborative block is in itself limited two conv layers for the central aggregation, and two conv layers for each task-specific aggregation.\n\nWhen using ResNet18 as underlying network, we used 5 collaborative blocks. With 4 tasks in total, the size of each block (in order of depth) was 234,752, 234,752, 936,448, 3,740,672 and 14,952,448. The single-task ResNet18 has 11,176,512 parameters, so the five tasks soft-parameter network has 55,882,560 parameters. For ResNet18, that increase may be large, but note that it does not scale with depth. Using a ResNet101 with 44,549,160 parameters, the five tasks soft-parameter network would have 222,745,800. The relative parameter increase of our approach would be lower.\n\n4. It is also not straightforward to add new tasks to finetune a trained model.\n\nIn our soft-parameter sharing multi-task setting, finetuning on a new task can be done by simply connecting a new task-specific network to the other networks. In the case where we do not have access (during finetuning) to the original tasks on which the network was trained on, it is always possible to freeze the weights of the pre-trained task-specific networks. This has the advantage of avoiding catastrophic forgetting, where the features learned from the previous tasks are removed during finetuning. This is in contrast with hard-parameter sharing, where only the fully connected layers are separated. In that case, the network must be finetuned using all previous tasks, otherwise the shared intermediate layers can experiment catastrophic forgetting.\n\n5. 
In Figure 5 (left), it is a bit weird that the pretrained model underperforms the nonpretrained one.\n\nFor this experiment, the networks were pretrained on ImageNet, fine-tuned on MTFL, then tested on AFW. In other words, the networks were pretrained on a first domain, finetuned on a second domain then tested on a third domain. We believe that, in order to develop good domain adaptation abilities during finetuning, it was harder for the networks to adjust its features learned on ImageNet than learned them from a random initialisation start. ", "1. The novelty of this submission seems a little limited.\n\nSeveral advances in deep learning from the past 5 years have shown that simple approaches sometimes yield large improvements. One of the most striking example is the use of identity skip connection in Residual Network. The novelty of simply adding the feature map at a lower layer to the feature map at a higher layer could be viewed as limited. Other contributions that could be seen as limited also include batch normalisation, the squeeze and excitation block from the winners of ImageNet 2017 competition (feature map calibration by global average pooling and scaling), DenseNet (change the sum in ResNet by a concatenation) and even the ReLU. The novelty of all these approaches may seem limited, but this is because they are fairly simple to understand.\n\nOne crucial advantage of simple contributions is the possibility of straightforward integration into existing projects. As an example, AnonReviewer3 asked us to include MTCNN in our experiments. We found the following Tensorflow github project https://github.com/AITTSMD/MTCNN-Tensorflow that implements and trains MTCNN. It took use little time to implement our collaborative block in Tensorflow, and integrate it to their network. We have included our code at the end of our comment. This is a key advantage of our contribution, that it can be incorporated into existing projects without difficulties.\n\n# Tensorflow python implementation of our collaborative block\n\ndef collaborative_block(inputs, nf, training, scope):\n def conv2d(x, nf, fs):\n y = slim.conv2d(x, nf, [fs, fs], 1, 'SAME', activation_fn=None, \n weights_initializer=slim.xavier_initializer(),\n biases_initializer=None,\n weights_regularizer=slim.l2_regularizer(0.0005))\n return y\n\n def bn(x, training):\n y = slim.batch_norm(x, 0.999, True, True, 1e-5, is_training=training)\n return y\n\n def aggregation(x, n_out, training):\n with tf.variable_scope('aggregation'):\n z = x\n z = conv2d(z, n_out, 1)\n z = bn(z, training)\n z = tf.nn.relu(z)\n z = conv2d(z, n_out, 3)\n z = bn(z, training)\n return z\n\n def central_aggregation(inputs, n_out, training):\n with tf.variable_scope('central'):\n z = tf.concat(inputs, axis=-1)\n z = aggregation(z, n_out, training)\n z = tf.nn.relu(z)\n return z\n\n def local_aggregation(x, z, n_out, pos, training):\n with tf.variable_scope('local_{}'.format(pos)):\n y = tf.concat([x, z], axis=-1)\n y = x + aggregation(y, n_out, training)\n return y\n\n with tf.variable_scope(scope):\n n_inputs = len(inputs)\n z = central_aggregation(inputs, nf * n_inputs // 4, training)\n outputs = [local_aggregation(x, z, nf, i, training)\n for i, x in enumerate(inputs)]\n return outputs\n\n\n2. The target task utilized in this paper is too simple, which only detects 5 facial landmarks. 
It is hard to say this proposed work can still work when facing more challenging tasks, for example, 60+ facial landmarks prediction.\n\nIt is true that we only predict 5 facial landmarks in our experiments on the MTFL and AFW datasets. However, we predict 21 facial landmarks in our experiment on the AFLW dataset. This is a substantially harder task. As we write in table 1 in section 4.4, the results show that our approach outperforms both the standard multi-task setting (hard-parameter sharing) and the cross-stitch approach (soft-parameter sharing). \n\n\n3. \"Also, one drawback of HyperFace is that the proposed feature fusion is specific to AlexNet,\" In the original submission, HyperFace is based on AlexNet, but does this mean it can only work on AlexNet?\n\nIn the version of their paper what we read on arXiv, i.e. version 2, they wrote that they propose “a novel CNN architecture” as one of their contributions. They did not elaborate on how to adapt their approach on other network architectures. At that time, it was not clear for us how to make it work on other network. However, the authors recently (December 6, 2017) updated their arXiv paper to version 3, with a network architecture based on residual connections (see https://arxiv.org/abs/1603.01249). They call it the HyperFace-ResNet, in contrast to their original HyperFace network using AlexNet. Since now they show how to adapt their approach to another network, we agree that it is no longer a drawback that it is only specific to AlexNet. We will therefore remove our sentence.", "1. We decided to perform our multi-task experiments on facial landmark detection because several previous approaches have shown that training on face orientation regression, along with gender, smile and glasses classification, can help better detect facial landmarks. We wanted to first demonstrate that our approach could leverage domain-specific information from tasks that we know were related. In particular, this allowed our ablation study in section 4.5 to provide empirical and easily interpretable evidence that our approach could indeed take advantage of high level face profile features to boost facial landmark detection.\n\nHowever, we agree that including experiments in other domains would further improve diversity. In that sense, we started working immediately after submission on tasks unrelated to faces, to precisely test the universality of our approach. So far, all conducted experiments were positive. For instance, we have an ongoing project on tree species identification from images of bark. This yet unpublished dataset contains 750,000+ unique crops from close-up pictures of bark for 22 different tree species, along with their trunk diameter (DBH). This is a multi-task setting where the tasks are less related. Indeed, different types of trees can have the same DBH, and inversely, trees from the same category can have different DBH (reflecting for instance their age). Using our approach, we could improve the classification accuracy from 93,09% to 94,46%. We can add this result in a new section to provide additional evidence that our approach can also work in a multi-task setting where tasks are less related.\n\nWe also looked at using our collaborative block on a standard object recognition problem. Instead of connecting task-specific networks to perform multi-task learning, we create a network by repeating our collaborative block to perform single-task learning. 
The network processes the input image using multiple connected branches, and outputs a feature vector at the end of the convolutional layers (before the fully connected layer) that is the concatenation of the features computed by each branch. In this setting, our approach goes in line with current works that try to alleviate large processing time by trading depth for width, i.e. by using fewer layers with more weights. Our current preliminary results have shown that increasing width by having more collaborative branches is an effective way to achieve similar error rates, while using fewer weights. On the CIFAR-10 dataset, with standard data augmentation (horizontal flip and ±4 pixels translation), we obtained 3.96% classification error using only 6,988,986 parameters. In comparison to recent approaches that explored trading depth for width, our approach has the lowest number of weights (relatively to obtaining around 4% error rate), as seen below:\n\nWide ResNet\n4.00% with 36,479,194 parameters (https://github.com/szagoruyko/wide-residual-networks)\n\nResNeXt\n4.00% with around 9.2M parameters (estimated from the curve in Fig. 7 of their arxiv paper https://arxiv.org/pdf/1611.05431.pdf)\n\nAOGNet-BN\n3.99 with 8.0M parameters (https://arxiv.org/pdf/1711.05847.pdf)\n\nOurs:\n3.96 % with 6,988,986 parameters\n\nWe are currently performing more experiments, but we could include this result on CIFAR-10.\n\n\n2. We started with smaller datasets because we wanted to test several networks. For instance, the results in Figure 5 took us around one month to obtain with our single GPU architecture. This is because we used two underlying networks, which can either be pre-trained or not, and trained them in either a single-task setting or one of the different multi-task settings. We wanted to compare our approach to many other networks before going on larger datasets. \n\nHowever, we agree that it would be best to have larger datasets. In that sense, our current work on multi-task tree classification mentioned above could be considered a more large scale experiment, as it contains 750,000+ unique crops of 224x224 pixels. Moreover, we are currently implementing our approach on the MTCNN facial landmark detection approach. The network is trained to perform both face detection and landmarks detection. For face detection, we are using the WIDER dataset (http://mmlab.ie.cuhk.edu.hk/projects/WIDERFace/) containing 393,703 face images, and for facial landmark detection, we are using the celebA dataset (http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html) containing 202,599 face images. The results of this experiment should be available in the next 2 weeks or so, but are already looking promising. See my answer to AnonReviewer3 for more details." ]
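Editor's note: since the authors already give a TensorFlow definition of collaborative_block in their response above, a short usage sketch may help readers see how it is meant to connect task-specific branches. The per-task convolutional stages, input shape, filter counts, and placeholder plumbing below are assumptions for illustration (TF 1.x style, matching the original snippet), not the authors' full DCNet training code.

# Illustrative usage sketch only: wiring the collaborative_block defined above
# between two task-specific branches. Branch definitions and shapes are
# placeholders, not the authors' network.
import tensorflow as tf
slim = tf.contrib.slim

def task_branch(x, nf, scope):
    # Stand-in for one task-specific stage (e.g., a ResNet/AlexNet block per task).
    with tf.variable_scope(scope):
        return slim.conv2d(x, nf, [3, 3], scope='conv')

images = tf.placeholder(tf.float32, [None, 40, 40, 3])
training = tf.placeholder(tf.bool, [])

# One branch per task (e.g., landmark detection and a related attribute task).
feat_a = task_branch(images, 64, 'task_a_stage1')
feat_b = task_branch(images, 64, 'task_b_stage1')

# Central + local aggregations; each branch gets back its own enriched features
# through the residual (skip) connection inside the block.
feat_a, feat_b = collaborative_block([feat_a, feat_b], 64, training, 'collab1')

# ... further task-specific stages and per-task losses would follow here.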
[ 5, 6, 6, -1, -1, -1, -1, -1, -1 ]
[ 5, 4, 4, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_r17Q6WWA-", "iclr_2018_r17Q6WWA-", "iclr_2018_r17Q6WWA-", "Hy-UWWjff", "HJDRBqufM", "ryXDrc_fG", "ry0lbHclz", "Bylfi7tez", "SyuPmP3lM" ]
iclr_2018_SJCPLLpaW
Exploring the Hidden Dimension in Accelerating Convolutional Neural Networks
DeePa is a deep learning framework that explores parallelism in all parallelizable dimensions to accelerate the training process of convolutional neural networks. DeePa optimizes parallelism at the granularity of each individual layer in the network. We present an elimination-based algorithm that finds an optimal parallelism configuration for every layer. Our evaluation shows that DeePa achieves up to 6.5× speedup compared to state-of-the-art deep learning frameworks and reduces data transfers by up to 23×.
rejected-papers
While this paper has some very interesting ideas, the majority view of the reviewers and their aggregate numerical ratings are just too low to warrant acceptance.
val
[ "SkhvCRBVz", "r1ggBqSVf", "S18Z1_H4f", "rkMXszYgM", "BJ_nVijxM", "rkCp66Tef", "HkXoonNzf", "HJ3rchNfG", "BJuQ834Gf" ]
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "We thank the reviewer for the constructive feedback.\n\nFor the TensorFlow experiments, we use synchronous training with a batch size of 32 and train the models on the ImageNet dataset. The performance numbers reported on https://www.tensorflow.org/performance/benchmarks are measured by using the asynchronous training method, which has better performance than synchronous training as shown in the TensorFlow paper.\n\nTo test the effectiveness of our TensorFlow benchmark, we rerun the TensorFlow experiments with asynchronous training and report the numbers in the following table, which shows that our TensorFlow benchmark has very similar performance compared to the benchmark on the website.\n\nTable 1. TensorFlow training throughput (#images/second) with a batch size of 32 on the ImageNet dataset.\n\n#P100 GPUs Model Our numbers Our numbers Numbers from the website\n Synchronous Asynchronous Asynchronous\n1 VGG-16 137 141 144\n2 VGG-16 253 252 253\n4 VGG-16 406 459 457\n1 Inception3 124 126 130\n2 Inception3 243 255 257\n4 Inception3 439 488 507\n", "For TensorFlow the reported numbers seem significantly lower than benchmarks at https://www.tensorflow.org/performance/benchmarks and for less optimal batch size.\n\nCan't find similar benchmarks for PyTorch so don't have a better comparison, but it opens the question of how good is that benchmark.", "We have attempted to address the reviews' comments in the revised manuscript, and we believe the revisions have resulted in a significantly improved manuscript. The revised manuscript includes the following major changes:\n\n1. We have added an experiment (Section 7.3) with NVIDIA Tesla P100 GPUs (newest generation GPUs) on up to 4 compute nodes to compare different frameworks for distributed executions.\n\n2. We have conducted performance comparisons (Section 7.2) with different batch sizes and show that we are able to achieve speedups compared to PyTorch and TensorFlow with all of the tested batch sizes\n\n3. We have added some profiling results (Appendix A.3) to visualize the performance bottlenecks in image parallelism and our configurations. The profiling results show that our configurations reduce data transfers, better overlap data transfers with computation, and improve GPU utilization.\n\n4. We have added a performance comparison (Appendix A.4) on the ImageNet-22K dataset and show that DeePa achieves even better speedups compared to PyTorch and TensorFlow on the ImageNet-22K dataset.", "This paper develops a framework for parallelization of convolutional neural nets. In the framework, parallelism on different dimensions are explored for convolutional layers to accelerate the computation. An algorithm is developed to find the best global configuration.\n\nThe presentation needs to be more organized, it is not very easy to follow.\n\n1. Computation throughput is not defined.\n\n2. Although the author mentions DeePa with Tensorflow or Pytorch several times, I think it is not proper to make this comparison. The main idea of this paper is to optimize the parallelization scheme of CNN, which is independent of the framework used. It is more useful if the configuration searching can be developed on tensorflow / pytorch.\n\n3. The per layer comparison is not very informative for practice because the data transfer costs of convolution layers could be completely hidden in data parallelization. In data parallelism, the GPU devices are often fully occupied during the forward pass and backward pass. 
Gaps are only in between forward and backward, and between iterations. Model parallelism would add gaps everywhere in each layer. This could be more detrimental when the communication is over Ethernet. To be more convincing, it is better to show the profile graph of each run to show which gaps are eliminated, rather than just numbers.\n\n4. The batch size is also a crucial factor; different batch sizes would favor different methods. More comparisons are necessary.", "The paper proposes an approach that offers speedup on common convolutional neural networks. It presents the approach well and shows results comparing with other popular frameworks used in the field.\n\nOriginality\n- The automation of parallelism across the different dimensions in each of the layers appears somewhat new. Although parallelism across each of the individual dimensions has been explored (batch parallel is most common and best supported, height and width are discussed at least in the DistBelief paper), automatically exploring this to find the most efficient approach is new. The splitting across channels seems not to have been covered in a paper before.\n\nSignificance\n- The paper shows a significant speedup over existing approaches on a single machine (16 GPUs). It is unclear how well this would translate across machines or to more devices, and also on newer devices - the experiments were all done on 16 K80s (3 generations old GPUs). While the approach is interesting, its impact also depends on the speedup on the common hardware used today.\n\nPros:\n- Providing better parallelism opportunities for convolutional neural networks\n- Simple approach to finding optimal global configurations that seems to work well\n- Positive results with significant speedups across 3 different networks\n\nCons:\n- Unclear if speedups hold on newer devices\n- Useful to see how this scales across more than 1 machine\n- Claim on overlapping computation with data transfer seems incorrect. I am pretty sure TensorFlow and possibly PyTorch support this.\n\nQuestions:\n- How long does finding the optimal global configuration take for each model?\n", "The paper proposes a deep learning framework called DeePa that supports multiple dimensions of parallelism in computation to accelerate training of convolutional neural networks. Whereas the majority of work on parallel or distributed deep learning partitions training over bootstrap samples of training data (called image parallelism in the paper), DeePa is able to additionally partition the operations over image height, width and channel. This gives more options to parallelize different parts of the neural network. For example, the best DeePa configurations studied in the paper for AlexNet, VGG-16, and Inception-v3 typically use image parallelism for the initial layers, reduce GPU utilization for the deeper layers to reduce data transfer overhead, and use model parallelism on a smaller number of GPUs for fully connected layers. The net result is that DeePa allows such configurations to be created that provide an increase in training throughput and lower data transfer in practice for training these networks. These configurations for parallelism are not easily programmed in other frameworks like TensorFlow and PyTorch.\n\nThe paper can potentially be improved in a few ways. One is to explore more demanding training workloads that require larger-scale distribution and parallelism. The ImageNet 22-K would be a good example and would really highlight the benefits of DeePa in practice. 
Beyond that, more complex workloads like 3D CNNs for video modeling would also provide a strong motivation for having multiple dimensions of the data for partitioning operations.", "We thank the reviewer for the constructive comments.\n\n1. Computation throughput in Figure 1 is not defined.\nWe have added a definition of computation throughput in Figure 1.\n\n2. It is more useful if DeePa can be developed on TensorFlow or PyTorch.\nWe agree that our results show such implementations would be useful. Legion is the only framework that supports partitioning/parallelization across all the interesting dimensions (image, height, width, and channel for 2D CNNs), which is why we selected it to demonstrate that substantial speedups in deep learning through exploiting other parallelizable dimensions is even possible.\n\n3. To be more convincing, it is better to show the profile graphs of different runs to help understand which gaps are eliminated.\nWe have added some profiling results (Appendix A.3) to compare the performance between image parallelism and DeePa's configuration. The profiling results show that the better configurations reduce data transfers, better overlap data transfers with computation, and improve GPU utilization.\n\n4. Performance comparisons with different batch sizes are missing.\nIn the revised paper, we have added an experiment (Section 7.2) to compare the performance of the different frameworks with various minibatch sizes. The results show that DeePa achieves speedups compared to PyTorch and TensorFlow with all of the tested minibatch sizes.\n", "We thank the reviewer for the constructive comments.\n\n1. It is unclear if the performance speedup still holds on multiple machines and newer GPU devices? \nIn the revised paper, we have added an experiment (Section 7.3) with NVIDIA Tesla P100 GPUs (newest generation GPUs) on up to 4 compute nodes. The result shows that DeePa achieves even better performance speedups compared to PyTorch and TensorFlow for multi-node executions, where data transfer cost becomes a bigger factor in the per-iteration training time.\n\n2. Claim on overlapping computation with data transfer seems incorrect. \nThis is a fair point; it's not clear whether the current version of TensorFlow overlaps communication and computation or not (at least the version described in the original paper, Abadi et al., 2016, appears to not support such overlap, but that may have changed). We have removed this statement from the paper. Note that all the performance comparisons are with whatever TensorFlow actually does in the version r1.3.\n\n3. How long does finding the optimal global configuration take for each model?\nIn the revised paper, we report the times for finding the optimal global configurations in the first paragraph of Section 7.1. In particular, it takes 0.7, 1.1, and 4.8 seconds for finding the optimal configurations for AlexNet, VGG-16, and Inception-v3, respectively. The reported numbers also include the time to measure the average execution time for different operations.\n", "We thank the reviewer for the constructive comments.\n\n1. The ImageNet 22-K would be a good example and really highlight the benefits of DeePa in practice.\nWe have added a performance comparison (Appendix A.4) on the ImageNet-22K dataset. The results show that DeePa achieves almost the same training throughput on ImageNet and ImageNet-22K, while PyTorch and TensorFlow reduces the training throughput by 20%-45% on ImageNet-22K. 
In addition, the global configurations used by DeePa also reduce per-iteration data transfers by 3.7-44.5x compared to image parallelism.\n\n2. More complex workloads like 3D CNNs for video modeling would provide a strong motivation.\nWe agree that 3D CNNs would be a good example. However, they would require significantly more engineering effort and we believe the results we have obtained (e.g., the latest ImageNet-22K numbers) already strongly support the thesis that alternative data partitioning strategies can substantially speed up deep learning." ]
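To make the notion of parallelizable dimensions more concrete, the toy NumPy sketch below splits a (batch, channel, height, width) activation tensor across workers along each candidate dimension; the per-layer configuration dictionary is a hypothetical simplification for illustration and does not reflect DeePa's actual Legion-based implementation or its elimination-based configuration search.

    import numpy as np

    x = np.random.rand(32, 64, 56, 56)  # (batch, channel, height, width) activations
    num_workers = 4

    # Axis along which a layer's work is partitioned: 0 = image (batch), 1 = channel,
    # 2 = height, 3 = width. A global configuration assigns one such choice per layer.
    config = {"conv1": 0, "conv5": 2, "fc6": 1}  # hypothetical per-layer choices

    def partition(tensor, axis, workers):
        # Each worker receives one contiguous slice along the chosen dimension.
        return np.array_split(tensor, workers, axis=axis)

    for layer, axis in config.items():
        shards = partition(x, axis, num_workers)
        print(layer, "axis", axis, "shard shape:", shards[0].shape)

Partitioning along different axes changes which tensor slices must be exchanged between workers, which is the data-transfer cost the per-layer configurations above are trying to minimize.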
[ -1, -1, -1, 4, 5, 7, -1, -1, -1 ]
[ -1, -1, -1, 5, 4, 4, -1, -1, -1 ]
[ "r1ggBqSVf", "S18Z1_H4f", "iclr_2018_SJCPLLpaW", "iclr_2018_SJCPLLpaW", "iclr_2018_SJCPLLpaW", "iclr_2018_SJCPLLpaW", "rkMXszYgM", "BJ_nVijxM", "rkCp66Tef" ]
iclr_2018_H1srNebAZ
Discovering the mechanics of hidden neurons
Neural networks trained through stochastic gradient descent (SGD) have been around for more than 30 years, but they still escape our understanding. This paper takes an experimental approach, with a divide-and-conquer strategy in mind: we start by studying what happens in single neurons. Although they are the core building block of deep neural networks, the way they encode information about the inputs and how such encodings emerge remain unknown. We report experiments providing strong evidence that hidden neurons behave like binary classifiers during training and testing. During training, analysis of the gradients reveals that a neuron separates two categories of inputs, which are remarkably constant across training. During testing, we show that the fuzzy, binary partition described above embeds the core information used by the network for its prediction. These observations bring to light some of the core internal mechanics of deep neural networks, and have the potential to guide the next theoretical and practical developments.
rejected-papers
While one reviewer did upgrade their rating from 6 to 7, the most negative reviewer maintains after the rebuttal: "Overall, I find this work interesting and current results surprising. However, I find it to be a preliminary work and not yet ready for publication. The paper still lacks a conclusion / a leading hypothesis / an explanation for the shown results. I find this conclusion indispensable even for a small scientific study to be published." With scores of 7-5-4, it is just not possible for the AC to recommend acceptance.
val
[ "rJ07mFogG", "Hk5Z4hOlf", "S1bAZxcxf", "B1WYpDrff", "Sk5XpPBfG", "rJcxTPHGf", "SkTChPSfz", "SJT6iPSff", "ByGR9PHzz", "B1x99DrzM", "rk3wqOUxz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "public" ]
[ "--------------------\nReview updates:\nRating 6 -> 7\nConfidence 2 -> 4\n\nThe rebuttal and update addressed a number of my concerns, cleared up confusing sections, and moved the paper materially closer to being publication-worthy, thus I’ve increased my score.\n--------------------\n\nI want to love this paper. The results seem like they may be very important. However, a few parts were poorly explained, which led to this reviewer being unable to follow some of the jumps from experimental results to their conclusions. I would like to be able to give this paper the higher score it may deserve, but some parts first need to be further explained.\n\nUnfortunately, the largest single confusion I had is on the first, most basic set of gradient results of section 4.1. Without understanding this first result, it’s difficult to decide to what extent the rest of the paper’s results are to be believed.\n\nFig 1 shows “the histograms of the average sign of partial derivatives of the loss with respect to activations, as collected over training for a random neuron in five different layers.” Let’s consider the top-left subplot of Fig 1, showing a heavily bimodal distribution (modes near -1 and +1.). Is this plot made using data from a single neuron or from multiple neurons? For now let’s assume it is for a single neuron, as the caption and text in 4.1 seem to suggest. If it is for a single neuron, then that neuron will have, for a single input example, a single scalar activation value and a single scalar gradient value. The sign of the gradient will either be +1 or -1. If we compute the sign for each input example and then AGGREGATE over all training examples seen by this neuron over the course of training (or a subset for computational reasons), this will give us a list of signs. Let’s collect these signs into a long list: [+1, +1, +1, -1, +1, +1, …]. Now what do we do with this list? As far as I can tell, we can either average it (giving, say, .85 if the list has far more +1 values than -1 values) OR we can show a histogram of the list, which would just be two bars at -1 and +1. But we can’t do both, indicating that some assumption above was incorrect. Which assumption in reading the text was incorrect?\n\nFurther in this direction, Section 4.1 claims “Zero partial derivatives are ignored to make the signal more clear.” Are these zero partial derivatives of the post-relu or pre-relu? The text (Sec 3) points to activations as being post-relu, but in this case zero-gradients should be a very small set (only occuring if all neurons on the next layer had either zero pre-relu gradients, which is common for individual neurons but, I would think, not for all at once). Or does this mean the pre-relu gradient is zero, e.g. the common case where the gradient is zeroed because the pre-activation was negative and the relu at that point has zero slope? In this case we would be excluding a large set (about half!) of the gradient values, and it didn’t seem from the context in the paper that this would be desirable.\n\nIt would be great if the above could be addressed. Below are some less important comments.\n\nSec 5.1: great results!\n\nFig 3: This figure studies “the first and last layers of each network”. Is the last layer really the last linear layer, the one followed by a softmax? In this case there is no relu and the 0 pre-activation is not meaningful (softmax is shift invariant). Or is the layer shown (e.g. “stage3layer2”) the penultimate layer? 
Minor: in this figure, it would be great if the plots could be labeled with which networks/datasets they are from.\n\nSec 5.2 states “neuron partitions the inputs in two distinct but overlapping categories of quasi equal size.” This experiment only shows that this is true in aggregate, not for specific neurons? I.e. the partition percent for each neuron could be sampled from U(45, 55) or from U(10, 90) and this experiment would not tell us which, correct? Perhaps this statement could be qualified.\n\nTable 1: “52th percentile vs actual 53 percentile shown”. \n\n> Table 1: The more fuzzy, the higher the percentile rank of the threshold\n\nThis is true for the CIFAR net but the opposite is true for ResNet, right?\n", "The paper proposes to study the behavior of activations during training and testing to shed more light onto the inner workings of neural networks. This is an important area and findings in this paper are interesting!\n\nHowever, I believe the results are preliminary and the paper lacks an adequate explanation/hypothesis for the observed phenomenon either via a theoretical work or empirical experiments.\n- Could we look at the two distributions of inputs that each neuron tries to separate? \n- Could we perform more extensive empirical study to substantiate the phenomenon here? Under which conditions do neurons behave like binary classifiers? (How are network width/depth, activation functions affect the results).\n\nAlso, a binarization experiment (and finding) similar to the one in this paper has been done here:\n[1] Argawal et al. Analyzing the Performance of Multilayer Neural Networks for Object Recognition. 2014\n\n+ Clarity: The paper is easy to read. A few minor presentation issues:\n- ReLu --> ReLU\n\n+ Originality: \nThe paper is incremental work upon previous research (Tishby et al. 2017; Argawal et al 2014).\n\n+ Significance:\nWhile the results are interesting, the contribution is not significant as the paper misses an important explanation for the phenomenon. I'm not sure what key insights can be taken away from this.\n\n\n", "This paper presents an experimental study on the behavior of the units of neural networks. In particular, authors aim to show that units behave as binary classifiers during training and testing. \n\nI found the paper unnecessarily longer than the suggested 8 pages. The focus of the paper is confusing: while the introduction discusses about works on CNN model interpretability, the rest of the paper is focused on showing that each unit behaves consistently as a binary classifier, without analyzing anything in relation to interpretability. I think some formal formulation and specific examples on the relevance of the partial derivative of the loss with respect to the activation of a unit will help to understand better the main idea of the paper. Also, quantitative figures would be useful to get the big picture. For example in Figures 1 and 2 the authors show the behavior of some specific units as examples, but it would be nice to see a graph showing quantitatively the behavior of all the units at each layer. It would be also useful to see a comparison of different CNNs and see how the observation holds more or less depending on the performance of the network.\n", "Thank you very much for your interest in our paper.\nThe cited paper shows that with a slight modification, the current operations in a neural network can be interpreted as a distance measure between fuzzy, binary variables from fuzzy logic theory. 
From this observation, the authors assume that a neural network treats activation and weights as fuzzy binary variables, without further verification.\n\nThe observations of our paper show that a neural network treats activations as binary, fuzzy variables. However, we observe that this behavior emerges from training, and is not intrinsic to the operations of a neural network as –according to our understanding- is suggested by the cited paper and this anonymous comment. Further analysis of the cited paper shows that some important claims are not well validated. Indeed, a double thresholding activation function is introduced and motivated to outperform the ReLU activation. Validation of this claim (figure 4), however, compares the new activation function to linear activation (and not ReLU). Moreover, discussion of this figure compares convergence rates, which does not provide any relevant information, since the networks do not converge to the same final performance (84.63% accuracy for linear activation, 89.26% for double thresholding function).\n\nOverall, we believe the relation with Fan’s work is too vague to be discussed in our paper. We are however open to receive additional comments and clarifications about Fan’s interesting line of research.", "Thank you for your comments. We’ve made our best to account for them in the revised version of our paper. Below, we present answers to your specific comments. Moreover, let us bring to your attention that changes have been made in section 4.1, clarifying greatly the experimental approach we used. More information about it can be found in our answers to reviewer 3.\n\n------\n- Could we look at the two distributions of inputs that each neuron tries to separate? \n------\n\nSince the distributions are determined by the sign of the loss function partial derivative (with respect to the neuron activation), the two distributions of inputs in one neuron are currently only available in layers close enough to the output layer, where the partial derivative sign remains constant along training. For such layers, we can get intuitions about the content of the distributions through the following reasoning (added as last paragraph of section 4.2):\nThe average sign of the loss function partial derivative with respect to the activation of a sample determines the category, and seems to be constant along training -at least for layers close to the output (Figure 1). Categories are thus mainly fixed by the initialization of the network's parameters. Moreover, the sign of the derivative signal is heavily conditioned on the class of the input. In particular, in neurons of the output layer, partial derivative signs only depend on the class label, and not on the input. Figure 8 in appendix shows that in dense2-relu, a class is in most cases either entirely present or absent of a category, and is only occasionally split across low and high categories. Category definition is thus approximately a selection of a random subset of classes, determined by the random initial parameters between the studied neuron and the output layer. \n\n------\n- Could we perform more extensive empirical study to substantiate the phenomenon here? Under which conditions do neurons behave like binary classifiers? (How are network width/depth, activation functions affect the results).\n------\n\nThank you for this suggestion. \nTo complete our empirical study, we have considered other activations than the ReLU one. 
Networks with sigmoid and linear activations are now considered by our analysis (see Figures 1, 2, 3 and 4). As we expected, the results are the same, even with purely linear networks. This emphasizes a message that was not well enough explained in the original paper: the observations we make are not caused by the thresholding behaviour of activation functions (ReLU, sigmoid), but are deeply linked with the training dynamics of deep neural networks. This observation is now discussed in the last paragraph of section 5.2.\nRegarding the impact of the network architecture (width, depth, connectivity), note that it is reasonably explored in the original paper: the same conclusions emerge from a 512-wide two-layer MLP, from a 12-layer CNN with a width of up to 512 filters, and from a 50-layer ResNet with a width of up to 2048 filters.", "\n------\nAlso, a binarization experiment (and finding) similar to the one in this paper has been done here:\n[1] Argawal et al. Analyzing the Performance of Multilayer Neural Networks for Object Recognition. 2014\n------\n\nThanks for pointing out this reference. \nWe were not aware of it and will add it in the related work section of our paper. We however consider that our paper brings contributions and makes observations that are different or that significantly complement the ones made by Argawal et al. Argawal et al. analyze the properties of features/activations in a transfer learning framework. In relation with our contribution, Section 5.1 of their paper shows that the binarization of the activation at ReLU threshold (at 0) doesn’t hurt performance on the new task. The claims of our paper go way beyond this observation. We hope that the new version of our paper makes them clearer. Two main differences are as follows:\n\n- Argawal et al. binarize the activations in a very simple manner: according to the threshold of the activation function. In our paper, we show that they missed a key insight: the binary behaviour of features/activation is not related to the thresholding nature of activation functions. The reality is much more subtle: the binary behaviour is deeply linked with the SGD training dynamics of deep networks (whatever their activation function), and the partition threshold systematically lies around 50 percentile rank (and not around the arbitrary zero ReLU threshold). This observation about the partition threshold directly emerges from the comparison between the clear pattern of Figure 4 and the much noisier position of ReLU thresholds (Figure 3 or Table 1). This claim is now made even stronger by our experiments with a linear CNN, without any ReLU (or activation function related) thresholding. Our observations are thus much stronger and unexpected than the one from Argawal et al.\n\n- The dynamics of training are not explored at all in Argawal et al. Our paper shows, for the first time, that the dynamics in neurons that are close enough to the output adopt the ones from binary classifiers. While restricted to a subset of layers (i.e. the ones that are sufficiently close to the output), the simplicity of the dynamics is unexpected, and even appears in MLP networks that have been around for more than 30 years. Moreover, it is possible that similar behaviour appears in early layers, but is hidden by unnecessary noise in the backpropagated gradients. 
Verifying this hypothesis requires further investigations, but our work makes a first step towards a broader characterization of training dynamics.\n\nWe’ve added a short discussion on the relation of our work with Argawal et al. in the Related work section (Section 2, end of second paragraph) which is further discussed in the last paragraph of section 5.2.", "\n------\n+ Clarity: The paper is easy to read. A few minor presentation issues:\n- ReLu --> ReLU\n------\n\nThanks for noticing it! It is now changed.\n\n------\n+ Originality: \nThe paper is incremental work upon previous research (Tishby et al. 2017; Argawal et al 2014).\n------\n\nAbove, we have already commented on our contribution in the light of Argawal et al. We hope that it makes clear that our paper provides original work compared to them.\nOn the other hand, the only common point between Tishby et al. and our work lies in the fact that both works analyze the regularity of gradients during training. However, like our paper specifies, “while these works (including Tishby et al.) focus on the gradients with respect to parameters on a batch of samples, we analyze the gradients with respect to activations on single samples. This difference of perspective is crucial for the understanding of the representation learned by a neuron, and is a key aspect of our paper.” With Tishby et al.’s results, it is impossible to make a link between hidden neurons and binary classification of individual samples, which is the core observation of our paper.\n\n------\n+ Significance:\nWhile the results are interesting, the contribution is not significant as the paper misses an important explanation for the phenomenon. I'm not sure what key insights can be taken away from this.\n------\n\nWe agree with you that our paper lacks a final polished and complete conclusion. Indeed, we don’t see our paper as finished work, but rather as the opening of a promising investigation direction for a problem that has remained unsolved for more than 30 years: understanding neural networks. The fact that our observations are not obvious and generalize over very different networks suggests that these are very important properties to know in order to understand neural networks. The fact that the design and intuitions behind our experiments are not trivial and presenting them is already a challenge makes us believe it deserves to be presented to the community and discussed in an interactive manner. To emphasize the research directions that emerge from our observations, we have updated the ‘Discussion and future work’ section of our paper. In particular, we describe three directions of research related to the training dynamics of layers far from the output, the design of activation functions, and the generalization puzzle.\n", "We are sorry that the original version did not allow you to fully understand the main ideas of the paper. Thanks to the reviews and comments, we’ve noticed that indeed, some parts were not explained clearly enough, and we have done our best to clarify these in the new version.\n\nThe link with interpretability of neurons is that both works try to understand the role of a neuron inside a neural network. Our approach, however, is different. 
As stated in the related work section, “our paper leaves interpretability behind, but provides experiments for the validation of a complete description of the encoding of information in any neuron.” Our discussion and future work section emphasizes the impact of our observations on neuron interpretability.\n\nWe modified the text to make it explicit when the activation of a single sample is considered (in contrast to the average on a mini-batch). This implies replacing ‘the partial derivative of the loss with respect to the activation’ by ‘the partial derivative of the loss with respect to the activation OF ONE SAMPLE’. While clear in our mind, we now noticed that our initial phrasing was confusing (see also our answer to reviewer 3). We hope that the new version of section 4.1 makes the relevance of the recorded partial derivatives more clear.\n\nWe’ve added quantitative figures aggregating all neurons of a layer. It reveals that the aggregate behavior follows the same pattern than the examples provided in Figure 1. Due to space limitations, we’ve added the new figures to the appendix. Finally, we’ve also added experiments with sigmoid activation function and purely linear networks, revealing the same behaviour.\n\nWe’ve made efforts to reduce the length of the paper (i.e. removing section about ReLU analysis). However, due to the addition of new figures and comments requested by the reviewers, the number of pages has increased in the new version. We believe reducing the length of the paper would penalize its clarity. However, we will account for the reviewer’s opinion if it is maintained.", "--------\nFig 3: This figure studies “the first and last layers of each network”. Is the last layer really the last linear layer, the one followed by a softmax? In this case there is no relu and the 0 pre-activation is not meaningful (softmax is shift invariant). Or is the layer shown (e.g. “stage3layer2”) the penultimate layer? Minor: in this figure, it would be great if the plots could be labeled with which networks/datasets they are from.\n--------\n\nPenultimate, indeed. It was changed in the paper + we added dataset information to the plot captions\n\n--------\nSec 5.2 states “neuron partitions the inputs in two distinct but overlapping categories of quasi equal size.” This experiment only shows that this is true in aggregate, not for specific neurons? I.e. the partition percent for each neuron could be sampled from U(45, 55) or from U(10, 90) and this experiment would not tell us which, correct? Perhaps this statement could be qualified.\n--------\n\nIndeed, some form of aggregation is done over neurons. The fact that a window centered around percentile rank 50 does not provide random predictions indicates that the percentile at which the two distributions cross each other changes across neurons, as explained in the paper: “While the partitions separate the inputs in equally sized categories on average, the size of the categories varies across neurons and is not exactly 50%, which explains the fact that a window center at the 50th percentile does not induce random predictions.”\nHowever, we keep some resolution about the position of the partition thresholds. In the example given in the comment, since “the performance should decrease when the window is located in fuzzy regions”, if the performance is lower for a (45-55) window, than for a (80-90) window, this means that the binary encoding is on average (across all neurons) more fuzzy in (45-55) window than in (80-90). 
It is thus more likely that the separation points between the categories are sampled from U(45,55), rather than from U(10,90).\nWe hope that this answers the doubts of the reviewer, and are open to any further discussion. \n\n--------\nTable 1: “52th percentile vs actual 53 percentile shown”. \n--------\n\nIndeed. Thanks!\n\n--------\n> Table 1: The more fuzzy, the higher the percentile rank of the threshold\nThis is true for the CIFAR net but the opposite is true for ResNet, right?\n--------\n\nOur statement is not based on the comparison of ReLU thresholds inside the same network (cifar CNN or ResNet), but across networks. Specifically, we have compared the ReLU thresholds for the penultimate layer in cifar CNN and in ResNet. In both networks, those thresholds correspond to similar percentile ranks for all neurons, indicating convergence to a precise value. Moreover, we observe that this percentile value is larger for ResNet than for the cifar CNN (84% vs 53%). Since ImageNet is a more complicated task, leading to fuzzier intermediate representations (see Figure 4), we state that “the more fuzzy, the higher the percentile rank of the threshold”. This statement is however not sufficiently supported by experimental evidences to lead to a definitive conclusion. We have decided to remove the ReLU analysis section (previously section 5.3), since it didn’t provide enough conclusive elements. ", "Thank you very much for these encouraging and involved comments. We’ve done our best to answer them appropriately, and are looking forward to your feedback.\n\n--------\nFig 1 shows “the histograms of the average sign of partial derivatives of the loss with respect to activations, as collected over training for a random neuron in five different layers.” Let’s consider the top-left subplot of Fig 1, showing a heavily bimodal distribution (modes near -1 and +1.)....\n... As far as I can tell, we can either average it (giving, say, .85 if the list has far more +1 values than -1 values) OR we can show a histogram of the list, which would just be two bars at -1 and +1. But we can’t do both, indicating that some assumption above was incorrect. Which assumption in reading the text was incorrect?\n--------\n\nThanks for your comment, which reveals a lack of clarity in our explanation. When analyzing the derivatives, we treat two dimensions separately: input samples and training step. When recording the partial derivatives of an activation, we keep track of both dimensions, such that we can easily access the derivative signs of the activation of a single sample across the training procedure. To create the histograms of figure 1, we first compute, for each individual sample separately, the average of the derivative signs over all the recorded training steps. This tells us whether an increased activation generally benefits (negative average) or penalizes (positive average) the classification of this sample. To extend the analysis to all samples, the histogram of average signs of derivatives (one scalar per sample) is plotted over all input samples. \n\nWhen reading the manuscript at the light of your comment, we have observed that the confusion is largely induced by the fact that we generally use the term ‘activation’ to refer to the ‘activation of a single sample’. 
Example:\nWhat we have written: “In particular, we observe that an activation is pushed in the same direction throughout nearly all the training: either up or down”\nWhat we had in mind: “In particular, we observe that the activation OF A SAMPLE is pushed in the same direction throughout nearly all the training: either up or down”\nSimilarly, when talking about “the average sign of partial derivatives with respect to an activation”, we had in mind “the average sign of partial derivatives with respect the activation of a sample”\nThe revised version is much clearer in this regard.\n\n--------\nFurther in this direction, Section 4.1 claims “Zero partial derivatives are ignored to make the signal more clear.” Are these zero partial derivatives of the post-relu or pre-relu? The text (Sec 3) points to activations as being post-relu, but in this case zero-gradients should be a very small set (only occuring if all neurons on the next layer had either zero pre-relu gradients, which is common for individual neurons but, I would think, not for all at once). Or does this mean the pre-relu gradient is zero, e.g. the common case where the gradient is zeroed because the pre-activation was negative and the relu at that point has zero slope? In this case we would be excluding a large set (about half!) of the gradient values, and it didn’t seem from the context in the paper that this would be desirable.\n--------\n\nWe indeed analyze post-relu derivatives. Zero derivatives actually emerge for a sample when it is well classified, making the gradients too small to be handled by the float32 precision (smallest number is 1.19209e-07). Since the notion of sign is not relevant anymore for zero values, we compute the average of partial derivative signs for a sample only over the training steps for which the partial derivative is non-zero. We have made the reasoning explicit in the paper! Thanks for pointing it out.\n\nIn particular, the first paragraph of section 4.1 has been revised to account for your first two questions:\n“We proceed to a standard training of the cifar CNN and the MNIST MLP networks until convergence. During training, but in a separate process, we record the gradient of the loss with respect to the activations of each input on a regular basis (every 100 batches for cifar and every 10 batches for MNIST, leading to 1600 and 2350 recordings respectively). Measures were only performed on a random subset of neurons and samples due to memory limitations (see Appendix for more details). For each (input sample, neuron) pair, we compute the average sign of the partial derivatives with respect to the corresponding activation, as recorded at the different training steps. This value tells us whether an increased activation generally benefits (negative average) or penalizes (positive average) the classification of the sample. Due to the use of float32 precision, zero partial derivatives appear at some point in training when the sample is correctly classified, making the gradient very small. Since the signs of these values are not relevant, they are ignored when the average sign is calculated.”\n", "You may discuss the relation to the recent NIPS publication https://arxiv.org/abs/1710.10328. \n\nApparently your experimental findings can be explained in the lens of generalized hamming distance as introduced in the NIPS paper. But I am not sure if there are anything more fundamental disclosed by your interesting experimental results. " ]
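To make the recording procedure described in the revised Section 4.1 paragraph above concrete, the following is a minimal PyTorch sketch under simplifying assumptions of ours (a toy two-layer model, full-batch recording at every step, post-activation gradients exposed via retain_grad); it illustrates how the per-sample average sign of the loss derivative with respect to one neuron's activation can be accumulated while ignoring (near-)zero derivatives, and it is not the exact instrumentation used for the paper.

    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    x = torch.randn(256, 20)                 # fixed set of tracked samples
    y = torch.randint(0, 2, (256,))
    model = nn.Sequential(nn.Linear(20, 16), nn.ReLU(), nn.Linear(16, 2))
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.CrossEntropyLoss()

    sign_sum = torch.zeros(256)              # running sum of derivative signs, one entry per sample
    sign_cnt = torch.zeros(256)              # number of recordings with a non-zero derivative

    for step in range(200):
        opt.zero_grad()
        h = torch.relu(model[0](x))          # post-ReLU activations of the hidden layer
        h.retain_grad()                      # keep gradients w.r.t. these activations
        loss = loss_fn(model[2](h), y)
        loss.backward()
        g = h.grad[:, 0]                     # dLoss/d(activation) of neuron 0, one value per sample
        nz = g.abs() > 1e-12                 # ignore (near-)zero derivatives
        sign_sum[nz] += torch.sign(g[nz])
        sign_cnt[nz] += 1
        opt.step()

    avg_sign = sign_sum / sign_cnt.clamp(min=1)   # values in [-1, 1]; histogram these over samples
    print(avg_sign[:10])

A histogram of avg_sign over all samples is the analogue of the per-neuron histograms discussed above: a strongly bimodal shape means each sample's activation is pushed in an essentially constant direction throughout training.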
[ 7, 4, 5, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_H1srNebAZ", "iclr_2018_H1srNebAZ", "iclr_2018_H1srNebAZ", "rk3wqOUxz", "Hk5Z4hOlf", "Hk5Z4hOlf", "Hk5Z4hOlf", "S1bAZxcxf", "rJ07mFogG", "rJ07mFogG", "iclr_2018_H1srNebAZ" ]
iclr_2018_r16Vyf-0-
Image Transformer
Image generation has been successfully cast as an autoregressive sequence generation or transformation problem. Recent work has shown that self-attention is an effective way of modeling textual sequences. In this work, we generalize a recently proposed model architecture based on self-attention, the Transformer, to a sequence modeling formulation of image generation with a tractable likelihood. By restricting the self-attention mechanism to attend to local neighborhoods we significantly increase the size of images the model can process in practice, despite maintaining significantly larger receptive fields per layer than typical convolutional neural networks. We propose another extension of self-attention allowing it to efficiently take advantage of the two-dimensional nature of images. While conceptually simple, our generative models trained on two image data sets are competitive with or significantly outperform the current state of the art in autoregressive image generation on two different data sets, CIFAR-10 and ImageNet. We also present results on image super-resolution with a large magnification ratio, applying an encoder-decoder configuration of our architecture. In a human evaluation study, we show that our super-resolution models improve significantly over previously published autoregressive super-resolution models. Images they generate fool human observers three times more often than the previous state of the art.
rejected-papers
This paper had some quality and clarity issues, and the lack of motivation for the approach was pointed out by multiple reviewers. It is just too far from the acceptance threshold.
train
[ "SyA8u9OlG", "BkYvzAixG", "SJIFtpEbM", "r1pOCmnMM", "rkNH073Mf", "H1vxCX3GM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "Summary\n\nThis paper extends self-attention layers (Vaswani et al., 2017) from sequences to images and proposes to use the layers as part of PixelCNNs (van den Oord et al., 2016). The proposed model is evaluated in terms of visual appearance of samples and log-likelihoods. The authors find a small improvement in terms of log-likelihood over PixelCNNs and that super-resolved CelebA images are able to fool human observers significantly more often than PixelRNN based super-resolution (Dahl et al., 2017).\n\nReview\n\nAutoregressive models are of large interest to the ICLR community and exploring new architectures is a valuable contribution. Using self-attention in autoregressive models is an intriguing idea. It is a little bit disappointing that the added model complexity only yields a small improvement compared to the more straight-forward modifications of the PixelCNN++. I think the paper would benefit from a little bit more work, but I am open to adjusting my score based on feedback.\n\nI find it somewhat surprising that the proposed model is only slightly better in terms of log-likelihood than a PixelRNN, but much better in terms of human evaluation – given that both models were optimized for log-likelihood. Was the setup used with Mechanical Turk exactly the same as the one used by Dahl et al.? These types of human evaluations can be extremely sensitive to changes in the setup, even the phrasing of the task can influence results. E.g., presenting images scaled differently can mask certain artifacts. In addition, the variance between subjects can be very high. Ideally, each method included in the comparison would be re-evaluated using the same set of observers. Please include error bars.\n\nThe CelebA super-resolution task furthermore seems fairly limited. Given the extreme downsampling of the input, the task becomes similar to simply generating any realistic image. A useful baseline would be the following method: Store the entire training set. For a given query image, look for the nearest neighbor in the downsampled space, then return the corresponding high-resolution image. This trivial method might not only perform well, it also highlights a flaw in the evaluation: Any method which returns stored high-resolution images – even if they don’t match the input – would perform at 50%. To fix this, the human observers should also receive the low-resolution image and be asked to identify the correct corresponding high-resolution image.\n\nUsing multiplicative operations to model images seems important. How does the self-attention mechanism relate to “gated” convolutions used in PixelCNNs? Could gated convolutions not also be considered a form of self-attention?\n\nThe presentation/text could use some work. Much of the text assumes that the reader is familiar with Vaswani et al. (2017) but could easily be made more self-contained by directly including the definitions used. E.g., the encoding of positions using sines and cosines or the multi-head attention model. I also felt too much of the architecture is described in prose and could be more efficiently and precisely conveyed in equations.\n\nOn page 7 the authors write “we believe our cherry-picked images for various classes to be of higher perceptual quality”. This is a meaningless result, not only because the images were cherry-picked. Generating realistic images is trivial - you just need to store the training images. 
Analyzing samples generated by a generative model (outside the context of an application) should therefore only be done for diagnostic purposes or to build intuitions, but not to judge the quality of a model.\n\nPlease consider rephrasing the last sentence of the abstract. Generating images which “look pretty cool” should not be the goal of a serious machine learning paper or a respected machine learning conference.", "This paper extends the PixelCNN/RNN based (conditional) image generation approaches with a self-attention mechanism. \n\nPros:\n- qualitatively the proposed method has good results in several tasks\n\nCons:\n- writing needs to be improved\n- lack of motivation\n- not easy to follow technical details\n\n\nThe motivation part is missing. It seems to me that the paper simply tries to combine the Transformer with PixelCNN/RNN based image generation without a clear explanation of why this is needed. Why is self-attention so important for image generation? Why not just a deeper network with more parameters? Throughout the paper I cannot find a clear answer to this. Based on this I couldn't see a clear contribution. \n\nThe paper is difficult to follow given the current flow. Each subsection of section 3 starts with technical details without explaining why we do this. Some sentences, like \"look pretty cool\", are not academic. \n\nThe experiments lack comparisons except the human evaluation, while the log-likelihood improvement is marginal. I am wondering how the human evaluation is conducted. Does it compare all the competing algorithms against the same sub-samples of the GT data? How many pairs have been compared for each algorithm? Apart from this metric, I would like to see a qualitative comparison between competing algorithms in the paper as well. Other approaches, e.g. SRGAN, could also be compared. \n\nI am also interested in the authors' claim that an implementation error influences the log-likelihood. Has this been fixed after the deadline?", "In this paper the authors propose an autoregressive image generation model that incorporates a self-attention mechanism. The latter is inspired by the work of [Vaswani et al., 2017], which was proposed for sequences and is extended to 2D images in this work. The authors apply their model to super-resolution of face images, as well as image completion (aka inpainting) and generation, either unconditioned or conditioned on one of a small number of image classes from the CIFAR-10 and ImageNet datasets. The authors evaluate their method in terms of visual quality of their generated images via an Amazon Mechanical Turk survey and quantitatively by reporting slightly improved log-likelihoods. \n\nWhile the paper is well written, the motivation for combining self-attention and autoregressive models unfortunately remains unclear, even more so as the reported quantitative improvement in terms of log-likelihood is only marginal. The technical exposition is at times difficult to follow, with some design decisions of the network layout being quite ad hoc and not well motivated. Expressing the involved operations in mathematical terms would help comprehend some of the technical details and add to the reproducibility of the proposed model. \n\nAnother concern is the experimental evaluation. While the reported log-likelihoods are only marginally better, the authors report a significant boost in how often humans are fooled by the generated images. 
While the image generation is conditioned on the low-resolution input, the workers in the Amazon Mechanical Turk study get to see the high-resolution images only. Of course, a human observer would pick the one image out of the two shown images which is more realistic although it might have nothing to do with the input image, which seems wrong. Instead, the workers should see the low-res input image and then have to decide which high-res image seems a better match or more likely.\n\nOverall, the presented work looks quite promising and an interesting line of research. However, in its present form the manuscript doesn't seem quite ready for publication yet. Though, I would strongly encourage the authors to make the exposition more self-contained and accessible, in particular through rigorous mathematical terms, which would help comprehend the involved operations and help understand the proposed mechanism.\n\nAdditional comments:\n- Abstract: \"we also believe to look pretty cool\". Please re-consider the wording here. Generating \"pretty cool\" images should not be the goal of a scientific work.\n", "We thank the reviewer for the thorough and insightful review.\nReviewer:\n...surprising that the proposed model is only slightly better in terms of log-likelihood ... Was the setup used ... exactly the same as the one used by Dahl et al.?\n\nOur response:\nOur generation models had not finished training on ImageNet. We now have significantly better perplexities (3.78 bits/dim, 3.77 with checkpoint averaging) than the row PixelRNN (3.86 bits/dim) and the Gated PixelCNN (3.83 bits/dim, previous SOTA) models. Gated PixelCNN improved over the previous SOTA by only 0.03, our improvement is twice as large. \n\nReviewer:\n… human evaluations can be extremely sensitive to changes in the setup … Please include error bars.\n\n\nOur response:\nWe included error bars which show that the variance is small, and the subjects (50 per image) are fairly clear on their preferences of images. We ensured that we use the exact same evaluation setup, down to the interface presented to the subjects.\n\nReviewer:\nThe CelebA super-resolution task … to fix this, the human observers should also receive the low-resolution image\n\nOur response:\nWe agree, yet followed Dahl, et al.’s procedure exactly for comparability. While the shortcoming of the evaluation does present a potential loophole, allowing the model to generate images that do not downsample back to the input image, our model does not exploit this. It does generate images that, when downsampled, yield images very close to the low resolution input.\n\nTo demonstrate this, we compared the pixel/channel-level L2^2-distance between the low-resolution input image and the downsampled output image. Across 300 images from the CelebA test set, the average per-pixel, per-channel distance in normalized intensities between the input and the downsampled output images is 0.0106. The average distance between each of the low-resolution input images and 100 other downsampled images from the CelebA test set each is 0.1188. Given these are all cropped images of faces, we believe the difference to be significant. To underline this, we chose those two input images for which the downsampled version of the output image generated by our model is most different from the input image according to this metric and made them available here. 
Even here the downsampled output of our model is very similar to the original input.\n\nDue to shortage of space, we kindly request the reviewer to refer to the links in our response to Anonymous Reviewer 2.\n\nThe respective distances are 0.0344 and 0.03119 (between the input and model output images, each downsampled). We hope this shows that our model generates plausible images adhering to the intended constraint: that when downsampled they are very similar to the original low-resolution input image.\n\nThe input images constitute rich conditioning information such as hair and skin color, object position and pose, background color, etc. We believe there is real demand for models improving perceptual detail in images without a specific, expected output.\n\n\nReviewer:\n...Could gated convolutions not also be considered a form of self-attention?\n\n\n\nOur response:\nThere is some similarity to the multiplicative effects in self attention and gated CNNs, but there are also clear differences. Both use gating to scale the activations in multiplicative terms, which can prevent gradients from ‘blowing up’.\n\nIn self-attention, we have two sources of multiplicative interactions: 1) softmax-gated query key inner products which give us multiplicative effects between query and key representations, and 2) multiplicative interaction of the softmax weights with all values at each position, once per head. In self-attention, we first filter (softmax gating), and then aggregate (linear combination of values) per head, while gated convolutions first aggregate (applying the kernel) and then filter (gating). Because of the large receptive field of local self-attention, we achieve multiplicative interactions between positions that are far apart, which can be computationally expensive for convolutions. Both gating mechanisms are complementary and can be used together, e.g. the gating from gated PixelCNN could replace our position-wise FFNN layers.\n\nReviewer:\nThe presentation/text could use some work. ….\n\nOur response:\nWe hoped to fit the submission within 9 pages, focusing on novel content at the expense of being self-contained. However, Equation 1 describes the computation applied to each position in every layer completely - the only exception being multiple heads. If accepted, we will repeat more unchanged details of the model.\n\nReviewer:\nThe authors write “we believe our cherry-picked images for various classes to be of higher perceptual quality”.\n\nOur response:\nWe have revised our language and now write “we believe our curated images for various classes to be of reasonable perceptual quality”, without comparing the perceptual quality to other work.\n\nWe removed the statement on “pretty cool” images from the abstract.\n", "We thank the reviewer for your insightful review.\n\nAt the time of submission, our conditional and unconditional generation generation models had not finished training on ImageNet, the harder task. We now have even better perplexities (3.78 bits/dim, 3.77 with checkpoint averaging) than the row PixelRNN (3.86 bits/dim) and the Gated PixelCNN (3.83 bits/dim, previous state of the art) models.\nGated PixelCNN improved over the previous state of the art by only 0.03, while our improvement is twice as large. Over the entire image of 3072 dimensions, the improvement in bits is quite significant.\n\nReviewer:\nThe motivation part is missing. ... Why not just a deeper network with more parameters? Throughout the paper I cannot find a clear answer for this. 
Based on this I couldn't see a clear contribution. \n\nOur response:\nWe agree with the reviewer that our motivation might not have been described well in the original submission. We added a more thorough motivation to the introduction of the paper, which we repeat here in summary for your convenience. \n\nOne disadvantage of CNNs compared to RNNs is their typically fairly limited receptive field. This can adversely affect their ability to model long-range phenomena common in images, such as symmetry and occlusion, especially with a small number of layers. Growing the receptive field, like deepening the network, however, comes at a significant cost in number of parameters and hence computation and can make training such models more challenging.\n\nWith the Image Transformer, we aim to find a better balance in the trade-off between the virtually unlimited receptive field of the necessarily sequential PixelRNN and the limited receptive field of the much more parallelizable PixelCNN and its various extensions.\n\nWe propose eschewing recurrent and convolutional networks in favor of a model based entirely on a locally restricted form of multi-head self-attention that could also be interpreted as a sparsely parameterized form of convolution, allowing for significantly larger receptive fields than CNNs at the same number of parameters.\n\nWe furthermore added additional experiments indicating that indeed, increasing the size of the receptive field significantly improves the performance of our model, allowing it to (now significantly, see below) outperform the state of the art. These show that increasing the receptive field from 16 to 256 positions, for instance, improves perplexity on CIFAR-10 Test from 3.47 to 2.99.\n\n\nReviewer:\nThe paper is difficult to keep the track given the current flow. Each subsection of section 3 starts with technique details without explaining why we do this. Some sentences like \"look pretty cool\" is not academic. \n\nOur response:\nWe removed that sentence from the abstract, added additional material on the motivation, as summarized above, and tried to improve the overall flow in a few places.\n\n\nReviewer:\nThe experiments lack comparisons except the human evaluation, … Does it compare all the competing algorithms against the same sub-samples of the GT data? ...\n\nOur response:\nWe follow the same evaluation procedure as Dahl, et al.’s paper but do not use the same exact sub-samples as we could not recover them. For each model, we use 50 randomly selected dev images where each image is rated by 50 workers. We use a different set of workers for each model. Also, our latest results on ImageNet unconditional generation perplexities show a significant improvement in log likelihoods over the previous state-of-the-art. \n\nReviewer:\nApart from this metric, I would like to see qualitative comparison between competing algorithms in the paper as well. Also other approaches e.g. SRGAN could be compared.\n\nOur Response:\nIt would be very difficult to conduct a proper qualitative evaluation because we are missing representative samples from the various algorithms. We hope that our human evaluation numbers capture some qualitative differences between our model and PixelCNN.\n\nReviewer:\nI am also interested about the author's claim that the implementation error that influences the log-likelihood. Has this been fixed after the deadline?\n\nOur response:\nWe have indeed fixed the bug. The resulting images are now free of artifacts and the log-likelihood did improve. 
That said, we are still in the middle of conducting a final, apples-to-apples comparison between 1D and 2D self-attention on various super-resolution tasks and will include the results of this in the final paper.\n", "We thank the reviewer for your review.\nAt submission time, our generation models had not finished training on ImageNet, the harder task. We now have significantly better perplexities (3.78 bits/dim, 3.77 with checkpoint averaging) than the row PixelRNN (3.86 bits/dim) and the Gated PixelCNN (3.83 bits/dim, previous SOTA) models.\nGated PixelCNN improved over the previous SOTA by only 0.03, while our improvement is twice as large. The improvement in bits over an entire image with 3072 positions is significant.\n\nReviewer:\nWhile the paper is well written, …. some design decisions of the network layout being quite ad hoc and not well motivated.\n\nResponse:\nWe agree that our motivation was not sufficiently described in our original submission. We added a much more detailed motivation in the introduction, summarized here for convenience.\n\nA disadvantage of CNNs compared to RNNs is their typically limited receptive field. This can adversely affect their ability to model long-range phenomena common in images, such as symmetry and occlusion, in a small number of layers. Growing the receptive field, or deepening the network, comes at great cost in number of parameters and computation and can make training such models harder.\n\nIn this work we aim to find a better balance in the trade-off between the virtually unlimited receptive field of the necessarily sequential PixelRNN and the limited receptive field of the much more parallelizable PixelCNN.\n\nThe locally restricted form of multi-head self-attention we propose could also be interpreted as a sparsely parameterized form of convolution, with significantly larger receptive fields than CNNs at the same number of parameters.\n\nWe added experimental results indicating that, indeed, increasing the size of the receptive field improves the performance of our model significantly.\n\n\nReviewer:\nExpressing the involved operations in mathematical terms would help comprehend ... \n\nOur response:\nWe agree, though Equation 1 does describe the computation performed per position in each of the layers completely, with the only exception being multiple heads. If the paper is accepted, we will elaborate more on the details of the model to make the content more self-contained, repeating the equations for multi-head attention and the positional encodings, etc.\n\nReviewer:\nAnother concern is the experimental evaluation....\n\nOur response:\nWe agree, but decided to exactly follow Dahl, et al.’s procedure for comparability. While this shortcoming does present a potential loophole by allowing the model to generate images that do not downsample back to the input image (which we consider to be rich conditioning), our model does not exploit this, but instead generates images that, when downsampled again, yield images very close to the low resolution input.\n\nTo demonstrate this, we conducted an analysis comparing the pixel/channel-level L2^2-distance between the low-resolution input image and the downsampled output image.\nAcross a set of 300 images from the CelebA test set, the average per-pixel, per-channel distance in normalized intensities between the input and the downsampled output images is 0.0106. 
The average distance between each of the low-resolution input images and 100 other downsampled images from the CelebA test set is 0.1188. Given that these are all cropped images of faces, we believe the difference to be significant. To underline this, we chose those two input images for which the downsampled version of the output image generated by our model is most different from the input image according to this metric and made them available. The downsampled output of our model is still very similar to the original, downsampled input.\n\nExample 1\noriginal input image: http://tiny.cc/kq8mpy\ndownsampled input image: http://tiny.cc/xq8mpy\nsuper-resolved generated image (generated by the model): http://tiny.cc/7r8mpy\ndownsampled super-resolved generated image: http://tiny.cc/gs8mpy\n\nExample 2\noriginal input image: http://tiny.cc/kt8mpy\ndownsampled input image: http://tiny.cc/tt8mpy\nsuper-resolved generated image (generated by the model): http://tiny.cc/5t8mpy\ndownsampled super-resolved generated image: http://tiny.cc/eu8mpy\n\nThe respective distances are 0.0344 and 0.03119 (between the downsampled original and the downsampled model output). We hope this persuades the reviewer that our model generates plausible images that adhere to the intended constraint: that they, when downsampled, are very similar to the original low-resolution input image.\n\nReviewer:\n...strongly encourage the authors to make the exposition more self-contained and accessible, ….\n\nOur Response:\nWe hope the additional motivation helped improve the accessibility. If the paper is accepted, we will be happy to elaborate more on the details of the model from Vaswani et al. (2017) to make the content more self-contained.\n\nWe removed the statement about generating “pretty cool” images from the abstract.\n" ]
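A small illustrative sketch of the self-attention versus gated-convolution contrast discussed in the rebuttal above ("filter, then aggregate" versus "aggregate, then filter"). It assumes a single attention head, a short 1-D sequence of feature vectors, and random projection matrices; none of the shapes or weights correspond to the paper's actual model.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
T, d = 8, 4                                  # sequence length and feature size (illustrative)
X = rng.normal(size=(T, d))                  # one feature vector per position
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

# Self-attention (single head): first filter with softmax-gated query-key
# inner products, then aggregate the values with those weights.
Q, K, V = X @ Wq, X @ Wk, X @ Wv
weights = softmax(Q @ K.T / np.sqrt(d))      # (T, T) gate over positions
attended = weights @ V                       # aggregation happens after the gating

# Gated 1-D convolution (kernel size 3): first aggregate with the kernel,
# then filter the aggregated activations with a sigmoid gate.
Wc = rng.normal(size=(3, d, d))
Wg = rng.normal(size=(3, d, d))
Xp = np.pad(X, ((1, 1), (0, 0)))             # pad so the output keeps length T
conv = np.stack([sum(Xp[t + k] @ Wc[k] for k in range(3)) for t in range(T)])
gate = np.stack([sum(Xp[t + k] @ Wg[k] for k in range(3)) for t in range(T)])
gated = conv * (1.0 / (1.0 + np.exp(-gate))) # gating happens after the aggregation

print(attended.shape, gated.shape)           # both (T, d)
```

Both paths produce one vector per position; the difference highlighted in the rebuttal is only whether the multiplicative gate is applied before or after the aggregation step, and how far apart the interacting positions can be.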
[ 6, 3, 5, -1, -1, -1 ]
[ 4, 3, 4, -1, -1, -1 ]
[ "iclr_2018_r16Vyf-0-", "iclr_2018_r16Vyf-0-", "iclr_2018_r16Vyf-0-", "SyA8u9OlG", "BkYvzAixG", "SJIFtpEbM" ]
iclr_2018_HklpCzC6-
Image Segmentation by Iterative Inference from Conditional Score Estimation
Inspired by the combination of feedforward and iterative computations in the visual cortex, and taking advantage of the ability of denoising autoencoders to estimate the score of a joint distribution, we propose a novel approach to iterative inference for capturing and exploiting the complex joint distribution of output variables conditioned on some input variables. This approach is applied to image pixel-wise segmentation, with the estimated conditional score used to perform gradient ascent towards a mode of the estimated conditional distribution. This extends previous work on score estimation by denoising autoencoders to the case of a conditional distribution, with a novel use of a corrupted feedforward predictor replacing Gaussian corruption. An advantage of this approach over more classical ways to perform iterative inference for structured outputs, like conditional random fields (CRFs), is that it is not any more necessary to define an explicit energy function linking the output variables. To keep computations tractable, such energy function parametrizations are typically fairly constrained, involving only a few neighbors of each of the output variables in each clique. We experimentally find that the proposed iterative inference from conditional score estimation by conditional denoising autoencoders performs better than comparable models based on CRFs or those not using any explicit modeling of the conditional joint distribution of outputs.
rejected-papers
The experimental work was seen as one of the main weaknesses.
train
[ "B1hVeGtez", "S1cNDW9eM", "rynv0Uf-f" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper proposes an image segmentation method which iteratively refines the semantic segmentation mask obtained from a deep net. To this end the authors investigate a denoising auto-encoder (DAE). Its purpose is to provide a semantic segmentation which improves upon its input in terms of the log-likelihood.\n\nMore specifically, the authors `propose to condition the autoencoder with an additional input’ (page 1). To this end they use features obtained from the deep net. Instead of training the DAE with ground truth y, the authors found usage of the deep net prediction to yield better results.\n\nThe proposed approach is evaluated on the CamVid dataset.\n\nSummary:\n——\nI think the paper discusses a very interesting topic and presents an elegant approach. A few points are missing which would provide significantly more value to a reader. Specifically, an evaluation on the classical Pascal VOC dataset, details regarding the training protocol of the baseline (which are omitted right now), an assessment regarding stability of the proposed approach (not discussed right now), and a clear focus of the paper on segmentation or conditioning. See comments below for details and other points.\n\nComments:\n——\n1. When training the DAE, a combination of squared loss and categorical cross-entropy loss is used. What’s the effect of the squared error loss and would the categorical cross-entropy on its own be sufficient? This question remains open when reading the submission.\n\n2. The proposed approach is evaluated on the CamVid dataset which is used less compared to the standard and larger Pascal VOC dataset. I conjecture that the proposed approach wouldn’t work too well on Pascal VOC. On Pascal VOC, images are distinctly different from each other whereas subsequent frames are similar in CamVid, i.e., the road is always located at the bottom center of the image. The proposed architecture is able to take advantage of this dataset bias, but would fail to do so on Pascal VOC, which has a much more intricate bias. It would be great if the authors could check this hypothesis and report quantitative results similar to Tab. 1 and Fig. 4 for Pascal VOC.\n\n3. The authors mention a grid-search for the stepsize and the number of iterations. What values were selected in the end on the CamVid and hopefully the Pascal VOC dataset?\n\n4. Was the dense CRF applied out of the box, or were its parameters adjusted for good performance on the CamVid validation dataset? While parameters such as the number of iterations and epsilon are tuned for the proposed approach on the CamVid validation set, the submission doesn’t specify whether a similar procedure was performed for the CRF baseline.\n\n5. Fig. 4 seems to indicate that the proposed approach doesn’t converge. Hence an appropriate stepsize and a reasonable number of iterations need to be chosen on a validation set. Choosing those parameters guarantees that the method performs well on average, but individual results could potentially be entirely wrong, particularly if large step sizes are chosen. I suspect this effect to be more pronounced on the Pascal VOC dataset (hence my conjecture in point 2). To further investigate this property, as a reader, I’d be curious to get to know the standard deviation/variance of the accuracy in addition to the mean IoU. Again, it would be great if the authors could check this hypothesis and report those results.\n\n6. I find the experimental section to be slightly disconnected from the initial description. 
Specifically, the paper `proposes to condition the autoencoder with an additional input’ (page 1). No experiments are conducted to validate this proposal. Hence the main focus of the paper (image segmentation or DAE conditioning) remains vague. If the authors choose to focus on image segmentation, a comparison to state-of-the-art should be provided on classical datasets such as Pascal VOC, if DAE conditioning is the focus, some experiments in this direction should be included in addition to the Pascal VOC results.\n\nMinor comment:\n——\n- I find it surprising that the authors choose not to cite some related work on combining deep nets with structured prediction.", "I am a returning reviewer for this paper, from a previous conference. Much of the paper remains unchanged from the time of my previous review. I have revised my review according to the updates in the paper:\n\nSummary of the paper:\nThis work proposes a neural network based alternative to standard CRF post-processing techniques that are generally used on top semantic segmentation CNNs. As an alternative to CRF, this work proposes to iteratively refine the predicted segmentation with a denoising auto encoder (DAE). Results on CamVid semantic segmentation dataset showed better improvements over base CNN predictions in comparison to popular DenseCRF technique.\n\n\nPaper Strengths:\n- A neat technique for incorporating CRF-like pixel label relations into semantic segmentation via neural networks (auto encoders).\n- Promising results on CamVid segmentation dataset with reliable improvements over baseline techniques and minor improvements when used in conjunction with recent models.\n\n\nMajor Weaknesses:\nI have two main concerns for this work:\n- One is related to the novelty as the existing work of Xie et al. ECCV'16 also proposed similar technique with very similar aim. I think, conceptual or empirical comparisons are required to assess the importance of the proposed approach with respect to existing ones. Mere citation and short discussion is not enough. Moreover, Xie et al. seem to have demonstrated their technique on two different tasks and on three different datasets.\n- Another concern is related to experiments. Authors experimented with only one dataset and with one problem. But, I would either expect some demonstration of generality (more datasets or tasks) or strong empirical performance (state-of-the-art on CamVid) to assess the empirical usefulness with respect to existing techniques. Both of these aspects are missing in experiments. \n\n\nMinor Weaknesses:\n- Negligible improvements with respect to CRF techniques on modern deep architectures.\n- Runtime comparison is missing with respect to baseline techniques. Applying the proposed DAE 40-50 times seems very time consuming for each image.\n- By back-propagating through CRF-like techniques [Zheng et al. ICCV'15, Gadde et al. ECCV'16, Chandra et al. ECCV'16 etc.], one could refine the base segmentation CNN as well. It seems this is also possible with the proposed architecture. Is that correct? Or, are there any problems with the end-to-end fine-tuning as the input distribution to DAE constantly changes? Did authors try this?\n\n\nSuggestions:\n- Only Gaussian noise corruption is used for training DAE. Did authors experiment with any other noise types? 
Probably, more structured noise would help in learning better contextual relations across pixel labels?\n\nClarifications:\nWhat is the motivation to add Euclidean loss to the standard cross-entropy loss for segmentation in Eq-3?\n\nReview summary:\nThe use of denoising auto encoders (DAEs) for capturing pixel label relations and then using them to iteratively refine the segmentation predictions is interesting. But, incomplete comparisons with similar existing work and limited experiments make this a weak paper.", "This paper proposes an iterative procedure on top of standard image semantic segmentation networks. \n\nThe submission proposes a change to the training procedure of stacking a denoising auto-encoder for image segmentation. The technical contribution of this paper is small. The paper aims to answer a single question: When using a DAE network on top of a segmentation network output, should one condition on the predicted, or the ground truth segmentation? (why not on both?) The answer is that conditioning on the predicted image for a second round of inference is a bit better. The method also performs a bit better (no statistical significance tests) than other post-processing methods (Dense-CRF, CRF-RNNs).\n\nExperimental results are available only on a small dataset and for two different networks. This may be sufficient for a first proof-of-concept but a comparison against standard benchmark methods and datasets for semantic segmentation is missing. It is unlikely that, in its current state, this submission is a contribution to image segmentation; the evidence is weak and several improvements are suggested.\n\n- The experimental evidence is insufficient. The improvements are small, statistical tests are not available. The CamVid dataset is the smallest of the image segmentation datasets used these days; more compelling would be MSCOCO or Cityscapes, or better, most of them. The question whether this network effect is tied to small-dataset and low-resolution settings is not answered. Will a similar effect be observed when compared to networks trained on way more data (e.g., CityScapes)? \n- The most important baseline is missing: auto-context [Tu08]. Training the same network the DAE uses in an auto-context way. That is, take the output of the first model, then train another network using both input and prediction again for semantic segmentation (and not Eq.3). This is easy to do, practically almost always achieves better performance, and I would assume the resulting network is faster and performs similarly to the method presented in this submission (guessing, I have not tried). In any case, to me this is the most obvious baseline. \n- I am in favour of probabilistic methods, but the availability of an approximation of p(y) (or the nearest mode) is not used (as is most often the case).\n- Runtimes are absent. This is a practical consideration which is important especially if there is little technological improvement. The DAE model of this submission compares to simple filtering methods such as the Krähenbühl & Koltun DenseCRF, which are fast, and the performance results are comparable. The question whether this is practically relevant is missing; judging from the construction I guess this does not fare well. Also training time is significantly more, please comment.\n- The related work is very well written, thanks. 
This proposal is conceptually very similar to auto-context [Tu08], and this reference is missing (this is also the most important baseline).\n\n[Tu08] Tu, “Auto-context and its application to high-level vision tasks”, CVPR 2008\n\n\n" ]
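For readers unfamiliar with the iterative-inference idea these reviews discuss, here is a minimal sketch of refining a segmentation with a conditional denoising autoencoder. It relies on the standard result that, for a DAE r trained under Gaussian corruption, r(y) - y is proportional to an estimate of the score of the modelled distribution, so small steps in that direction move y towards a mode. The denoiser below is a toy stand-in, and the step size, iteration count, and exact update rule used in the paper may differ; as the first review notes, these hyper-parameters would be chosen on a validation set.

```python
import numpy as np

def iterative_inference(y0, x, denoiser, step=0.1, n_iters=40):
    """Refine y by repeatedly moving it along the estimated conditional score
    direction (denoiser(y, x) - y)."""
    y = y0.copy()
    for _ in range(n_iters):
        y = y + step * (denoiser(y, x) - y)
    return y

def toy_denoiser(y, x):
    """Toy stand-in for a trained conditional DAE: pulls the segmentation
    towards a placeholder 'clean' mask derived from the input features."""
    target = (x > x.mean()).astype(float)
    return 0.7 * target + 0.3 * y

rng = np.random.default_rng(0)
x = rng.normal(size=(16, 16))            # input features (illustrative)
y0 = rng.uniform(size=(16, 16))          # initial, noisy segmentation scores
y = iterative_inference(y0, x, toy_denoiser, step=0.2, n_iters=30)
print(float(np.abs(y0 - (x > x.mean())).mean()),
      float(np.abs(y - (x > x.mean())).mean()))   # refined y is closer to the toy target
```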
[ 5, 4, 4 ]
[ 5, 4, 4 ]
[ "iclr_2018_HklpCzC6-", "iclr_2018_HklpCzC6-", "iclr_2018_HklpCzC6-" ]
iclr_2018_SkymMAxAb
AirNet: a machine learning dataset for air quality forecasting
In the past decade, many urban areas in China have suffered from serious air pollution problems, making air quality forecasting a hot topic. Conventional approaches rely on numerical methods to estimate the pollutant concentration and require lots of computing power. To solve this problem, we applied widely used deep learning methods. Deep learning requires large-scale datasets to train an effective model. In this paper, we introduced a new dataset, entitled AirNet, containing a 0.25-degree-resolution grid map of mainland China, with more than two years of continuous air quality measurements and meteorological data. We published this dataset as an open resource for machine learning research and set up a baseline for 5-day air pollution forecasting. The results of experiments demonstrated that this dataset could facilitate the development of new algorithms for air quality forecasting.
rejected-papers
This is an interesting application area, but the quality of the presentation and experimental work here is not sufficient for acceptance. The numerical ratings from reviewers are just not high enough to warrant acceptance.
train
[ "S1AztKDlf", "SJl24wuxM", "SJ7ACl5xM", "ryR8G8j7M", "rkuYI9UXG", "rJBNjY8mM", "Hk5P7LSQz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "The paper is about open sourcing AirNet, a database that has interpolated air quality metrics in a spatial form along with matching meteorological data obtained elsewhere. In addition, the paper also develops a few baseline methods and evaluated using standard metrics such as detection rate, false alarms etc. The work is original and significant from an applications point of view. It looks like the dataset is useful but the model development and experimental sections are weak.\n\nStrengths:\n- open source data set for air quality monitoring that is significantly better than existing ones.\n- baseline models using standard methods including RNN.\n\nWeaknesses:\n- The air quality data is measured at point locations (stations) which are interpolated to obtain spatial data. There is no evaluation on this step to make sure the interpolated data indeed reflects truth. \n- Experiments doesn't seem to be carefully done using hyper-parameter tuning/ cross-validation. The model results may be misleading.\n- Writing and formatting needs to be improved. Some examples - \"quality of air quality\", \"people attempted to apply deep learning\", \"in the computer vision field .\", \"Some people also used the hidden Makov model\", \"radial of longitude\", \"in 2:00AM, January 23\". The paper in general was not easy to follow at many places.\n- Is Table 3 incomplete with one box unlabeled?\n- Figure 3 is not clear. It is suggested to follow standard notations to represent the RNN structure (see Jurgen Schmidhuber's paper)\n- \"DEV\" in table 4 is not explained. Is this a development set? If so, what does it mean?\n- It is said that \"reduced LSTM is improved than LSTM\". But the test results in Table 4 shows that LSTM is better.", "The major contribution lies in the producing of the data. There are several concerns.\n1. Since the major contribution lies in the production of the data, it is required for the authors to justify the quality of data. How accurate they are? What are the error bounds in terms of devices of measurement? What is the measurement precision? There is no such discussion for the data source in this submission, and thus it would be really hard for the reviewer to judge the validity of the dataset. The authors claim this is the largest dataset of such purpose, but they didn't demonstrate that the smaller datasets offered previously is indeed less competitive.\n\n2. Using interpolation to align data is questionable. There are obviously many better ways to do so.\n\n3. I would suggest the authors should use the two baseline models on other air-quality datasets for comparison. It can then convince the readers this dataset is indeed a better choice for the designed task. \n\n4. This paper is not very well written. The English has certain room for improvement, and some details are missing. For instance, in Table1, Table2 and Table3, there are no captions . It is also unclear what's the purpose of Figure3 for?", "This paper's main contribution is in the building of a spatio-temporal data set on air pollution indicators as the title states.\nThe data set is built from open source data to comprise pollutants measured at a number of stations and meteorological data. Then, an air pollutant predictor is built as a baseline machine learning model with a reducedLSTM model. 
\nMost of the first part's work is in the extraction of the public data from the above mentioned sources, aligning the two data sources, and sampling considerations.\nThe paper lacks detailed explanation of the problem it is actually addressing by omitting the current systems' performance: simply stating: 1.1/page 2 \"Thus it became essential and urgent to set up a larger scale training dataset to enhance the accuracy of the forecast results.\" \nIt also lacks definition of certain application domain area terms and acronyms (PM2.5).\nCertain paragraphs need rewriting:\n - 2.2/Page 3: \"Latitude ranges from 75 degrees to 132 degrees and the north latitude range of is from 18 degrees to 51 degrees\".\n - 3.1/Page 4: \"We converted the problem of the pollutant prediction as time sequential prediction problems, as in the case of giving the past pollutant concentration x0 to xt-1.\".\nAlso, Table 1: GFS Field Description contains 6 features not 7 as stated in 2.1\n\nFor air pollutant prediction, a baseline machine learning model is built with a reducedLSTM model. \nResults seem promising but lack serious comparison with currently obtained results by other approaches as mentioned above. The statement in 5./Page 7:\"Furthermore, reduced LSTM is improved than LSTM, we assumed this is because our equation considered air pollutant dynamics, thus we gave more information to model than LSTM while keeping LSTMs advantage.\" attributes the enhanced results to the extra data (quantity) fed to the model rather than to the fact (quality), stated in the paper, that the meteorological conditions (dispersion etc.) influence the air pollutant presence/concentrations in nearby stations.\nA rewriting and clarification of certain paragraphs is therefore recommended.", "We have revised our paper based on the reviewers' comments. All of our changes are summarized below:\n1) we have corrected grammatical and spelling errors. \n2) we have added a definition of PM2.5.\n3) we have added section 2.3 to show the precision of our interpolation algorithm.\n4) we have rewritten section 4.2 to make the architecture of WipeNet clearer.\n5) we have fixed some logic errors, such as the inconsistency between the expression and the table about the comparison of LSTM and ReducedLSTM.", "Thanks for your very constructive feedback. We have uploaded a revision to incorporate your suggestions. We will try to answer your questions and concerns one by one below.\n\n- We added a discussion on the accuracy of interpolation in section 2.3. Using data from 90% of monitoring stations, the predicted data were interpolated on a geographic coordinate grid of 0.25 degree across China. The correlation between the interpolated data and the remaining 10% of monitoring stations is 0.79. Researchers at Harvard University used satellite measurements of Aerosol Optical Depth (AOD), ground topography and so on to estimate pm2.5 in areas lacking monitoring stations (Di (2017)). They obtained a coefficient of determination (r-squared) of 0.83. The accuracy of our interpolation is not much different from the results of using more data. So we think that the adjusted interpolation method for pm2.5 is still good enough for pm2.5 predictions. Thanks for your reminder, we will take better approaches to estimating pm2.5 values in areas lacking monitoring stations in our future work. \n-------\nDi, Qian, et al. 
\"Air pollution and mortality in the Medicare population.\" New England Journal of Medicine 376.26 (2017): 2513-2522.\n\n- Experiments doesn't seem to be carefully done using hyper-parameter tuning/ cross-validation. The model results may be misleading.\nRe: We used GRU (Gated Recurrent Unit), LSTM, and reducedLSTM to test our dataset. We carefully tuned the parameters of those three models to get the best result we could. The results are listed in Table 5.\n\n- Writing and formatting needs to be improved. Some examples - \"quality of air quality\", \"people attempted to apply deep learning\", \"in the computer vision field .\", \"Some people also used the hidden Makov model\", \"radial of longitude\", \"in 2:00AM, January 23\". The paper in general was not easy to follow at many places.\nRe: We fixed typos and grammar errors, and rewrote the unclear sentences.\n\n- Is Table 3 incomplete with one box unlabeled?\nRe: Table 3 (now Table 4) is complete, because we only used three kinds of counts to calculate POD, FAR and CSI. We added a short bar to indicate that this box is not applicable.\n\n- Figure 3 is not clear. It is suggested to follow standard notations to represent the RNN structure (see Jurgen Schmidhuber's paper)\nRe: Figure 3 shows the model architecture of WipeNet; we redrew it and added a reference in section 4.2.\n\n- \"DEV\" in Table 4 is not explained. Is this a development set? If so, what does it mean?\nRe: DEV in Table 4 (now Table 5) means the development dataset. It is not useful for presenting the result and conclusion, so we removed it and kept only the results on the test set. Thank you for the reminder.\n\n- It is said that \"reduced LSTM is improved than LSTM\". But the test results in Table 4 shows that LSTM is better.\nRe: As you noticed, when adding meteorological data, LSTM is better than ReducedLSTM, as Table 4 (now Table 5) shows. The sentence was wrong. We have fixed this bug. \nFurthermore, without meteorological data, ReducedLSTM outperforms LSTM. Considering its fewer parameters compared with LSTM, we think this is because we use more prior knowledge to design the model. \n\nWe hope that these replies and the revision resolve your questions. Any additional questions and suggestions are welcome and we will try our best to make things as clear as possible.", "Thanks for your very constructive feedback. We have uploaded a revision to incorporate your suggestions. We will try to answer your questions and concerns one by one below.\n\n>1. Since the major contribution lies in the production of the data, it is required for the authors to justify the quality of data. How accurate they are? What are the error bounds in terms of devices of measurement? What is the measurement precision? There is no such discussion for the data source in this submission, and thus it would be really hard for the reviewer to judge the validity of the dataset. The authors claim this is the largest dataset of such purpose, but they didn't demonstrate that the smaller datasets offered previously is indeed less competitive.\n\nRe: Our data sources include CNEMC and NOAA. CNEMC uses the measured weight method to determine the air quality according to the standard HJB-2011, while NOAA uses many different methods to measure different meteorological data. We added this information into section 2.3 as below: \nFor air quality data from CNEMC, according to the description of HJ6, the measurement error is less than 10 µg/m3. For meteorological data from NOAA, Quanzhi Ye validates the quality of Cloud data in Ye (2010). 
For example, the probability of a forecast error below 30% is 63% for Paranal.\n-------\nYe, Q.-Z. 2011, Forecasting Cloud Cover and Atmospheric Seeing for Astronomical Observing: Application and Evaluation of the Global Forecast System, Publications of the Astronomical Society of the Pacific, 123, 113-124\n\n>2. Using interpolation to align data is questionable. There are obviously many better ways to do so.\nRe: We added a discussion on the accuracy of interpolation in section 2.3. Using data from 90% of monitoring stations, the predicted data were interpolated on a geographic coordinate grid of 0.25 degree across China. The correlation between the interpolated data and the remaining 10% of monitoring stations is 0.79. Researchers at Harvard University used satellite measurements of Aerosol Optical Depth (AOD), ground topography and so on to estimate pm2.5 in areas lacking monitoring stations (Di (2017)). They obtained a coefficient of determination (r-squared) of 0.83. The accuracy of our interpolation is not much different from the results of using more data. So we think that the adjusted interpolation method for pm2.5 is still good enough for pm2.5 predictions. Thanks for your reminder, we will take better approaches to estimating pm2.5 values in areas lacking monitoring stations in our future work. \n-------\nDi, Qian, et al. \"Air pollution and mortality in the Medicare population.\" New England Journal of Medicine 376.26 (2017): 2513-2522.\n\n>3. I would suggest the authors should use the two baseline models on other air-quality datasets for comparison. It can then convince the readers this dataset is indeed a better choice for the designed task. \nRe: Thanks for your advice. We do not have enough time to run those tests at the current stage of publicizing the dataset and the preliminary studies, and we will continue with the comparison after initial recognition from peers.\n\n>4. This paper is not very well written. The English has certain room for improvement, and some details are missing. For instance, in Table1, Table2 and Table3, there are no captions . It is also unclear what's the purpose of Figure3 for?\nRe: We rewrote this paper to improve simplicity and clarity. The missing captions you mentioned were added as: \nTable 1: GFS Field Description\nTable 2: The precision of the interpolation algorithm. Each column shows the interpolation precision at a different pollution level.\nTable 3: Air pollutant dataset characteristics.\nTable 4: Concepts of hit, miss, and false alarm.\n\nFigure 3 is the model architecture of WipeNet. We redrew it and added a reference in section 4.2.\n\nWe hope that these replies and the revision resolve your questions. Any additional questions and suggestions are welcome and we will try our best to make things as clear as possible.", "Thanks for your very constructive feedback. We have uploaded a revision to incorporate your suggestions. We will try to answer your questions and concerns one by one below.\n\n>The paper lacks detailed explanation of the problem it is actually addressing by omitting the current systems' performance: simply stating: 1.1/page 2 \"Thus it became essential and urgent to set up a larger scale training dataset to enhance the accuracy of the forecast results.\" \n\nRe: Chinese Ministry of Environmental Protection (CMEP) is currently providing air quality forecasts for the next two days, but CMEP does not publish statistics on its forecast accuracy, so we did not include them in our paper. 
In our practical experience, the accuracy is not good. One air pollution warning was released on 27 October 2017 in Beijing, leading many people to cancel outdoor activities, but the pollution did not come as predicted. Our experience makes us feel that analytical work on air quality is very important, particularly considering that machine learning methods have developed so quickly. If there are public datasets, there is a good chance that researchers can build more accurate forecasting models for longer periods, which will greatly improve the lives of the public.\n\n>It also lacks definition of certain application domain area terms and acronyms (PM2.5).\nRe: We have added more background information to the introduction section. The definition of PM2.5 and its dangers have been added to the first paragraph: One of the most abundant air pollutants, PM2.5 (fine particles with a diameter of 2.5 micrometers (µm) or less), could penetrate the deepest parts of the lungs, such as the bronchioles or alveoli, and result in asthma, lung cancer, respiratory diseases, cardiovascular disease, etc.\n\n>Certain paragraphs need rewriting:\n> - 2.2/Page 3: \"Latitude ranges from 75 degrees to 132 degrees and the north latitude range of is from 18 degrees >to 51 degrees\".\nRe: Changed to \"Longitude ranges from 75 degrees to 132 degrees and the north latitude range of is from 18 degrees to 51 degrees.\"\n> - 3.1/Page 4: \"We converted the problem of the pollutant prediction as time sequential prediction problems, as in >the case of giving the past pollutant concentration x0 to xt-1.\".\nRe: We rewrote 3.1 to make it clearer.\n>Also, Table 1: GFS Field Description contains 6 features not 7 as stated in 2.1\nRe: It is 6 features; we have corrected this in 2.1.\n\n>Results seem promising but lack serious comparison with currently obtained results by other approaches as >mentioned above. \nRe: We used GRU (Gated Recurrent Unit), LSTM, and reducedLSTM to test our dataset. We carefully tuned the parameters of those three models to get the best possible results. The results are listed in Table 5.\n\n>The statement in 5./Page 7:\"Furthermore, reduced LSTM is improved than LSTM, we assumed >this is because our equation considered air pollutant dynamics, thus we gave more information to model than LSTM >while keeping LSTMs advantage.\" attributes the enhanced results to extra data (quantity) fed to the model rather >than the fact (quality) as stated in the paper that the meteorological conditions (dispersion etc.) influence the air >pollutant presence/ concentrations in nearby stations.\nRe: The original sentence was ambiguous. It was aiming to emphasize that we used more prior knowledge to improve the robustness of the model. We rewrote this paragraph to disambiguate as follows: ReducedLSTM is better than LSTM in certain cases, because we use more prior knowledge (e.g., air pollutant dynamics) to design the model than LSTM while keeping LSTM's advantage.\n\nWe hope that these replies and the revision resolve your questions. Any additional questions and suggestions are welcome and we will try our best to make things as clear as possible." ]
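A hedged sketch of the hold-out check described in the responses above (interpolate from 90% of the stations, then measure the correlation at the remaining 10%). The station coordinates and PM2.5 values below are synthetic, and the authors' actual interpolation scheme and 0.25-degree target grid are not specified in this thread, so this only illustrates the validation idea using SciPy's linear interpolation.

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)
n_stations = 1500
lon = rng.uniform(75, 132, n_stations)                  # synthetic station longitudes
lat = rng.uniform(18, 51, n_stations)                   # synthetic station latitudes
pm25 = 60 + 30 * np.sin(lon / 10) * np.cos(lat / 8) + rng.normal(0, 5, n_stations)

# Hold out 10% of the stations and interpolate from the remaining 90%.
idx = rng.permutation(n_stations)
test, train = idx[: n_stations // 10], idx[n_stations // 10:]
predicted = griddata(
    np.column_stack([lon[train], lat[train]]),          # known station locations
    pm25[train],                                        # known station values
    np.column_stack([lon[test], lat[test]]),            # held-out locations to predict
    method="linear",
)

ok = ~np.isnan(predicted)                               # points outside the convex hull get NaN
r = np.corrcoef(predicted[ok], pm25[test][ok])[0, 1]
print(f"hold-out correlation: {r:.2f}")
```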
[ 5, 4, 4, -1, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, -1 ]
[ "iclr_2018_SkymMAxAb", "iclr_2018_SkymMAxAb", "iclr_2018_SkymMAxAb", "iclr_2018_SkymMAxAb", "S1AztKDlf", "SJl24wuxM", "SJ7ACl5xM" ]
iclr_2018_r1nzLmWAb
Video Action Segmentation with Hybrid Temporal Networks
Action segmentation, as a milestone towards building automatic systems to understand untrimmed videos, has received considerable attention in recent years. It is typically modeled as a sequence labeling problem but has intrinsic and substantial differences from text parsing or speech processing. In this paper, we introduce a novel hybrid temporal convolutional and recurrent network (TricorNet), which has an encoder-decoder architecture: the encoder consists of a hierarchy of temporal convolutional kernels that capture the local motion changes of different actions; the decoder is a hierarchy of recurrent neural networks that are able to learn and memorize long-term action dependencies after the encoding stage. Our model is simple but extremely effective in terms of video sequence labeling. The experimental results on three public action segmentation datasets have shown that the proposed model achieves superior performance over the state of the art.
rejected-papers
All reviewers believed that the novelty of the contribution was limited.
train
[ "B1tWwoKxz", "H1lTTDulG", "HJ5VPLYxG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper discusses the problem of action segmentation in long videos, up to 10 minutes long. The basic idea is to use a temporal convolutional encoder-decoder architecture, where 1-D temporal convolutions are used in the encoder. In the decoder three variants are studied:\n\n(1) One that uses only several bidirectional LSTMs, one after the other.\n(2) One that first applies successive layers of deconvolutions to produce per-frame feature maps, and then a bidirectional LSTM in the last layer.\n(3) One that first applies a bidirectional LSTM, then successively applies 1-D deconvolution layers.\n\nAll variants end with a \"temporal softmax\" layer, which outputs a class prediction per frame.\n\nOverall, the paper is of rather limited novelty, as it is very similar to the work of Lea et al., 2017, where now the decoder part also has the deconvolutions smoothed by (bidirectional) LSTMs. It is not clear what the main novelty is compared to the aforementioned paper, other than temporal smoothing of features at the decoder stage.\n\nAlthough one of the proposed architectures (TricorNet) produces some modest improvements, it is not clear why the particular architectures are a good fit. Surely, deconvolutions and LSTMs can help incorporate some longer-term temporal elements into the final representations. However, to begin with, aren't the 1-D deconvolutions and the LSTMs (assuming they are computed dimension-wise) serving the same purpose and therefore overlapping? Why are both needed?\n\nSecond, what makes the particular architectures in Figure 3 the most reasonable choice for encoding long-term dependencies; is there a fundamental reason? What is the difference of the L_mid from the 1-D deconv layers afterward? Currently, the three variants are motivated in terms of what the Bi-LSTM can encode (high or low level details). \n\nThird, the qualitative analysis can be improved. For instance, the experiment with the \"cut lettuce\" vs \"peel cucumber\" is not persuasive enough. Indeed, longer temporal relationships can save incorrect future predictions. However, this works both ways, meaning that wrong past predictions can persist because of the long-term modelling. Is there a mechanism in the proposed approach to account for that fact?\n\nAll in all, I believe the paper indeed improves over existing baselines. However, the novelty is insufficient for a publication at this stage.", "The paper proposed a combination of temporal convolutional and recurrent networks for video action segmentation. Overall this paper is well written and easy to follow.\n\nThe novelty of this paper is very limited. It just replaces the decoder of ED-TCN (Lea et al. 2017) with a bi-directional LSTM. The idea of applying a bi-directional LSTM is also not new for video action segmentation. In fact, ED-TCN used it as one of the baselines. The results also do not show much improvement over ED-TCN, which is much easier and faster to train (as it is a fully convolutional model) than the proposed model. Another concern is the number-of-layers parameter 'K'. The authors should show an analysis of how the performance varies for different values of 'K', which I believe is necessary to judge the generalization of the proposed model. I also suggest including an analysis of an entirely convolutional model (where the decoder has 1-D deconvolution) in order to get a clear picture of the improvement in performance due to the bi-directional LSTM. 
Overall, I believe the novelty, contribution and impact of this work is sub-par to what is expected for publication in ICLR. ", "I will be upfront: I have already reviewed this paper when it was submitted to NIPS 2017, so this review is based heavily on the NIPS submission. \n\nI am quite concerned that this paper has been resubmitted as it is, word by word, character by character. The authors could have benefited from the feedback they obtained from the reviewers of their last submissions to improved their paper, but nothing has been done. Even very easy remarks, like bolding errors (see below) have been kept in the paper.\n\nThe proposed paper describes a method for video action segmentation, a task where the video must be temporally densely labeled by assigned an action (sub) class to each frame. The method proceeds by extracting frame level features using convolutional networks and then passing a temporal encoder-decoder in 1D over the video, using fully supervised training.\n\nOn the positive side, the method has been tested on 3 different datasets, outperforming the baselines (recent methods from 2016) on 2 of them.\n\nMy biggest concern with the paper is novelty. A significant part of the paper is based on reference [Lea et al. 2017], the differences being quite incremental. The frame-level features are the same as in [Lea et al. 2017], and the basic encoder-decoder strategy is also taken from [Lea et al. 2017]. The encoder is also the same. Even details are reproduced, as the choice of normalized Relu activations.\n\nThe main difference seems to me that the decoder is not convolutional, but a recurrent network.\n\nThe encoder-decoder architecture seems to be surprisingly shallow, with only K=2 layers at each side.\n\nThe paper is well written and can be easily understood. However, a quite large amount of space is wasted on obvious and known content, as for example the basic equation for a convolutional layer (equation (1)) and the following half page of text and equations of LSTM and Bi-directional LSTM networks. This is very well known and the space can be used for more details on the paper's contributions.\n\nWhile the paper is generally well written, there are a couple of exceptions in the form of ambiguous sentences, for example the lines before section 3.\n\nThere is a bolding error in table 2, where the proposed method is not state of the art (as indicated) w.r.t. to the accuracy metric.\n\nTo sum it up, the positive aspect of nicely executed experiments is contrasted by low novelty of the method. To be honest, I am not totally sure whether the contribution of the paper should be considered as a new method or as architectural optimizations of an existing one. This is corroborated by the experimental results on the first two datasets (tables 2 and 3): on 50 salads, where ref. [Lea et al. 2017]. seems currently to obtain state of the art performance, the improvement obtained by the proposed method allows it to get state of the art performance. On GTEA, where [Lea et al. 2017] does not currently deliver state of the art performance, the proposed method performs (slightly) better than [Lea et al. 2017] but does not obtain state of the art performance.\n\nOn the third dataset, JIGSAWS, reference [Lea et al. 2017]. has not been tested, which is peculiar given the closeness.\n" ]
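For concreteness, here is a minimal PyTorch sketch of the kind of hybrid these reviews describe: a temporal-convolutional encoder followed by a bidirectional-LSTM decoder and a per-frame classifier. The kernel sizes, channel widths, upsampling method and layer counts are illustrative assumptions, not the actual configuration of TricorNet or ED-TCN.

```python
import torch
import torch.nn as nn

class HybridTemporalNet(nn.Module):
    """Temporal-convolutional encoder + bidirectional-LSTM decoder, ending in
    per-frame class logits (a 'temporal softmax' is obtained by applying
    softmax over the last dimension)."""

    def __init__(self, in_dim=64, n_classes=11, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(                   # (N, C, T) -> (N, hidden, T/4)
            nn.Conv1d(in_dim, hidden, kernel_size=25, padding=12), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(hidden, hidden, kernel_size=25, padding=12), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.decoder = nn.LSTM(hidden, hidden, num_layers=2,
                               batch_first=True, bidirectional=True)
        self.classify = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                               # x: (N, T, in_dim) frame features
        T = x.shape[1]
        z = self.encoder(x.transpose(1, 2))             # downsampled temporal encoding
        z = nn.functional.interpolate(z, size=T)        # back to full temporal resolution
        h, _ = self.decoder(z.transpose(1, 2))          # (N, T, 2 * hidden)
        return self.classify(h)                         # per-frame class logits

net = HybridTemporalNet()
logits = net(torch.randn(2, 400, 64))                   # 2 videos, 400 frames, 64-d features
print(logits.shape)                                     # torch.Size([2, 400, 11])
```

Replacing the LSTM decoder with 1-D deconvolution layers would recover an ED-TCN-style fully convolutional variant, which is essentially the ablation the second review asks for.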
[ 3, 4, 3 ]
[ 5, 4, 5 ]
[ "iclr_2018_r1nzLmWAb", "iclr_2018_r1nzLmWAb", "iclr_2018_r1nzLmWAb" ]
iclr_2018_ByZmGjkA-
Understanding Grounded Language Learning Agents
Neural network-based systems can now learn to locate the referents of words and phrases in images, answer questions about visual scenes, and even execute symbolic instructions as first-person actors in partially-observable worlds. To achieve this so-called grounded language learning, models must overcome certain well-studied learning challenges that are also fundamental to infants learning their first words. While it is notable that models with no meaningful prior knowledge overcome these learning obstacles, AI researchers and practitioners currently lack a clear understanding of exactly how they do so. Here we address this question as a way of achieving a clearer general understanding of grounded language learning, both to inform future research and to improve confidence in model predictions. For maximum control and generality, we focus on a simple neural network-based language learning agent trained via policy-gradient methods to interpret synthetic linguistic instructions in a simulated 3D world. We apply experimental paradigms from developmental psychology to this agent, exploring the conditions under which established human biases and learning effects emerge. We further propose a novel way to visualise and analyse semantic representation in grounded language learning agents that yields a plausible computational account of the observed effects.
rejected-papers
This paper resulted in significant discussion -- both between R2 and the authors, and between the AC, PCs, and other solicited experts. The problem of language grounding (and instruction following) in virtual environments is clearly important, this work was one of the first in the recent resurgence, and the goal of understanding what the agents have learned is clearly noble and important. In terms of raw recommendations, the majority reviewer recommendation is negative, but since concerns raised by R2 seemed subjective (which in principle is not a problem), out of an abundance of caution, we solicited additional input. Unfortunately, we received feedback consistent with the concerns raised here: -- The lack of generality of the behavior found. Even if we ignore the difficult question of why the agent prefers what it does, it's unclear how the conclusions here generalize much farther than the model and environment used; the manuscript does not provide any novel or transferable principles of the form "this kind of bias in the environment leads to this kind of bias in models with these properties". -- We realize even providing that concrete a statement might be hard, but also missing are thorough comparisons to other kinds of models (e.g. non-deep, as asked by R1) to establish that this is a general phenomenon. Ultimately, there is a sense that this is too narrow an analysis, too soon. If there were one architecture for learning embodied agents in 3d environments that was clearly successful and useful, then studying its properties might be interesting (even crucial). But the dust in this space isn't settled. Our current agents are fairly poor, and so the impact of understanding the biases of a specific model trained in a specific environment seems fairly low. Finally -- this was not taken into consideration in making the decision -- it is not okay to list personal homepage domains (that may reveal author identity to ACs) as conflict domains; those are meant for institutional conflicts/domains.
train
[ "SyTAvN8yf", "r1-nPRulz", "HJULpw9gz", "Sknip6OmM", "Hy5uZa8Xz", "HJzGTcLXf", "S1bLjyVmM", "BJ2ysCXQf", "rJig5077M" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author" ]
[ "This paper presents an analysis of an agent trained to follow linguistic commands in a 3D environment. The behaviour of the agent is analyzed by means of a set of \"psycholinguistic\" experiments probing what it learned, and by inspection of its visual component through an attentional mechanism.\n\nOn the positive side, it is nice to read a paper that focuses on understanding what an agent is learning. On the negative side, I did not get many new insights from the analyses presented in the study.\n\n3 A situated language learning agent\n\nI can't make up the chair from the refrigerator in the figure.\n\n4.1 Word learning biases\n\nThis experiment shows that, when an agent is trained on shapes only, it will exhibit a shape bias when tested on new shapes and colors. Conversely, when it is exposed to colors only, it will have a color bias. When the training set is balanced, the agent shows a mild bias for the simpler color property. How is this interesting or surprising? The crucial question, here, would be whether, when an agent is trained in a naturalistic environment (i.e., where distributions of colors, shapes and other properties reflect those encountered by biological agents), it would show a human-like shape bias. This, however, is not addressed in the paper.\n\nMinor comments about this section:\n\n- Was there noise also in shape generation, or were all object instances identical?\n\n- propensity to select o_2: rather o_1?\n\n- I did not follow the paragraph starting with \"This effect provides\".\n\n4.2 The problem of learning negation\n\nI found this experiment very interesting.\n\nPerhaps, the authors could be more explicit about the usage of negation here. The meaning of commands containing negation are, I think, conjunctions of the form \"pick something and do not pick X\" (as opposed to the more natural \"do not pick X\").\n\nmodifiation: modification\n\n4.3 Curriculum learning\n\nPerhaps the difference in curriculum effectiveness in language modeling vs grounded language learning simulations is due to the fact that the former operates on large amounts of natural data, where it's hard to define the curriculum, while the latter are typically grounded in toy worlds with a controlled language, where it's easier to construct the curriculum.\n\n4.4 Processing and representation differences\n\nThere is virtually no discussion of what makes the naturalistic setup naturalistic, and thus it's not clear which conclusions we should derive from the corresponding experiments. Also, I don't see what we should learn from Figure 5 (besides the fact that in the controlled condition shapes are easier than categories). For the naturalistic condition, the current figure is misleading, since different classes contain different numbers of instances. It would be better to report proportions.\n\nConcerning the attention analysis, it seems to me that all it's saying is that lower layers of a CNN detect lower-level properties such as colors, higher layers detect more complex properties, such as shapes characterizing objects. What is novel here?\n\nAlso, since introducing attention changes the architecture, shouldn't the paper report the learning behaviour of the attention-augmented network?\n\nThe explanation of the attention mechanism is dense, and perhaps could be aided by a diagram (in the supplementary materials?). I think the description uses \"length\" when \"dimensional(ity)\" is meant.\n\n6. 
Supplementary material\n\nIt would be good to have an explicit description of the architecture, including number of layers of the various components, structure of the CNN, non-linearities, dimensionality of the layers, etc. (some of this information is inconsistently provided in the paper).\n\nIt's interesting that the encoder is actually a BOW model. This should be discussed in the paper, as it raises concerns about the linguistic interest of the controlled language that was used.\n\nTable 3: indicates is: indicates if\n", "In this manuscript, the authors connect psychological experimental methods to understand how the black box of the mind solves problems with current issues in understanding how the black box of deep learning methods solves problems. The authors used situated versions of human language learning tasks as simulation environments to test a CNN + LSTM deep learning network. They examined a few key phenomena: shape/color bias, learning negation concepts, incremental learning, and how learning affects the representation of objects via attention-like processes. They illustrated conditions in which their deep learning network acts similarly to people in simulations.\nDeveloping methods that enable humans to understand how deep learning models solve problems is an important problem for many reasons (e.g., usability of models for science, ethical concerns) that has captured the interest of a wide range of researchers. By adapting experimental methodology from psychology to test that have been used to understand and explain the internal workings of the mind, the authors approach the problem in a novel and innovative manner. I was impressed by the range of phenomena they tackled and their analyses were informative in understanding the behavior of deep learning models \nI found the analogy persuasive in theory, but I was not convinced that the current manuscript really demonstrates its value. In particular, I did not see the value of situating their model in a grounded environment. One analysis that would have helped convince me is a comparison to an equivalent non-grounded deep learning model (e.g., a CNN trained to make equivalent classifications), and show how this would not help us understand human behavior. However, the more I thought about the logic of this type of analysis, the more concerned I became about the logic of their approach. \nWhat would it mean if the equivalent non-situated model does not show the phenomena? If it does not, it could illustrate the efficacy of using situated environments. But, it also could mean that their technique acts differently for equivalent situated and non-situated models. In this case though, what would we learn about the more general non-situated case then? It does not seem like we would learn much, which would defeat the purpose of the technique. Alternatively, if the equivalent non-situated model does show the phenomena, then using the situated version would not be useful because the model acts equivalently in both cases. I am not fully convinced by the argument I just sketched, but leaves me very concerned about the usefulness of their approach. (note that the “controlled” vs. “naturalistic” analyses in the word learning section did not convince me. This argues for the importance of using naturalistic statistics – not necessarily cross-modal, situated environments as the authors argue for).\nAdditionally, I was unconvinced that simpler models could not be used to examine the phenomena that they analyzed. 
Although combining LSTM with CNN via a “mixing” module was interesting, it added another layer of complexity that made it more difficult to assess what the results meant. This left me less convinced of the usefulness of their paradigm. If we need to create a novel deep learning method to illustrate its efficacy, how will it be useful for solving the problem that motivated everything: understanding how pre-existing deep learning methods solve problems. \n", "This paper presents an analysis of the properties of agents who learn grounded language through reinforcement learning in a simple environment that combines verbal instruction with visual information. The analyses are motivated by results from cognitive and developmental psychology, exploring questions such as whether agents develop biases for shape/color, the difficulty of learning negation, the impact of curriculum format, and how representations at different levels of abstraction are acquired. I think this is a nice example of a detailed analysis of the representations acquired by a reinforcement learning agent. The extent to which it provides us with insight into human cognition depends on the degree to which we believe the structure of the agent and the task have a correspondence to the human case, which is ultimately probably quite limited. Nonetheless the paper takes on an ambitious goal of relating questions in machine learning in cognitive science and does a reasonably good job of analyzing the results.\n\nComments:\n\n1. The results on word learning biases are not particularly surprising given previous work in this area, much of which has used similar neural network models. Linda Smith and Eliana Colunga have published a series of papers that explore these questions in detail:\n\nhttp://www.iub.edu/~cogdev/labwork/kinds.pdf\nhttp://www.iub.edu/~cogdev/labwork/Ontology2003.pdf\n\n2. In figure 2 and the associated analyses, why were 20 shape terms used rather than 8 to parallel the other cases? It seems like there is a strong basic color bias. This seems like one of the most novel findings in the paper and is worth highlighting.\n\nThis figure and the corresponding analysis could be made more systematic by mapping out the degree of shape versus color bias as a function of the number of shape and color terms in a 2D plot. The resulting plot would show the degree of bias towards color.\n\n3. The section on curriculum learning does not mention relevant work on “starting small” and the “less is more\" hypothesis in language development by Jeff Elman and Elissa Newport:\n\nhttps://pdfs.semanticscholar.org/371b/240bebcaa68921aa87db4cd3a5d4e2a3a36b.pdf\nhttp://www.sciencedirect.com/science/article/pii/0388000188900101\n\n4. The section on learning speeds could include more information on the actual patterns that are found with human learners, for example the color words are typically acquired later. I found these human results hard to reconcile with the results from the models. I also found it hard to understand why colors were hard to learn given the bias towards colors shown earlier in the paper.\n\n5. The section on layerwise attention claims to give a “computational level” explanation, but this is a misleading term to use — it is not a computational level explanation in the sense introduced by David Marr which is the standard use of this term in cognitive science. 
The explanation of layerwise attention could be clearer.\n\nMinor:\n\n“analagous” -> “analogous”\n\nThe paper runs longer than eight pages, and it is not obvious that the extra space is warranted.\n", "Thanks for your further response. To clarify once more, I find your research direction very exciting. Also, I am sorry if I implied that your work is \"half-baked\": that was not my intention. I can see that it's very thorough. It just seems to me that, with the exception of the negation study, it is not yet providing novel results of some generality, or addressing *why* questions in a way that would be helpful to the community.\n\nElman's classic paper provided an insight that was completely novel at the time about how the order of presentation of input data affects RNN learning of natural linguistic structures, and a thorough qualitative analysis of the networks' learning behaviour. Bengio et al. introduced the idea of curriculum learning to the ML community. I was not familiar with the work of Ritter et al. before reading your paper, but, based on a cursory look at it, it seems to me it presents a comparison between learning concepts from a controlled data-set and in the wild. I am sorry to be stubborn, but I found these 3 studies more instructive than yours, in its current version.\n\nI realize, of course, that you can't cover everything in a conference paper. Indeed, it would be great to see separate papers on the various topics you are presenting, with more detailed analyses of each.\n", "Thanks for your response. It made your position clearer to us, which was really helpful, and we of course appreciate and respect your views. However, we want to make clear that we entirely disagree with them, particularly your suggestion that the work is somewhat half-baked and your assessment of the amount of insight and contributions in the paper. \n\nTo re-iterate, the take-home lessons of the paper are:\n\n1. architectures that combine convnet vision with (language-like) sequences of words, and use the aquired semantic representations to solve tasks, naturally promote a bias to extend ambiguous new labels according to colour, not shape.\n2. when such architectures are trained on a lot of shape words, they can exhibit the opposite (human-like) bias.\n3. although we are not sure exactly why these effects emerge, we provide some insight by demonstrating that representation (and decisions about) colour words involves focusing on information that is most easily extracted at the lower-level of visual processing, whereas processing shape words requires focus on the higher-levels of visual processing \n4. when learning to execute instructions involving a form of negation, such architectures typically learn to generalise in an 'inefficient' non-generalisable way unless the training experience is sufficiently broad, in which case this limitation can be resolved to some degree. \n5. when learning multiple words, such architectures learn much faster if their experience is restricted initially to some of the words, and only broadened once the initial words are mastered\n\nYou may consider 1-5 to be insufficient for a conference publication in this field. We believe this view is at odds with the following facts (among others):\n\n- A study that focused on (1-2) - demonstrating a shape-bias in visual-classification architectures when trained on imagenet images - was published at last year at ICML (https://arxiv.org/pdf/1706.08606.pdf). 
\n- This paper, raised by Reviewer 1, was (https://pdfs.semanticscholar.org/371b/240bebcaa68921aa87db4cd3a5d4e2a3a36b.pdf) was published in the journal Cognitive Science and observed an effect similar to that of (4), but when learning from synthetic symbolic (sequential) data.\n- The well-cited and influential paper published at ICML (https://qmro.qmul.ac.uk/xmlui/bitstream/handle/123456789/15972/Bengio%2C%202009%20Curriculum%20Learning.pdf?sequence=1) focused solely on demonstrating that neural nets (learning from synthetic symbolic or pixel data or natural language text) tend to learn faster if the inputs are ordered according to difficulty in some way (5).\n\nThe above papers each highlighted what have now become (or are becoming) commonly-accepted effects of learning in neural networks. However, each paper focused only on a subset of the phenomena we study in this work. Further, none of the above papers was able to explain *why* effects emerge, beyond offering some rudimentary analyses and intuition. Nonetheless, we think they are all excellent contributions to the understanding of neural networks. We don't want to suggest that our work is superior to these, only that the scope of insight and novel findings covered by our paper seems to be at least on-par with these works, each of which was deemed publishable by the community.\n\n We agree that studying the *why* question around each of these effects is very important as a follow-up, but it is an extremely difficult goal, and an objective that will probably be reached very gradually by the whole community working together (building on contributions like ours). To require such a fundamental breakthrough for a conference publication (rather than clear evidence that we have moved knowledge forward, like we provide) is, in our opinion, very harsh. Moreover, even if we just did a few more experiments in the *why* direction for one of 1-5, something would need to be omitted, since (as Reviewer 1 has raised) we are already at the page limit. We feel that would detract from the paper in other ways. This is also the case for the follow-up experiments that you suggest for the attention analysis. We agree that these would be really interesting, and we are glad that you can now see the potential of using layerwise attention to understand linguistic representations. In conclusion, we believe that the current scope and content of the paper includes more than enough new insights and takeaways for a self-contained contribution on understanding grounded language learning with neural networks.", "I thank the authors for their thorough and thoughtful response, and for incorporating my feedback in the paper revision.\n\nI still don't grasp the main point of the experiment on learning biases. I agree that it makes sense to study learning in an artificial, controlled setup. But, when this setup involves feeding the agent more color or more shape terms, the result that the model will show a color/shape bias seems trivial to me. This leaves, as the only interesting find, the condition in which the model is getting the same amount of color and shape examples, and it displays a (mild) colour bias. This might be surprising, but then the authors should provide some insight, possibly by means of follow-up experiments, on *why* such bias should emerge. Your tightly controlled setup and relatively simple architecture should allow you to gain a fuller understanding of such biases. 
If not, then I don't see the point of using an artificial simulation.\n\nConcerning the attention analysis, I was indeed thinking of research about what convnets learn at different layers, and it is true that you went beyond that by looking at which components are activated during word learning. However, you illustrate your mechanism when it is applied to very basic color and shape words, that are not that different from attribute labels of the sort routinely used in computer vision. As an example of something I'd find exciting, it would be cool if you could show that, say, the agent pays more attention to the color-encoding layers for fruits (for which colour is often a distinctive property) than for tools (for which it typically isn't).\n\nLet me re-iterate that I really like the emphasis of the paper on analyzing the behaviour of a learning agent, rather than on quantitative performance. However, except for the negation study, I did not learn much from the paper. I would be hard-pressed to tell what the general take-home lessons (either for AI or cognitive science) are. To conclude, I think that this is a very promising research line, that is however not yet ripe for publication in a major conference.\n", "Thank you. We have responded to the major points, minor corrections are fixed in the text.\n\nConcern: I did not get many new insights from the analyses presented in the study.\n[shape/colour bias] How is this interesting or surprising? \n\nResponse: We obtained many new insights when doing the research. If you cite where you learned about the effects we report we will cite that work and position our work relative to it. \n\nWe find the bias results both interesting and surprising given previous work. Note we are not just saying 'the trained model behaves differently depending on the training data'. We claim that convnet+lang embedding models exhibit a strong colour bias when extending ambiguous labels (assuming unbiased training data). We have made this conclusion more explicit in the paper. It surprises us because (a) it opposes an established effect in humans and (b) Ritter et al. showed similar models trained on imagenet photos exhibit the opposite bias. Unlike them, we isolate the cause of the bias (the architecture), by controlling for bias in the training data. This is relevant to current research because most approaches to grounded language learning involve a related architecture. \n\n\nConcern: The crucial question ... when an agent is trained in a naturalistic environment..., it would show a human-like shape bias. \n\nResponse: This question is not crucial to our goal, which is to understand artificial models, not humans. Any researcher would love to experiment with agents in a perceptual environment that reflects a 'typical' human-machine interaction. However, as far as we know (please say if you disagree), nobody has done this, and it would be very challenging. Where would the model be? Would it learn from expert humans, novice humans, other agents? Rather than try to estimate these unknowns, we studied differences when changing unambiguous factors in the environment (e.g. shape /colour words ratio, equal-sized vs. variable-sized word classes). We make conclusions about how the model's learning is affected by these clear differences in experience, which will be applicable once such models can interact with 'real' users in the real world. \n\nConcern: Concerning the attention analysis..... What is novel here?\n\nResponse: You don't cite why this is not novel. 
Are you are thinking of research into the feature-maps learned at different layers of convnet image classifiers? The novelty over that - we propose a method to visualise and quantify the interaction between language and vision as word meanings are combined and composed (and as a trained agent explores and acts in the world). Using this method, we can see what visual information is most pertinent to the meaning of any linguistic stimuli, including novel phrases not seen during training. This is certainly different from conclusions about how visual classifiers. The fact that our findings (about a network that can learn to combine language and vision at different levels of abstraction in order to solve tasks) are consistent with those findings (about a network trained to classify images by predicting a label) does not render either result redundant.\n\nConcern: There is no discussion of what makes the naturalistic setup naturalistic\n\nResponse: Naturalistic - not all word classes have the same number of members like e.g. the class of prepositions has many fewer members than the class of nouns. We have changed the term as it was misleading.\n\nConcern: The encoder is actually a BOW model...raises concerns about the linguistic interest of the controlled language that was used.\n\nResponse: Good point. In most experiments the input is a single word, so BOW and LSTM are equivalent.. The exception is negation experiments, which we have repeated with both BOW and LSTM encoding, and report the results in Figure 3. The effect remains.\n\nConcern: Perhaps difference in curriculum effectiveness.....\nResponse: We don't claim that curriculum learning never works for text-based learning, only that it was easy to observe strong curriculum effects in the context of our simulated world and situated agent. We have changed the text to make this more precise. \n\nConcern: shouldn't the paper report the learning behaviour of the attention-augmented network?\n\nResponse: We did not notice learning differences (for instance in sample-complexity), but layerwise attention needs additional computation which makes clock time slower. We did not experiment with this attention more generally since it would have made the findings less general. We now explain this reasoning.\n\nQuestion: Was there noise also in shape generation, or were all object instances identical?\nAll objects were identical in size but rotate to give variation in perspective. We have added this detail.\n\nOther requested improvements:\nRe-worded the paragraph beginning \"This effect provides\" \n\nMore explicit about the nature of the negation command\n\nAdded full details of the model in supplementary material. \n", "Thanks for your review, we appreciate your effort in considering the paper. You raised some very interesting concerns and questions that we have thought about a lot during the course of conducting this research. If we understand correctly, your greatest worry is with our use of a simulated learning environment, and the feeling that our results may not generalise to models that learn from other types of data.\n\nOur response:\n\nTo address the goals of the paper (understanding how randomly-initialised neural nets can learn 'language' from raw, unstructured visual and linguistic input), we needed to decide on stimuli for training and testing. When selecting stimuli for understanding learning machines, there is necessarily a trade-off between the realism and control. 
Most studies on human learning (in neuroscience, psychology etc) use a small set of controlled stimuli (for instance, photographs or pictures of objects on a table). These stimuli have much less variation in lighting, angles, colours etc. that the real world, but this lack of variation makes it easier to understand the connection between the factors that do vary in the stimuli and the behaviour of the learner. Such control makes more precise measurement, comparison, replication and, in many instances, conclusion possible. When experimenting with artificial learning machines, there is similarly advantages and disadvantages to experimenting with more controlled vs more realistic stimuli. If we had chosen to experiment with sets of photographs (such as those in the ImageNet challenge), each individual data point would have constituted a more realistic test of visual processing, but we would also have introduced a host of confounds and challenges are absent in the highly-controlled, synthetic world. For instance, it would not have been possible to change the colour of an object while keeping its shape the same, or to study curriculum effects or negation independently of the specifics of the particular images involved. \n\nWe believe strongly that a complete understanding of the dynamics of learning and representation in neural nets will require both studies on noisy, naturalistic and work with controlled, synthetic stimuli. Indeed, to date, many of the most important exploratory research with neural networks was based on synthetic data. This ranges from the earliest experiments with shallow perceptrons and XOR], through Hinton's famous induction of relational structures from family trees (https://www.cs.toronto.edu/~hinton/absps/families.pdf) to very recent criticisms of the learning power of recurrent nets (https://arxiv.org/abs/1711.00350). These studies are useful because the synthetic data exaggerates and emphasises the learning problems that a model would face in the real world, putting the focus on to the important challenges. In this regard, we feel that the simulated environment that we have developed provides a state-of-the-art balance between realism and control for studying grounded language learning in artificial agents (given current technology).\n\nYour concern:\n\nThere are a couple of other misunderstandings about the paper that we wanted to clarify. You said: \n\nAlthough combining LSTM with CNN via a “mixing” module was interesting, it added another layer of complexity that made it more difficult to assess what the results meant....If we need to create a novel deep learning method to illustrate its efficacy, how will it be useful for solving the problem that motivated everything..\n\nOur response:\n\nThis is not correct. The paper states: \n\nA mixing module M determines how these signals are combined before they are passed to a LSTM action module A: here M is simply a feedforward linear layer operating on the concatenation of the output from V and L. \n\nWe combine visual and language information through concatenation, which is the simplest (and most general) way of combining this information. Indeed, the guiding motivation behind the model, given our purpose, was that it is as simple as a model can be given our ultimate objective of combining language, motion and vision. 
\n\nYour concern:\n\nOne analysis that would have helped convince me is a comparison to an equivalent non-grounded deep learning model (e.g., a CNN trained to make equivalent classifications), and show how this would not help us understand human behavior.\n\nOur response:\n\nWe are a bit worried here that you may have misunderstood the point of this paper. The objective of the work is to understand artificial models - it is not to understand human behaviour. If there was anywhere in the paper that gave the impression otherwise, please let us know and we will correct it immediately!\n\nThanks again for your time, we value your criticism. We'd really appreciate it if you could give the paper another reading, and reconsider your judgement, having considered our responses. \n", "Thank you for these very useful pointers to literature in human language learning. We have added citations to Smith and Colunga as well as to Elman and Newport. Overall, we tried to keep the discussion of human learning to a minimum as the objective was to better understand a class of (artificial) computational model. This is why we do not in general compare the observations made about the network to effects identified in humans. However, our experiments were certainly inspired by this long history of principled experimentation on humans so we want to credit relevant work, while highlighting a relatively new (and potentially vast) application of the same experimental techniques. Please say if we have missed any other studies in human learning that we should mention. \n\nAs you suggest, we have replaced the phrase 'computational level' to avoid confusion. Thank you for your review and useful suggestions. If you think the paper is worthy of acceptance, please also liaise with the other reviewers and consider our comments to their criticisms to try to help them see the merits of this research." ]
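The architecture referred to in these exchanges -- a convolutional vision module V, a language module L, a linear mixing module M over the concatenation of their outputs, and an LSTM action module A -- is simple enough to sketch. The snippet below is only a rough reconstruction from the descriptions above; the layer sizes, the CNN shape, the bag-of-words language encoder, and the discrete action head are assumptions rather than the authors' settings.

```python
import torch
import torch.nn as nn

class SituatedAgent(nn.Module):
    """Rough reconstruction of the agent described above: a convolutional
    vision module V, a language module L (here a bag-of-words embedding),
    a linear mixing module M over the concatenated outputs, and an LSTM
    action module A. Sizes and the action space are assumptions."""

    def __init__(self, vocab_size, num_actions,
                 emb_dim=128, vis_dim=128, hid_dim=256):
        super().__init__()
        self.vision = nn.Sequential(                            # module V
            nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, vis_dim),
        )
        self.language = nn.EmbeddingBag(vocab_size, emb_dim)    # module L (BOW)
        self.mix = nn.Linear(vis_dim + emb_dim, hid_dim)        # module M
        self.action = nn.LSTM(hid_dim, hid_dim, batch_first=True)  # module A
        self.policy = nn.Linear(hid_dim, num_actions)

    def forward(self, frame, word_ids, state=None):
        v = self.vision(frame)                                  # (batch, vis_dim)
        l = self.language(word_ids)                             # (batch, emb_dim)
        m = torch.relu(self.mix(torch.cat([v, l], dim=-1)))     # fused representation
        out, state = self.action(m.unsqueeze(1), state)         # one step of the action LSTM
        return self.policy(out.squeeze(1)), state               # action logits, recurrent state
```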
[ 4, 5, 7, -1, -1, -1, -1, -1, -1 ]
[ 5, 3, 4, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_ByZmGjkA-", "iclr_2018_ByZmGjkA-", "iclr_2018_ByZmGjkA-", "Hy5uZa8Xz", "HJzGTcLXf", "S1bLjyVmM", "SyTAvN8yf", "r1-nPRulz", "HJULpw9gz" ]
iclr_2018_SkBcLugC-
Fast and Accurate Inference with Adaptive Ensemble Prediction for Deep Networks
Ensembling multiple predictions is a widely used technique to improve the accuracy of various machine learning tasks. In image classification tasks, for example, averaging the predictions for multiple patches extracted from the input image significantly improves accuracy. Using multiple networks trained independently to make predictions improves accuracy further. One obvious drawback of the ensembling technique is its higher execution cost during inference: if we average 100 local predictions, the execution cost will be 100 times as high as the cost without the ensemble. This higher cost limits the real-world use of ensembling. In this paper, we first describe our insights on the relationship between the probability of the prediction and the effect of ensembling with current deep neural networks; ensembling does not help correct mispredictions for inputs that are already predicted with a high probability, i.e. a high output from the softmax. This finding motivates us to develop a new technique called adaptive ensemble prediction, which achieves the benefits of ensembling with much smaller additional execution costs. To this end, we calculate the confidence level of the prediction for each input from the probabilities of the local predictions during the ensembling computation. If the prediction for an input reaches a high enough probability on the basis of the confidence level, we stop ensembling for this input to avoid wasting computation power. We evaluated adaptive ensembling on various datasets and showed that it reduces the computation cost significantly while achieving accuracy similar to that of naive ensembling. We also showed that our statistically rigorous confidence-level-based termination condition reduces the burden of task-dependent parameter tuning compared to naive termination based on a pre-defined threshold, in addition to yielding better accuracy at the same cost.
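A minimal sketch of the stopping rule described in this abstract, assuming the per-class softmax outputs of the local predictions are collected as they are computed: a t-distribution confidence interval is built for the probability of the current top class, and ensembling stops once the interval's lower bound clears 1/2 (the simplified criterion the reviews below refer to as equation (3)). The function name, the default significance level, and the use of NumPy/SciPy are illustrative choices, not the authors' implementation.

```python
import numpy as np
from scipy import stats

def should_stop_ensembling(local_probs, alpha=0.1):
    """Illustrative check for a single input: stop adding ensemble members
    once the top class's probability is confidently above 1/2.

    local_probs : (n, num_classes) softmax outputs of the n local
                  predictions gathered so far (n >= 2).
    alpha       : significance level of the two-sided confidence interval.
    """
    local_probs = np.asarray(local_probs)
    n = local_probs.shape[0]
    top = local_probs.mean(axis=0).argmax()        # current top class
    samples = local_probs[:, top]                  # its probability under each local prediction
    half_width = stats.sem(samples) * stats.t.ppf(1.0 - alpha / 2.0, df=n - 1)
    return samples.mean() - half_width > 0.5       # CI lower bound clears 1/2
```

In use, one would evaluate ensemble members (patches or independently trained networks) one at a time, append each softmax output, and run this check after every new prediction; inputs that pass it skip the remaining members.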
rejected-papers
The manuscript proposes a simple technique for adaptive ensemble prediction. Unfortunately, several significant concerns were raised (by R2 and R3) that this AC agrees with. Both R2 and R3 asked fairly specific questions and requested follow-up experiments, which have not been addressed.
train
[ "H1-ki5Bef", "H1tMJeseM", "ry6zwoief", "ry3YAaQ4f", "S1Rnr37Ez", "HkW5mV67G" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author" ]
[ "In this paper it is described a method that can be used to speed up the prediction process of ensembles of classifiers that output probabilistic predictions. The method proposed is very simple and it is based on the observation that in the case that the individual predictors are very sure about the potential class label, ensembling many predictions is not particularly useful. It seems it is most useful when the individual classifier are most unsure, as measured by the output probabilities. The idea proposed by the authors is to compute an estimate of the probability that the class with the highest probability will not change after querying more predictors from the ensemble. This estimate is obtained by using a t-student distribution for the distribution of the average maximum probability.\n\nThe paper is generally well written with a few mistakes that can be easily corrected using any spell checking tool.\n\nThe experiments carried out by the authors are convincing. It seems that their proposed approach can speed up the predictions of the ensemble by an important factor. The benefits of using ensemble methods are also evident, since they always improve the performance of a single classifier.\n\nAs far as I know this work is original. However, it is true that several similar ensemble pruning techniques exist for multi-class problems in which one uses majority voting for computing the combined prediction of the ensemble. Therefore it is unclear what are the advantages of the proposed method with respect to those ones. This is, in my opinion, the weakest point of the paper.\n", "The authors propose and evaluate an adaptive ensembling threshold using estimated confidence intervals from the t-distribution, rather than a static confidence level threshold. They show it can provide significant improvements in accuracy at the same cost as a naive threshold.\n\nThis paper has a nice simple idea at its core, but I don't think it's fully developed. There's a few major conceptual issues going on:\n\n- The authors propose equation (3) as a stopping criterion because \"computing CIs for all labels is costly.\" I don't see how this is true in any sense. The CI computation is literally just averages of a few numbers, which should be way less than the massive matrix multiplies needed to *generate* those numbers in the neural network. Computing pair-wise comparisons naively in O(n^2) time could potentially blow up if the number of output labels is massive, but then you should still be able to keep some running statistics to avoid having to do a quadratic number of comparisons (e.g. the threshold is just the highest bound of any CI you encounter, so you keep track of both the max predicted confidence and max CI so far...then you have your answer in O(n) time.) I think the real issue is that the authors state that the confidence interval computation code is written in Python. That is a huge knock against this paper: When writing a paper about inference time, it's just due diligence to do the most basic inference time optimizations (such as implementing an operation which should be effectively free in a C++ plugin.) \n\n- So by using (3) instead of the original proposed CI comparison that motivated this approach, the authors require that the predicted probability be greater than 1/2 + the CI at the given alpha level. 
This means that for problems with very large output spaces, getting enough probability mass to get over that 1/2 absolute threshold is potentially going to require a minimum number of evaluations and put a cap on the efficiency gain. This is what we see in Figure 3: for the few points evaluated, when the output space is large (ILSVRC 2012) there is no effective difference between the proposed method and a static threshold of 70%, indicating that the CI of 90% is roughly working out to be the 50% minimum + ~20% threshold from the CI. \n\n- Thus the experiments in this paper don't really add much value in understanding the benefits of this approach as currently written. For due diligence, there should be the following:\n\n1. Show the distribution of computing thresholds from the CI. Then compute, for a CI of 0.8, 0.9, etc., what is the effective threshold on average? Then for every *average threshold* from the CI method, apply that as a static threshold. Then you will get exactly the delta of your method over the static threshold method.\n\n2. Do the same, but using the pairwise CI comparison method. \n\n3. The same again, but now show how effective this is as a function of the size of the output label space. E.g. add these numbers to Table 1 and Table 2 (for every \"our adaptive ensemble\", put the equivalent static threshold.)\n\n4. Implement the CI computation efficiently if you are going to report actual runtimes. Note that for a paper like this, I don't think the runtimes are as important as the # of evaluations in the ensemble, so this is less important.\n\n- With the above experiments I think this would be a good paper.", "Summary\n\nThe authors argue that ensemble prediction takes too much computation time and resource, especially in the case of deep neural networks. They then address the problem by proposing an adaptive prediction approach. The approach is based on the observation that it is most important for ensemble approaches to focus on the \"uncertain\" examples. The proposed approach thus conducts early-stopping prediction when the confidence (certainty) of the prediction is high enough, where the confidence is based on the confidence intervals of (multi-class) labels based on the student-t distribution. Experiments on vision datasets demonstrate that the proposed approach is effective in reducing computation resources while maintaining sufficient accuracy.\n\nComments\n\n* The experiments are limited in the scope of (image) multi-class classification. It is not clear whether the proposed approach is effective for other classification tasks, or even more sophisticated tasks like multi-label classification or sequence tagging.\n* The idea appears elegant but rather straightforward. One important baseline that is easy but not discussed is to set a static threshold on pairwise comparison (p_max - p_secondmax). Would this baseline be competitive with the proposed approach? Such a comparison is able to demonstrate the benefits of using confidence interval.\n* The overall improvement in computation time seems to be within a constant scale, which can be easily achieved by doing ensemble prediction in parallel (note that the proposed approach would require predicting sequentially). So are there real applications that can benefit from the improvement?\n* typo: p4, line19, neural \"netowkrs\" -> neural \"networks\"\n", "Thank you so much for the clarification. I understand what you intend in your review.", "I don't think that observation warrants a full paper. 
You have to do something useful with it to have impact. As a reviewer, it's very frustrating to hear that you want to leave development of your observation for future papers. It's not enough to just point out something interesting. Interesting ideas are cheap and easy to come by; making them useful for someone else in the community is hard.\n\nMy point in the review was that I don't think you've shown enough of this impact to warrant acceptance. For example, your experiments show that your approximation doesn't improve over a static threshold for the large output space example. My comment was that (1) I think this is most likely due to your approximation and (2) there is no reason to use an approximation in the first place because full pairwise shouldn't be costly if you implement it correctly. Please correct me if I'm wrong about #2, but even so, you need to show #1 to provide some context to your results. \n\nTo be clear, I never evaluated the claim that \"this is the best among all possible methods.\" I evaluated \"this paper explores the proposed idea (ensembling with dynamic thresholds) fully enough to be useful to the community on its own.\"\n\nSo, I apologize but my review stands -- I appreciate that you took the time to read and respond to the reviews, but given your response I still do not think this paper should be accepted.", "First of all, we like to thank the reviewers for their valuable comments.\n\nAs reviewers pointed out, there can be many different approaches for the termination condition (e.g. using second max, or using pairwise CI comparison). \nWe intend to claim that our CI-based termination is a reasonable approach with robustness and accuracy compared to the naive static threshold condition.\nHowever, I do not intend to claim that it is really the best approach among all possible termination conditions; testing wider range of the termination conditions to identify the best approach will be a future work.\n\nI believe that the most important contribution of this paper is findings in Section 2; the ensembling does not help samples with high probability and hence the probability can be used to adaptively control the ensemble if we use a reasonable threshold.\nTo emphasize this point, I touched up abstract and introduction (as well as fixing typos) and updated the submission.\n" ]
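The reviewer's remark that the full pairwise comparison need not be costly can be made concrete. The fragment below is a hypothetical O(num_classes) implementation of that suggestion, not code from the paper: it forms a per-class confidence interval from the local predictions and stops once the lower bound of the top class exceeds the largest upper bound among the remaining classes.

```python
import numpy as np
from scipy import stats

def pairwise_ci_stop(local_probs, alpha=0.1):
    """Stop when the top class's CI lower bound beats every other class's
    CI upper bound -- the 'full pairwise' test, in a single pass over classes.

    local_probs : (n, num_classes) softmax outputs of the n local
                  predictions gathered so far (n >= 2).
    """
    local_probs = np.asarray(local_probs)
    n = local_probs.shape[0]
    means = local_probs.mean(axis=0)
    half = stats.sem(local_probs, axis=0) * stats.t.ppf(1.0 - alpha / 2.0, df=n - 1)
    top = means.argmax()
    others_upper = np.delete(means + half, top).max()   # largest upper bound among the rest
    return means[top] - half[top] > others_upper
```

Unlike the 1/2-plus-CI shortcut, this test can fire even when no single class has accumulated half of the probability mass, which is the regime the review above worries about for large output spaces such as ILSVRC 2012.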
[ 6, 5, 5, -1, -1, -1 ]
[ 3, 4, 4, -1, -1, -1 ]
[ "iclr_2018_SkBcLugC-", "iclr_2018_SkBcLugC-", "iclr_2018_SkBcLugC-", "S1Rnr37Ez", "HkW5mV67G", "iclr_2018_SkBcLugC-" ]
iclr_2018_rJe7FW-Cb
A Painless Attention Mechanism for Convolutional Neural Networks
We propose a novel attention mechanism to enhance Convolutional Neural Networks for fine-grained recognition. The proposed mechanism reuses CNN feature activations to find the most informative parts of the image at different depths with the help of gating mechanisms and without part annotations. Thus, it can be used to augment any layer of a CNN to extract more discriminative low- and high-level local information. Differently from other approaches, the mechanism we propose needs only a single pass through the input and can be trained end-to-end through SGD. As a consequence, the proposed mechanism is modular, architecture-independent, easy to implement, and faster than iterative approaches. Experiments show that, when augmented with our approach, Wide Residual Networks systematically achieve superior performance on each of five different fine-grained recognition datasets: the Adience age and gender recognition benchmark, Caltech-UCSD Birds-200-2011, Stanford Dogs, Stanford Cars, and UEC Food-100, obtaining competitive and state-of-the-art scores.
rejected-papers
This paper received borderline reviews. Initially, all reviewers raised a number of concerns (clarity, small improvements, etc). Even after some back and forth discussion, concerns remain, and it's clear that while the idea has potential, another round of reviewing is needed before a decision can be reached. This would be a major revision in a journal. Unfortunately, that is not possible in a conference setting and we must recommend rejection. We recommend the authors to use the feedback to make the manuscript stronger and submit to a future venue.
train
[ "SJypm6QBf", "H1VpcYQSz", "ByogdHI4M", "BJZ7a4ING", "rkzOQxcgM", "Sky96rolf", "ry2OdYCeM", "H1H_vJ_QM", "HkNSDkdQG", "HkabPkOQM", "SyTmfkumG", "r1V-bJOXG", "B10s11dmz" ]
[ "author", "public", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author" ]
[ "Thank you for the interest in our paper. We have included this explanation in the “Related Work” section, as a specialized solution for multilabel classification, where instead of learning universal modules, a ResNet is modified to improve its multilabel classification by enhancing the predictions with the learned most relevant regions.\n\nDifferently from this ICLR, in [1], instead of designing a general mechanism like the one proposed in our submission, the authors design an specialized attention mechanism for multilabel classification and test it on MSCOCO, NUS-WIDE, and WIDER. Namely, they use the features in “res4b22 relu” in order to extract attention scores for each label through three convolutional layers. To avoid attending to labels not present for the input being processed, these attention maps are multiplied by “confidence maps”, which are learned to be 1 if the label is present, and 0 if not. The attentional predictions are average with the network predictions. Differently, we want to incorporate fine detail at different levels of abstraction to the final prediction, thus, we propose a general “Attention Module”, that can be applied at many levels to any network, to enhance the final prediction weighted by the relevance of each prediction (for instance, details in the texture might help do distinguish between two birds which are similar at abstract level).\n\nChanges will be visible in the final version of the paper.", "Just for completeness, a relevant paper that learns spatial regularizations using an attention mechanism on a final ResNet representation is [1]. \n\n[1] Zhu, F., Li, H., Ouyang, W., Yu, N., & Wang, X. Learning Spatial Regularization with Image-level Supervisions for Multi-label Image Classification. in CVPR 2017", "Thank you for the thorough review. We think the comments help to keep a high standards on this conference, and our paper has greately improved the quality thanks to them.\n\n>> Unfortunately, I do not think this rebuttal addresses my main complaint. I understand that the benchmarks include systems that use attentional mechanisms. My main issue is that the paper is about attention but different attentional mechanisms are never compared on a level play-field (i.e., using the same architectures, optimizers, etc etc). There is no way from the benchmarks to properly assess how much of the improvement is actually driven by the proposed attentional mechanism as opposed to anything else. \n\nWe understand the concern, this is exactly why we worked hard during the review period to find the time to include a comparison between STNs and the proposed attention mechanism under the same exact settings (same base architecture, learning algorithm, hyperparameters, training steps, etc.) showing that ours generalizes much better. Moreover, through all the manuscript, we emphasize that our approach is simpler and faster than other competing approaches.\n\n>> This is all the more problematic given how small the improvements are.\n\nIn the second point of the responses to AnonReviewer3 (3/3) (https://openreview.net/forum?id=rJe7FW-Cb&noteId=BJZ7a4ING&noteId=HkabPkOQM), we explain that the improvement is not so modest given the current context. In our case, improvement is comparable to that found in STN, for example.\n\n>> There is no way from the benchmarks to properly assess how much of the improvement is actually driven by the proposed attentional mechanism as opposed to anything else. 
\n\nWe think it is clear that plugging the proposed mechanism into a state-of-the-art CNN results in an improvement. In fact, in every table we show how the augmented models are always better.\n\n>> I would also add that with all that said the proposed mechanism remains relatively incremental with respect to related work (work properly cited) and that it seems to be better suited for a more specialized conference. \n\nWe still think our work helps indeed to build better representations, and it could be of inspiration for future work in any other field.", "Unfortunately, I do not think this rebuttal addresses my main complaint. I understand that the benchmarks include systems that use attentional mechanisms. My main issue is that the paper is about attention but different attentional mechanisms are never compared on a level play-field (i.e., using the same architectures, optimizers, etc etc). There is no way from the benchmarks to properly assess how much of the improvement is actually driven by the proposed attentional mechanism as opposed to anything else. This is all the more problematic given how small the improvements are. I would also add that with all that said the proposed mechanism remains relatively incremental with respect to related work (work properly cited) and that it seems to be better suited for a more specialized conference. ", "The manuscript describes a novel attentional mechanism applied to fine-grained recognition. \n\nOn the positive side, the approach seems to consistently improve the recognition accuracy of the baseline (a wide residual net). The approach is also consistently tested on the main fine-grained recognition datasets (the Adience age and gender recognition benchmark, Caltech-UCSD Birds-200-2011, Stanford Dogs, Stanford Cars, and UEC Food-100).\n\nOn the negative side, the paper could be better written and motivated.\n\nFirst, some claimed are made about how the proposed approach \"enhances most of the desirable properties from previous approaches” (see pp 1-2) but these claims are never backed up. More generally since the paper focuses on attention, other attentional approaches should be used as benchmarks beyond the WRN baseline. If the authors want to claim that the proposed approach is \"more robust to deformation and clutter” then they should design an experiment that shows that this is the case. \n\nBeyond, the approach seems a little ad hoc. No real rationale is provided for the different mechanisms including the gating etc and certainly no experimental validation is provided to demonstrate the need for these mechanisms. More generally, it is not clear from reading the paper specifically what computational limitation of the CNN is being solved by the proposed attentional mechanism. \n\nSome of the masks shown in Fig 3 seem rather suspicious and prompt this referee to think that the networks are seriously overfitting to the data. For instance, why would attending to a right ear help in gender recognition? \n\nThe proposed extension adds several hyperparameters (for instance the number K of attention heads). Apologies if I missed it but I am not clear how this was optimized for the experiments reported. In general, the paper could be clearer. For instance, it is not clear from either the text or Fig 2 how H goes from XxYxK for the attention head o XxYxN for the output head.\n\nAs a final point, I would say that while some of the criticisms could be addressed in a revision, the improvements seem relatively modest. 
Given that the focus of the paper is already limited to fine-grained recognition, it seems that the paper would be better suited for a computer vision conference.\n\n\nMinor point: \n\n\"we incorporate the advantages of visual and biological attention mechanisms” not sure this statement makes much sense. Seems like visual and biological are distinct attributes but visual attention can be biological (or not, I guess) and it is not clear how biological the proposed approach is. Certainly no attempt is made by the authors to connect to biology.\n\n\"top-down feed-forward attention mechanism” -> it should be just feed-forward attention. Not clear what \"top-down feed-forward” attention could be...", "This paper proposes a feed-forward attention mechanism for fine-grained image classification. It is modular and can be added to any convolutional layer, the attention model uses CNN feature activations to find the most informative parts then combine with the original feature map for the final prediction. Experiments show that wide residual net together with this new attention mechanism achieve slightly better performance on several fine-grained image classification tasks.\n\nStrength of this work:\n1) It is end-to-end trainable and doesn't require multiple stages, prediction can be done in single feedforward pass.\n2) Easy to train and doesn't increase the model size a lot.\n\nWeakness:\n1) Both attention depth and attention width are small. The choice of which layer to add this module is unclear to me. \n2) No analysis on using the extra regularization loss actually helps.\n3) My main concern is the improvement gain is very small. In Table3, the gain of using the gate module is only 0.1%. It argues that this attention module can be added to any layer but experiments show only 1 layer and 1 attention map already achieve most of the improvement. From Table 4 to Table 7, WRNA compared to WRN only improve ~1% on average. \n", "Paper presents an interesting attention mechanism for fine-grained image classification. Introduction states that the method is simple and easy to understand. However, the presentation of the method is bit harder to follow. It is not clear to me if the attention modules are applied over all pooling layers. How they are combined? \n\nWhy use cross -correlation as the regulariser? Why not much stronger constraint such as orthogonality over elements of M in equation 1? What is the impact of this regularisation?\n\nWhy use soft-max in equation 1? One may use a Sigmoid as well? Is it better to use soft-max?\n\nEquation 9 is not entirely clear to me. Undefined notations.\n\nIn Table 2, why stop from AD= 2 and AW=2? What is the performance of AD=1, AW=1 with G? Why not perform this experiment over all 5 datasets? Is this performances, dataset specific?\n\nThe method is compared against 5 datasets. Obtained results are quite good.\n\n", "We thank the reviewer for the feedback,\n\n>> First, some claimed are made about how the proposed approach \"enhances most of the desirable properties from previous approaches” (see pp 1-2) but these claims are never backed up. 
\n\nWith this sentence we tried to convey that the proposed model accumulates the best of the following properties in the literature: (i) it works in a single pass because it uses a single feed-forward CNN, differently from recurrent and two-step models, (ii) it is trained with SGD instead of RL, thus it presents faster convergence and it does not require sampling, (iii) it can be used to augment any architecture, as we show for WRNs, and (iv) it is simple to implement (instead of creating a whole new network architecture, we just add the attention heads and the attention outputs to an already existing one, eq 9). In order to better back up these properties, we have added a table in the introduction (Table 2) comparing the different architectures in the literature with ours, showing that ours accumulates the best of them.\n\n>> More generally since the paper focuses on attention, other attentional approaches should be used as benchmarks beyond the WRN baseline. \n\nPlease note that we include other attentional approaches for fine-grained recognition in all tables: \n * Table 3 -> FAM [A]; \n * Table 4 -> RA-CNN [B], STN [C], B-CNN [D], PD [E], FCAN [F]; \n * Table 5 -> DVAN [G], FCAN [F], B-CNN [C], RA-CNN [B]; \n * Table 6 -> DVAN [G], FCAN [F], RA-CNN [B]. \n\nMoreover, most of these approaches propose singular architectures that have been especially engineered for solving their respective recognition tasks, while the purpose of our approach is to demonstrate that our proposed mechanism works on general purpose architectures.\n\n[A] Rodríguez, P., Cucurull, G., Gonfaus, J. M., Roca, F. X., & Gonzalez, J. (2017). Age and gender recognition in the wild with deep attention. Pattern Recognition, 72, 563-571.\n[B] Fu, J., Zheng, H., & Mei, T. (2017, July). Look closer to see better: recurrent attention convolutional neural network for fine-grained image recognition. In Conf. on Computer Vision and Pattern Recognition.\n[C] Jaderberg, M., Simonyan, K., & Zisserman, A. (2015). Spatial transformer networks. In Advances in Neural Information Processing Systems (pp. 2017-2025).\n[D] Lin, T. Y., RoyChowdhury, A., & Maji, S. (2015). Bilinear cnn models for fine-grained visual recognition. In Proceedings of the IEEE International Conference on Computer Vision (pp. 1449-1457).\n[E] Zhang, N., Donahue, J., Girshick, R., & Darrell, T. (2014, September). Part-based R-CNNs for fine-grained category detection. In European conference on computer vision (pp. 834-849). Springer, Cham.\n[F] Liu, X., Xia, T., Wang, J., & Lin, Y. (2016). Fully convolutional attention localization networks: Efficient attention localization for fine-grained recognition. arXiv preprint arXiv:1603.06765.\n[G] Zhao, B., Wu, X., Feng, J., Peng, Q., & Yan, S. (2016). Diversified visual attention networks for fine-grained object classification. arXiv preprint arXiv:1606.08572.", ">> If the authors want to claim that the proposed approach is \"more robust to deformation and clutter” then they should design an experiment that shows that this is the case. \n\nIn the new introduced experiments on Cluttered Translated MNIST (Section 4.1 in the new version of the paper), we confirm that indeed the proposed method is more robust than the baseline.\n\n>> Beyond, the approach seems a little ad hoc. No real rationale is provided for the different mechanisms including the gating etc and certainly no experimental validation is provided to demonstrate the need for these mechanisms. 
\n\nIn section 3, the rationale for the different mechanisms is mentioned in each of the subsections. For instance, gates regulate the relative importance of the predictions of each attention head. This is important when AW is high and the current input has a few informative regions. In this case, just one attention head would be enough and thus, heads focusing in other regions can be dampened by the gates. This explanation is now added in section 3.4 and the conclusion.\nIn addition, we have included experiments on cluttered MNIST showing that gates are critical to obtaining good performances with high AW (see Figure 4d in the new version of the paper).\n\n>> More generally, it is not clear from reading the paper specifically what computational limitation of the CNN is being solved by the proposed attentional mechanism. \n\nThe proposed paper addresses the same problem as other attentional methods in the literature [A,B,C, ...], i.e. it enhances the model to find the most informative parts of the image and to discard irrelevant information. This is especially relevant for fine-grained recognition, where some details are more informative than other salient features of the image.\n\n[A] Ba, J., Mnih, V., & Kavukcuoglu, K. (2014). Multiple object recognition with visual attention. ICLR2015.\n[B] Xu, K., Ba, J., Kiros, R., Cho, K., Courville, A., Salakhudinov, R., ... & Bengio, Y. (2015, June). Show, attend and tell: Neural image caption generation with visual attention. In International Conference on Machine Learning (pp. 2048-2057).\n[C] Jaderberg, M., Simonyan, K., & Zisserman, A. (2015). Spatial transformer networks. In Advances in Neural Information Processing Systems (pp. 2017-2025).\n\n>> Some of the masks shown in Fig 3 seem rather suspicious and prompt this referee to think that the networks are seriously overfitting to the data.\n\nThe train loss vs validation loss difference does not suggest that the proposed model suffers from greater overfitting than the original architecture. Moreover, inspired by this comment, we have designed a test on cluttered MNIST showing that the attention augmented model generalizes better on the test set when increasing the number of distractors (unseen during training), see Figure 4e in the new version of the paper. We hypothesize that attention prevents the model from memorizing uninformative parts of the image, which could be associated with noise. Section 4.1 and the conclusion now reflect this new finding. \n", ">> For instance, why would attending to a right ear help in gender recognition? \n\nIn the Adience dataset, most women wear earrings, so the network might have learned to look at ears whenever possible.\n\n>> The proposed extension adds several hyperparameters (for instance the number K of attention heads). Apologies if I missed it but I am not clear how this was optimized for the experiments reported. In general, the paper could be clearer. For instance, it is not clear from either the text or Fig 2 how H goes from XxYxK for the attention head o XxYxN for the output head.\n\nIn Figure 2a, Z (of size XxYxN) is convolved with K XxYx1 masks. When these are multiplied by Z again, we obtain K XxYxN feature maps (broadcasting). Figure 2b depicts the output process for 1 of the K XxYxN feature maps. We have updated Figure2 to include all this information.\n\n>> As a final point, I would say that while some of the criticisms could be addressed in a revision, the improvements seem relatively modest. 
\n\nGiven that we use a strong baseline such a WRN, we do not think that the improvements are modest. Please note that other relevant papers such as Spatial Transformer Networks [A] only reported an improvement of 0.8% on CUB200-2011 with respect to their own baseline (see table 3 in their work), and residual attention networks report 0.05% improvement on Cifar100 [B]. Most importantly, the improvement is consistently obtained across datasets, and in 3 datatsets we outperform current state of the art.\n\n[A] Jaderberg, M., Simonyan, K., & Zisserman, A. (2015). Spatial transformer networks. In Advances in Neural Information Processing Systems (pp. 2017-2025).\n[B] Wang, F., Jiang, M., Qian, C., Yang, S., Li, C., Zhang, H., ... & Tang, X. (2017). Residual Attention Network for Image Classification. CVPR2017.\n\n>> Given that the focus of the paper is already limited to fine-grained recognition, it seems that the paper would be better suited for a computer vision conference.\n\nThis work helps to build better feature representations applied to Computer Vision, which it is clearly inside the scope of this conference, from the website (http://www.iclr.cc/): “The performance of machine learning methods is heavily dependent on the choice of data representation (or features) on which they are applied…. Applications in vision, audio, speech, natural language processing, robotics, neuroscience, or any other field...”\n\n>>Minor point: \"we incorporate the advantages of visual and biological attention mechanisms” not sure this statement makes much sense. Seems like visual and biological are distinct attributes but visual attention can be biological (or not, I guess) and it is not clear how biological the proposed approach is. Certainly no attempt is made by the authors to connect to biology.\n\nThe way the proposed mechanism relates to biological attention is similar to the relationship between artificial and real neural networks, this is similarly done in [A]. Thus we have corrected the statement for “we incorporate the advantages inspired by visual and biological attention mechanisms, as stated in [A]”\n\n[A] Ba, J., Mnih, V., & Kavukcuoglu, K. (2014). Multiple object recognition with visual attention. ICLR2015.\n\n>> \"top-down feed-forward attention mechanism” -> it should be just feed-forward attention. Not clear what \"top-down feed-forward” attention could be…\n\nIn the literature, bottom-up attention is referred to the process of finding the most relevant regions of the image at the feature level, i.e. regions that are salient from their surroundings, while top-down attention refers to a high level process which finds the most relevant part of an input taking into account global information [A] (in a CNN top-down usually means to choose the regions to attend at the output instead of directly doing it at the feature level, which is the case for [B,C].)\n\n[A] Connor, Charles E., Howard E. Egeth, and Steven Yantis. \"Visual attention: bottom-up versus top-down.\" Current Biology14.19 (2004): R850-R852.\n[B] Oliva, A., Torralba, A., Castelhano, M. S., & Henderson, J. M. (2003, September). Top-down control of visual attention in object detection. In Image processing, 2003. icip 2003. proceedings. 2003 international conference on (Vol. 1, pp. I-253). IEEE.\n[C] Rodríguez, P., Cucurull, G., Gonfaus, J. M., Roca, F. X., & Gonzalez, J. (2017). Age and gender recognition in the wild with deep attention. 
Pattern Recognition, 72, 563-571.\n", "Thanks for the feedback,\n\n>> 1) Both attention depth and attention width are small. \n\nAlthough higher AD and AW do result in an increment of accuracy, we considered that 2 was enough to demonstrate that the proposed mechanism enhances the baseline models at negligible computational cost. In order to address this concern, we have included experiments on deformable mnist where it can be seen that the performance increases with higher AW, and AD (Figure 4b and 4c in the new version of the paper). \n\n>> The choice of which layer to add this module is unclear to me. \n\nPlease note that the same placing problem is present in most of the well-known CNN layers such as Dropout, Local-contrast normalization, Spatial Transformers, etc. \nHowever, as we answer to R1’s first question, we have established a systematic methodology which consists in adding the attention mechanism after each subsampling layer of the WRN in order to obtain features of different levels at the smallest possible computational cost. This information is now included at the end of section 3.3, when Table 2 is introduced, and in the second paragraph of section 4. \n\n>> 2) No analysis on using the extra regularization loss actually helps.\n\nThe analysis has been included in section 4.1, where experiments with Cluttered Translated MNIST show that regularization adds an extra performance increment.\n\n>> 3) My main concern is the improvement gain is very small. In Table3, the gain of using the gate module is only 0.1%. It argues that this attention module can be added to any layer but experiments show only 1 layer and 1 attention map already achieve most of the improvement. \n\nWe hope that the new experiments on Cluttered Translated MNIST in section 4.1 help to clarify this point. Also, as it can be seen in Figure 4d, gates are crucial when AD and AW grow. \n\n>> From Table 4 to Table 7, WRNA compared to WRN only improve ~1% on average. \n\nPlease note that 1% is a remarkable amount given that, for instance, other relevant papers such as Spatial Transformer Networks only reported an improvement of 0.8% on CUB200-2011 with respect to their own baseline (see table 3 in their work). Moreover, in the case of residual attention networks [B], the reported improvement on Cifar100 is 0.05%.\n\n[A] Jaderberg, M., Simonyan, K., & Zisserman, A. (2015). Spatial transformer networks. In Advances in Neural Information Processing Systems (pp. 2017-2025).\n[B] Wang, F., Jiang, M., Qian, C., Yang, S., Li, C., Zhang, H., ... & Tang, X. (2017). Residual Attention Network for Image Classification. CVPR2017.\n", "Thank you for your comments,\n\n>> However, the presentation of the method is bit harder to follow. It is not clear to me if the attention modules are applied over all pooling layers.\n\nAny layer of the network can be augmented with the attention mechanism. We chose to use the augmentation after each pooling layer in order to reduce even further the computational cost. We have clarified this point at the end of section 3.3, when Table 2 is introduced, and in the second paragraph of section 4.\n\n>> How they are combined? \n\nAs it can be seen in Fig 1, 2a, and 2b, a 1x1 convolution is applied to the output of the layer we want to augment, producing an attentional heatmap. This heatmap is then element-wise multiplied with a copy of the layer output, and the result is used to predict the class probabilities and a confidence score. 
This process is applied to an arbitrary number N of layers, producing N class probability vectors, and N confidence scores. Then all the class predictions are weighted by the confidence scores (softmax normalized so that they add-up to 1) and averaged (using Eq 9). This is the final combined prediction of the network. This overall explanation is now placed in the “Overview” section before section 3.1.\n\n>> Why use cross -correlation as the regulariser? Why not much stronger constraint such as orthogonality over elements of M in equation 1? \n\nPlease note that the 2-norm operation requires to square all the elements of the matrix, thus the minimum norm is achieved when the inner product of all the different pairs of masks is 0 (orthogonal). Thus, orthogonality is constrained by regularizing the 2-norm of a matrix. This is now clarified after Eq 3.\n\n>> What is the impact of this regularisation?\n\nIn order to address questions R1.1, etc.. we have added experiments on deformable mnist, showing the importance of each module. In figure 4d it can be seen that the regularized model performs better than the unregularized counterpart.\n\n>> Why use soft-max in equation 1? One may use a Sigmoid as well? Is it better to use soft-max?\n\nWe use softmax because it constrains the network to choose only one region in the image, thus forcing it to learn which is the most discriminative region. Using sigmoids attains the risk of just learning to predict 1s for every region, or all zeros. Note that multiple regions can still be identified by using multiple attention heads. This explanation has been included in section 3.1.\n\n>> Equation 9 is not entirely clear to me. Undefined notations.\n\n“output” is the predicted vector of class probabilities, “g_net” is the confidence score for the original output of the network “output_net” (without attention). This information has been appended after equation 9.\n\n>> In Table 2, why stop from AD= 2 and AW=2? What is the performance of AD=1, AW=1 with G? Why not perform this experiment over all 5 datasets? \n\nWe had to constrain the number of experiments to a limited amount of time and resources, which makes it difficult to brute-force all hyperparameter combinations with all datasets. We hope that this question is now clarified with the experiments on deformable-mnist (Section 4.1 in the new version of the paper).\n\n>> Is this performances, dataset specific?\n\nNo, generally increasing AD and AW results in better performance in all datasets.\n", "We thank all the reviewers for their highly valuable feedback. We have addressed all comments one by one, and we have accordingly updated the manuscript. Changes appear in blue.\nList of changes:\n * A new table (Table 2) in the introduction has been added to clarify the advantages of our proposal with respect to the literature.\n * An overview section has been added to section 3 (Section 3.1) to summarize and clarify how the different submodules fit together.\n * Undefined notations have been clarified in equation 9.\n * Ablation experiments on cluttered translated MNIST have been introduced in section 4.1.\n * Textual clarifications addressing comments from the reviewers.\n\nThanks to these improvements resulting from the review process, the manuscript has substantially clarified the contribution of our work, has improved the technical quality and has also enhanced the experimental quality. We trust these improvements make it even more appealing for publication at this conference.\n" ]
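The per-layer attention module described in the exchange above (a 1x1 convolution producing spatial attention heads, per-head class predictions, a confidence gate per module, and a regularizer pushing the masks towards orthogonality) can be sketched roughly as follows. This is a minimal PyTorch illustration with assumed shapes and layer names, not the authors' implementation; the exact head/gate wiring and the precise form of the regularizer in the paper may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionModule(nn.Module):
    """One attention module attached to an intermediate feature map Z of shape (B, N, X, Y)."""

    def __init__(self, in_channels, num_heads, num_classes):
        super().__init__()
        self.num_heads = num_heads
        # A single 1x1 convolution produces K spatial attention heads over the X*Y grid.
        self.att_conv = nn.Conv2d(in_channels, num_heads, kernel_size=1)
        # One output head per attention map: class logits from attention-pooled features.
        self.out_heads = nn.ModuleList(
            [nn.Linear(in_channels, num_classes) for _ in range(num_heads)]
        )
        # Scalar confidence (gate) logit for this module's prediction.
        self.gate = nn.Linear(in_channels, 1)

    def forward(self, z):
        b, c, h, w = z.shape
        # Softmax over spatial locations, so each head (softly) selects one region.
        att = F.softmax(self.att_conv(z).view(b, self.num_heads, h * w), dim=-1)
        att = att.view(b, self.num_heads, h, w)
        # Element-wise product of each mask with the features, summed over space
        # (i.e. attention-weighted pooling, one pooled vector per head).
        pooled = torch.einsum("bkhw,bchw->bkc", att, z)
        # Average the per-head class predictions of this module.
        logits = torch.stack(
            [head(pooled[:, k]) for k, head in enumerate(self.out_heads)], dim=1
        ).mean(dim=1)
        gate_logit = self.gate(z.mean(dim=(2, 3)))  # (B, 1)
        return logits, gate_logit, att

    @staticmethod
    def mask_regularizer(att):
        # Penalizes the inner products between every pair of flattened masks,
        # pushing the K attention heads towards (near-)orthogonality.
        b, k = att.shape[0], att.shape[1]
        m = att.reshape(b, k, -1)
        gram = torch.bmm(m, m.transpose(1, 2))  # (B, K, K) cross-correlations
        off_diag = gram - torch.diag_embed(torch.diagonal(gram, dim1=-2, dim2=-1))
        return off_diag.pow(2).mean()


def combine_predictions(logits_list, gate_logits_list):
    # Gates are softmax-normalized across modules (including the plain network output),
    # then used to weight and average the per-module class probabilities, as in Eq. 9.
    gates = F.softmax(torch.cat(gate_logits_list, dim=1), dim=1)             # (B, M)
    probs = torch.stack([F.softmax(l, dim=-1) for l in logits_list], dim=1)  # (B, M, C)
    return (gates.unsqueeze(-1) * probs).sum(dim=1)
```

The spatial softmax corresponds to the "choose one region" argument made in the response about softmax versus sigmoid, and the softmax-normalized gate logits play the role of the confidence scores that weight each module's prediction in the final combined output.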
[ -1, -1, -1, -1, 5, 5, 6, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, -1, -1, 4, 4, 4, -1, -1, -1, -1, -1, -1 ]
[ "H1VpcYQSz", "iclr_2018_rJe7FW-Cb", "BJZ7a4ING", "H1H_vJ_QM", "iclr_2018_rJe7FW-Cb", "iclr_2018_rJe7FW-Cb", "iclr_2018_rJe7FW-Cb", "rkzOQxcgM", "rkzOQxcgM", "rkzOQxcgM", "Sky96rolf", "ry2OdYCeM", "iclr_2018_rJe7FW-Cb" ]
iclr_2018_BkiIkBJ0b
Do Deep Reinforcement Learning Algorithms really Learn to Navigate?
Deep reinforcement learning (DRL) algorithms have demonstrated progress in learning to find a goal in challenging environments. As the title of the paper by Mirowski et al. (2016) suggests, one might assume that DRL-based algorithms are able to “learn to navigate” and are thus ready to replace classical mapping and path-planning algorithms, at least in simulated environments. Yet, from experiments and analysis in this earlier work, it is not clear what strategies are used by these algorithms in navigating the mazes and finding the goal. In this paper, we pose and study this underlying question: are DRL algorithms doing some form of mapping and/or path-planning? Our experiments show that the algorithms are not memorizing the maps of mazes at the testing stage but, rather, at the training stage. Hence, the DRL algorithms fall short of qualifying as mapping or path-planning algorithms with any reasonable definition of mapping. We extend the experiments in Mirowski et al. (2016) by separating the set of training and testing maps and by a more ablative coverage of the space of experiments. Our systematic experiments show that the NavA3C-D1-D2-L algorithm, when trained and tested on the same maps, is able to choose the shorter paths to the goal. However, when tested on unseen maps the algorithm utilizes a wall-following strategy to find the goal without doing any mapping or path planning.
rejected-papers
This paper received divergent ratings (7, 3, 3). While there is value in thorough evaluation papers, this manuscript has significant presentation issues. As all three reviewers point out, as it is currently written the manuscript misrepresents the claims made by Mirowski et al. (2016) and over-reaches in its findings. Unfortunately, we cannot make a decision based on what the manuscript may look like in the future once these issues are fixed, and must reject.
train
[ "rypaPgFxz", "HyDUxCKlf", "ryYlB69gz", "rkXA-Wvzf", "H1lxCJ-bM", "BJ3G3JbZf", "BkID51bWM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "public", "author", "author", "author" ]
[ "The paper evaluates one proposed Deep RL-based model (Mirowski et al. 2016) on its ability to generally navigate. This evaluation includes training the agent on a set of training mazes and testing it's performance on a set of held-out test mazes. Evaluation metrics include repeated latency to the goal and comparison to the shortest route. Although there are some (minor) differences between the implementation with Mirowski et al. 2016, I believe the conclusions made by the authors are mostly valid. \n\nI would firstly like to point out that measuring generalization is not standard practice in RL. Recent successes in Deep RL--including Atari and AlphaGo all train and test on exactly the same environment (except for random starts in Atari and no two games of Go being the same). Arguably, the goal of RL algorithms is to learn to exploit their environment as quickly as possible in order to attain the highest reward. However, when RL is applied to navigation problems it is tempting to evaluate the agent on unseen maps in order to assess weather the agent has learned a generic mapping & planning policy. In the case of Mirowski et al. this means that the LSTM has somehow learned to do general SLAM in a meta-learning sense. To the best of my knowledge, Mirowski et al. never made such a bold claim (despite the title of their paper). \n\nSecondly, there seems to be a big disconnect between attaining a high score in navigation tasks and perfectly solving them by doing general SLAM & optimal path planning. Clearly if the agent receives the maximal possible reward for a well designed navigation task it must, by definition, be doing perfect SLAM & path planning. However at less than optimal performance the reward fails to quality the agent's ability to do SLAM. The relationship between reward and ability to do general SLAM is not clear. Therefore it is my opinion that reinforcement learning approaches to SLAM lack a concrete goal in what they are trying to show. \n\nMinor points: Section 5.3 Square map: how much more reward will the agent gain by taking the optimal path? Perhaps not that much? Wrench map: the fact that the paths taken by the agent are not distributed evenly makes me suspicious. Could the authors generate many wrench maps (same topology, random size, random wall textures) to make sure there is no bias? ", "Science is about reproducible results and it is very commendable from scientists to hold their peers accountable for their work by verifying their results. It is also necessary to inspect claims that are made by researchers to avoid the community straying in the wrong direction. However, any critique needs to be done properly, by 1) attending to the actual claims that were made in the first place, by 2) reproducing the results in the same way as in the original work, 3) by avoiding introducing false claims based on a misunderstanding of terminology and 4) by extensively researching the literature before trying to affirm that a general method (here, Deep RL) cannot solve certain tasks.\n\nThis paper is a critique of deep reinforcement learning methods for learning to navigate in 3D environments, and seems to focus intensively on one specific paper (Mirowski et al, 2016, “Learning to Navigate in Complex Environments”) and one of the architectures (NavA3C+D1D2L) from that paper. It conducts an extensive assessment of the methods in the critiqued paper but does not introduce any alternative method. 
For this reason, I had to carefully re-read the critiqued paper to be able to assess the validity of the arguments made in this submission and to evaluate its merit from the point of view of the quality of the critique. The (Mirowski et al, 2016) paper shows that a neural network-based agent with LSTM-based memory and auxiliary tasks such as depth map prediction can learn to navigate in fixed environments (3D mazes) with a fixed goal position (what they call “static maze”), and in fixed mazes with changing goal environments (what they call “environments with dynamic elements” or “random goal mazes”).\n\nThis submission claims that:\n[a] “[based on the critiqued paper] one might assume that DRL-based algorithms are able to 'learn to navigate' and are thus ready to replace classical mapping and path-planning algorithms”,\n[b] “following training and testing on constant map structures, when trained and tested on the same maps, [the NavA3C+D1D2L algorithm] is able to choose the shorter paths to the goal”,\n[c] “when tested on unseen maps the algorithm utilizes a wall-following strategy to find the goal without doing any mapping or path planning”,\n[d] “this state-of-the-art result is shown to be successful on only one map, which brings into question the repeatability of the results”,\n[e] “Do DRL-based navigation algorithms really 'learn to navigate'? Our results answer this question negatively.”\n[f] “we are the first to evaluate any DRL-based navigation method on maps with unseen structures”\n\nThe paper also conducts an extensive analysis of the performance of a different version of the NavA3C+D1D2L algorithm (without velocity inputs, which probably makes learning path integration much more difficult), in the same environments but by introducing unjustified changes (e.g., with constant velocities and a different action space) and with a different reward structure (incorporating a negative reward for wall collisions). While the experimental setup does not match (Mirowski et al, 2016), thereby invalidating claim [d], the experiments are thorough and do show that that architecture does not generalize to unseen mazes. The use of attention heat maps is interesting.\n\nThe main problem however is that it seems that this submission completely misrepresents the intent of (Mirowski et al, 2016) by using a straw man argument, and makes a rather unacademic and unsubstantiated accusation of lack of repeatability of the results.\n\nRegarding the former, I could not find any claim that the methods in (Mirowski et al, 2017) learn mapping and path planning in unseen environments, that could support claim [a]. More worryingly, when observing that the method of (Mirowski et al, 2017) may not generalize to unseen environments in claim [c], the authors of this submission seem to confuse navigation, cartography and SLAM, and attribute to that work claims that were never made in the first place, using a straw man argument. Navigation is commonly defined as the goal driven control of an agent, following localization, and is a broad skill that involves the determination of position and direction, with or without a map of the environment (Fox 1998, ” Markov Localization: A Probabilistic Framework for Mobile Robot Localization and Navigation”). 
This widely accepted definition of navigation does not preclude being limited to known environments only.\n\nRegarding repeatability, the claim [d] is contradicted in section 5 when the authors demonstrate that the NavA3C+D1D2L algorithm does achieve a reduction in latency to goal in 8 out of 10 experiments on random goal, static map and random or static spawns. The experiments in section 5.3 are conducted in simple but previously unseen maps and cannot logically contradict results (Mirowski et al, 2016) achieved by training on static maps such as their “I-maze”. Moreover, claim [d] about repeatability is also invalidated by the fact that the experiments described in the paper use different observations (no velocity inputs), different action space, different reward structure, with no empirical evidence to support these changes. It seems, as the authors also claim in [b], that the work of (Mirowski et al, 2017), which was about navigation in known environments, actually is repeatable.\n\nAdditionally, some statements made by the authors are demonstrably untrue. First, the authors claim that they are the first to train DRL agents in all random mazes [f], but this has been already shown in at least two publications (Mnih et al, 2016 and Jaderberg et al, 2016).\n\nSecond, the title of the submission, “Do Deep Reinforcement Learning Algorithms Really Learn to Navigate” makes a broad statement [e] that cannot be logically invalidated by only one particular set of experiments on a particular model and environment, particularly since it directly targets one specific paper (out of several recent papers that have addressed navigation) and one specific architecture from that paper, NavA3C+D1D2L (incidentally, not the best-performing one, according to table 1 in that paper). Why did the authors not cite and consider (Parisotto et al, 2017, “Neural Map: Structured Memory for Deep Reinforcement Learning”), which explicitly claims that their method is “capable of generalizing to environments that were not seen during training”? It seems that the authors need to revise both their bibliography and their logical reasoning: one cannot invalidate a broad set of algorithms for a broad goal, simply by taking a specific example and showing that it does not fit a particular interpretation of navigation *in previously unseen environments*.\n", "This paper proposes to re-evaluate some of the methods presented in a previous paper with a somewhat more general evaluation method. \n\nThe previous paper (Mirowski et al. 2016) introduced a deep RL agent with auxiliary losses that facilitates learning in navigation environments, where the tasks were to go from a location to another in a first person viewed fixed 3d maze, with the starting and goal locations being either fixed or random. This proposed paper rejects some of the claims that were made in Mirowski et al. 2016, mainly the capacity of the deep RL agent to learn to navigate in such environments. \n\nThe proposed refutation is based on the following experiments:\n- an agent trained on random maps does much worse on fixed random maps that an agent trained on the same maps its being evaluated on (figure 4)\n- when an agent is trained on fixed number of random map, its performance on random unseen maps doesn't increase with the number of training maps beyond ~100 maps. (figure 5). 
The authors argue that the reason for those diminishing returns is that the agent is actually learning a trivial wall following strategy that doesn't benefit from more maps.\n- when evaluated on hand designed small maps, the agent doesn't perform very well (figure 6).\n\nThere is addition experimental data reported which I didn't find very conclusive nor relevant to the analysis, particularly the attention heat map and the effect of apples and texture.\n\nI don't think any of the experiments reported actually refute any of the original paper's claim. All of the reported results are what you would expect. It boils down to these simple commonly known facts about deep RL agents:\n- When evaluated outside of its training distribution, it might not generalized very well (figure 4/6)\n- It has a limited capacity so if the distribution of environments is too large, its performance will plateau (figure 5). By the way to me results presented in figure 5 are not enough to claim that the agent trained on random map is implementing a purely reactive wall-following strategy. In fact, an interesting experiment here would have been to do ablation studies e.g. by replacing the LSTM with a feed forward fully connected network. To me the reported performance plateau with number of map size is normal expected behavior, only symptomatic that this deep RL agent has finite capacity.\n\nI think this paper does not provide compelling pieces of evidence of unexpected pathological behavior in the previous paper, and also does not provide any insight of how to improve upon and address the obvious limitations of previous work. I therefore recommend not to accept this paper in its current form.", "I really find that some of the results in the paper are inspiring. I'd like to provide my thoughts on the possible reason for the results observed when training on random maps.\n\n> We do not think it is because the model is saturating out.\n\nIn contrast with the reviewer's opinion, I also do not think the model is saturating out, either. My understanding is that if the agent is trained on a distribution of random maps, according to the formulation of AC (particularly, the value function is an estimation of the expected future reward), isn't it the case that the agent should learn to perform an \"average\" of behavior even on a particular test map (especially when the goal is not in the view)? Note that this problem is a POMDP, so the agent can only estimate an average future reward when only the environment is partially observed (imagine in a different map but the agent sees the same thing and the goal is in a different location). Because the training reward is designed in a way that there is always a chance of apple appearing in a grid, then the correct \"average\" behavior should be wall-following?", "> Minor points: Section 5.3 Square map: how much more reward will the agent gain by taking the optimal path? Perhaps not that much? \n\nOUR RESPONSE: We don't know. Yes, probably not much. We can perform that experiment and that number.\n\n> Wrench map: the fact that the paths taken by the agent are not distributed evenly makes me suspicious. Could the authors generate many wrench maps (same topology, random size, random wall textures) to make sure there is no bias? \n\nOUR RESPONSE: Yes we can add more experiments. However, we do not think that there is anything suspicious about paths being evenly distributed. 
We think the exploration strategy learned by the agent closely mirrors a randomized version of bug exploration algorithm. We think the bias helps the algorithm avoid taking random turns canceling itself out often. \n", "> The paper also conducts an extensive analysis of the performance of a different version of the NavA3C+D1D2L .... The use of attention heat maps is interesting.\n\nOUR RESPONSE: The critique here stems from the misunderstanding that we are claiming Mirowski et al. results are not reproducible or false. In fact, we get similar results on static maps with slight different architecture, proving that the work was reproducible. Having said that we do stress that the _Latency 1:>1_ metric result is meaningfully good for only one map in Mirowski et al., not to claim that it is false but to stress the need to evaluate on a bigger set of experiments. Had we not got similar results as Mirowski et al. under similar kind of maps, we would have been pushed to make the architecture exactly same as Mirowski's. \n\n> The main problem however is that it seems that this submission completely misrepresents the intent of (Mirowski et al, 2016) by using a straw man argument, and makes a rather unacademic and unsubstantiated accusation of lack of repeatability of the results.\n\nOUR RESPONSE: We did not make any such claim. At least we did not intend to make any such claim. We never said in our paper that Mirowski et al claimed that their algorithm works on unseen maps.\n\n> Regarding the former, ... This widely accepted definition of navigation does not preclude being limited to known environments only.\n\nOUR RESPONSE: This part of the criticism seem to arise from disagreement on the definition of the word \"navigation\". In claim [a] we are careful with our use of words. We believe that there is no agreement on the \"widely accepted definition\" of the word \"navigation.\" Based on one's understanding of the word \"navigation\", \"one might assume\" that the algorithm might generalize to unseen worlds. We think that criticism of our work, based on our choice of definition of \"navigation\" is unfair. It is even unfair to cite a single paper to impose the reviewers definition of the word.\n\nThe other part of the criticism that we use a \"straw man\" is again wrong because we do not intend to show pathology with Mirowski et al. paper, experiments or claims. In other words we are raising the standards what capability a \"learning to navigate\" paper should be demonstrating.\n\n> Regarding repeatability, the claim [d] ... actually is repeatable.\n\nOUR RESPONSE: This criticism is again based on misunderstanding of sentence [d].\nWe do not claim that our experiments contradict Mirowski et al. results. We use the sentence [d] as part of our motivation that doing experiments only one random map makes results of scientific work questionable and must be repeated on a larger set of randomized samples. Yes as claimed in [b] our results actually support Mirowski et al.'s results.\n\n> Additionally,.... publications (Mnih et al, 2016 and Jaderberg et al, 2016).\n\nOUR RESPONSE: This criticism again stems from the mis-communication of our claim and disagreement on definition of word \"navigation.\" Our use of the phrase \"DRL-based navigation method\", implied application of DRL-based method on the task of \"navigation\". Both Mnih et al. 2016, Jaderberg et al. 
2016 evaluate their algorithms on navigation agnostic metrics like cumulative reward or human normalized score instead of navigation specific metrics like \"Latency 1:>1\" or \"distance efficiency\". Evaluation on navigation agnostic metric means that agent could be exploring the maze better instead of actually doing better than wall following. Also, Mnih et al, 2016 do generate random maps but they chose a random map and train and test on the same random map. This is different from our claim on testing DRL-based navigation methods on unseen environments which means that test map should be structurally different from train map and the DRL\n\n> Second, .... environments*.\n\nOUR RESPONSE: We accept that our title makes a broad statement just Mirowski et al 2016 did. We did overplay our work which led to all the confusion and misunderstanding. Our claim was not to falsify the results in Mirowski et al 2016. We should have titled the paper \"Raising the bar for ``learning to navigate''.\" We can still do that if reviewers agree. We provided a rigorous set of experiments and metrics that can be used to justify if an algorithm has actually \"learned to navigate\".\n\nNeural Map is a relevant paper but we feel that is unreasonable to criticize us for not citing and considering an non-peer reviewed ArXiV paper.\n", "> I don't think any of the experiments reported actually refute any of the original paper's claim. All of the reported results are what you would expect. It boils down to these simple commonly known facts about deep RL agents:\n\nOUR RESPONSE: We do not refute any of the original paper's experimental claims. We question the claim whether the algorithm actually \"learns to navigate\" as one might mistakenly interpret the title to be. Our contribution is to bring into light the limitations of their algorithm rather than to refute their experimental claims.\n\n> - When evaluated outside of its training distribution, it might not generalized very well (figure 4/6)\n> - It has a limited capacity so if the distribution of environments is too large, its performance will plateau (figure 5). By the way to me results presented in figure 5 are not enough to claim that the agent trained on random map is implementing a purely reactive wall-following strategy. In fact, an interesting experiment here would have been to do ablation studies e.g. by replacing the LSTM with a feed forward fully connected network. To me the reported performance plateau with number of map size is normal expected behavior, only symptomatic that this deep RL agent has finite capacity.\n\nOUR RESPONSE: Yes, those facts are not only true about Reinforcement learning but also about machine learning in general. Those are commonly know facts about machine learning in general. Yet most of the problems in machine learning are about finding models that generalize from training distribution to test distribution. Our experiments show that the NavA3C-D1D2L does not generalize to test distribution. We do not think it is because the model is saturating out. We can definitely add the experiments as the reviewer suggest.\n\n> I think this paper does not provide compelling pieces of evidence of unexpected pathological behavior in the previous paper, and also does not provide any insight of how to improve upon and address the obvious limitations of previous work. 
I therefore recommend not to accept this paper in its current form.\n> Confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct\n\nOUR RESPONSE: We did not claim unexpected pathological behavior in the previous paper. We pointed out the failure cases of the algorithm in what we though a navigation task should be.\n\n" ]
[ 7, 3, 3, -1, -1, -1, -1 ]
[ 4, 5, 4, -1, -1, -1, -1 ]
[ "iclr_2018_BkiIkBJ0b", "iclr_2018_BkiIkBJ0b", "iclr_2018_BkiIkBJ0b", "BkID51bWM", "rypaPgFxz", "HyDUxCKlf", "ryYlB69gz" ]
iclr_2018_HJPSN3gRW
Learning to navigate by distilling visual information and natural language instructions
In this work, we focus on the problem of grounding language by training an agent to follow a set of natural language instructions and navigate to a target object in a 2D grid environment. The agent receives visual information through raw pixels and a natural language instruction telling what task needs to be achieved. Other than these two sources of information, our model does not have any prior information about the visual and textual modalities and is end-to-end trainable. We develop an attention mechanism for multi-modal fusion of visual and textual modalities that allows the agent to learn to complete the navigation tasks and also achieve language grounding. Our experimental results show that our attention mechanism outperforms the existing multi-modal fusion mechanisms proposed to solve the above-mentioned navigation task. We demonstrate through the visualization of attention weights that our model learns to correlate attributes of the object referred to in the instruction with visual representations, and also show that the learnt textual representations are semantically meaningful as they follow vector arithmetic and are also consistent enough to induce translation between instructions in different natural languages. We also show that our model generalizes effectively to unseen scenarios and exhibits zero-shot generalization capabilities. In order to simulate the above-described challenges, we introduce a new 2D environment for an agent to jointly learn visual and textual modalities.
rejected-papers
This paper was reviewed by 3 expert reviewers and received largely negative reviews, with concerns about the toy-ish nature of the 2D environments and limited novelty. Since ICLR18 received multiple papers on similar topics, we took additional measures to ensure that similar papers were judged under the same criteria. Specifically, we asked reviewers of (a) this paper and (b) a concurrent submission that also studies language grounding in 2D environments to provide opinions on (b) and (a), respectively. Unfortunately, while the two papers may be on a similar topic and both work in 2D environments, we received unanimous feedback that (b) was of much higher quality ("comparison with multiple baselines, better literature review, no bold claims about visual attention, etc."). We realize this may be disappointing but we encourage the authors to incorporate reviewer feedback to make their manuscript stronger.
train
[ "SyHpaZUrG", "B1ji6W8Bz", "BJLtabUHf", "HJ57T-UBz", "HyE1BPlrM", "S1ZnEPxHM", "S1UE88gSM", "rku1fPgSM", "rkf84UgSz", "BJZpwtUNM", "BJtd58mlM", "BkLPOycez", "SyXYbebMG", "ryZgGvaQz", "SJgErPTQz", "S19J6I6mG", "rJlcD8p7G", "S1z0sST7G", "S1C4UyGGf" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "public" ]
[ "We have added more results showing significant improvement over baseline both in terms of accuracy and speed of convergence. \nAlso added preliminary results of hard scenario which looks promising ... ", "We have added more results showing significant improvement over baseline both in terms of accuracy and speed of convergence. \nAlso added preliminary results of hard scenario which looks promising ... ", "We have added more results showing significant improvement over baseline both in terms of accuracy and speed of convergence. \nAlso added preliminary results of hard scenario which looks promising ... ", "We have added more results showing significant improvement over baseline both in terms of accuracy and speed of convergence. \nAlso added preliminary results of hard scenario which looks promising ... ", "We have updated the paper with results on vizdoom based 3D environment comparing our approach with other baseline in the appendix section. We show that our method converges much faster compared to the baseline.", "We have updated the paper with results on vizdoom based 3D environment comparing our approach with other baseline in the appendix section. We show that our method converges much faster compared to the baseline.", "We have updated the paper with results on vizdoom based 3D environment comparing our approach with other baseline in the appendix section. We show that our method converges much faster compared to the baseline.", "In comparison to ICLR submission ID 235 we would claim that our environment is comparatively harder specifically because of the below mentioned reasons. \n\nIncreased grid size: \nGrid size in our case is 10x10 as compared to 7x7 used by them.\n\nNumber of objects present concurrently in the environment: \nThe number of objects in the map ranges from 3 to 6 as compared to 1 to 5 used by them. More number of distractor objects makes it difficult for the agent to comprehend the correct goal.\n\nIncreased complexity of natural language instructions: \nWe have two sentence instructions and sentence length varies from 3 to 18 as compared to 2 to 13 in theirs. \n\nSize attribute of objects: \nIn addition to having different instances of the same object that differ in their color (for example : green apple, red apple), we also have a size attribute associated with every object which can be either small (1x1), medium (2x2) or large (3x3). For example, if the environment has small red apple, medium red apple and large red apple and the sentence is “There are multiple red apple. Go to larger one.”, the agent receives positive reward only on successfully reaching the 3x3 red apple. On the other hand, if the environment only has small red apple and medium red apple with the same instruction, the agent receives positive reward only on successfully reaching the 2x2 red apple. \n\nWe would like to highlight that our multimodal attention mechanism converged out of the box without any additional effort when we increased the vocabulary size from 40 (at the time of ICLR submission) to 72 (at the end of rebuttal period) and the code for the same has been updated in our Github repository. The number of objects can be further increased by picking up images of different objects from Google images or any other publicly available image source (on which we are currently working on). 
\nAlso in case of 3d environments we provide experimental verification that our model works in 3d unlike submission 235.\n\nOur contribution in this work should also be seen from the implementation perspective as our model trains on a GPU, our environment is thread compatible (unlike the XWORLD2D used by them, source : https://github.com/PaddlePaddle/XWorld)\n", "We would like to mention that we have now evaluated our approach on modified VizDoom 3D environment used by Chaplot et al. and have observed encouraging results. To our knowledge this is the only multimodal fusion method that works both on 2d and 3d environments. We also observed a minimum of 2x speed up in convergence compared to Chaplot et al. We have added these results in the appendix section of our paper and would keep on updating them as and when we get them.", "\"We have updated the paper(section 2) highlighting how our environment is more complex than other 2d environments like the concurrent iclr submission ID 235 ...\"\n\nThis claim is unsubstantial. The author made this claim without providing concrete statistics for comparison. As far as I can tell, submission 235 has a vocabulary size of 186, totaling ~1.6 million different sentences. There are 115 different object classes with 3 instances for each class. In contrast, this submission only has around 30 object classes. And from Appendix 9.2, I can see that the total number of sentences is far less than 1 million. Overall, even though the authors try to highlight their differences with several concurrent submissions, the arguments seem not convincing. ", "Paper summary: The paper tackles the problem of navigation given an instruction. The paper proposes an approach to combine textual and visual information via an attention mechanism. The experiments have been performed on a 2D grid, where the agent has partial observation.\n\nPaper Strengths:\n- The proposed approach outperforms the baselines.\n- Generalization to unseen combination of objects and attributes is interesting.\n\nPaper Weaknesses:\nThis paper has the following issues so I vote for rejection: (1) The experiments have been performed on a toy environment, which is similar to the environments used in the 80's. There is no guarantee that the conclusions are valid for slightly more complex environments or real world. I highly recommend using environments such as AI2-THOR or SUNCG. (2) There is no quantitative result for the zero-shot experiments, which is one of the main claims of the paper. (3) The ideas of using instructions for navigation or using attention for combining visual and textual information have been around for a while. So there is not much novelty in the proposed method either. (4) References to attention papers that combine visual and textual modalities are missing.\n\nMore detailed comments:\n\n- Ego-centric is not a correct word for describing the input. Typically, the perspective changes in ego-centric views, which does not happen in this environment.\n\n- I do not agree that the attention maps focus on the right objects. Figures 6 and 7 show that the attention maps focus on all objects. The weights should be shown using a heatmap to see if the model is attending more to the right object.\n\n- I cannot find any table for the zero-shot experiments. In the rebuttal, please point me to the results in case I am missing them.\n\n\nPost Rebuttal:\nI will keep the initial rating. The environment is too simplistic to draw any conclusion from. 
The authors mention other environments are unstable, but that is not a good excuse. There are various environments that are used by many users. ", "Interesting Problem, but Limited Novelty and Flawed Evaluation\n\n\nThe paper considers the problem of following natural language instructions given a first-person view of an a priori unknown environment. The paper proposes a neural architecture that employs an RNN to encode the language input and a CNN to encode the visual input. The two modalities are fused and fed to an RNN policy network. The method is evaluated on a new dataset consisting of short, simple instructions conveyed in simple environments.\n\nThe problem of following free-form navigation instructions is interesting and has received a fair bit of attention, previously with \"traditional\" structured approaches (rule-based and learned) and more recently with neural models. Unlike most existing work, this paper reasons over the raw visual input (vs. a pre-processed representation such as a bag-of-words model). A notable exception is the work of Chaplot et al. 2017, which addresses the same problem with a nearly identical architecture (see below). Overall, this paper constitutes a reasonable first pass at this problem, but there is significant room for improvement to address issues related to the stated contributions and flawed evaluations.\n\nThe paper makes several claims regarding the novelty and expressiveness of the model and the contributions of the paper that are either invalid or not justified by the experimental results. As noted, a neural approach to instruction following is not new (see Mei et al. 2016), nor is a multimodal fusion architecture that incorporates raw images (see Chaplot et al.). The paper needs to make the contributions and novelty relative to existing methods clear (e.g., those stated in the intro are nearly identical to those of Mei et al. and Chaplot et al.). This includes discussion of the attention mechanism, for which the contributions and novelty are justified only by simple visualizations that are not very insightful. Related, the paper omits a large body of work in language understanding from the NLP and robotics domains, e.g., the work of Yoav Artzi, Thomas Howard, and Stefanie Tellex, among others (see below). While the approaches are different, it is important to describe this work in the context of these methods.\n\n\nThere are important shortcomings with the evaluation. First, one of the two scenarios involves testing on instructions from the training set. The test set should only include held-out environments and instructions, which the paper incorrectly refers to as the \"zero-shot\" scenario. This test set is very small, with only 19 instructions. Related, there is no mention of a validation set, and the discussion seems to suggest that hyperparameters were tuned on the test set. Further, the method is compared to incomplete implementations of existing baselines that admittedly don't attempt to replicate the baseline architectures. Consequently, it isn't clear what, if anything, can be concluded from the evaluation.\n\n\n\nComments/Questions\n\n* The action space does not include an explicit stop action. Instead, a run is considered to be finished either when the agent reaches the destination or a timeout is exceeded. This is clearly not valid in practice. The model should determine when to stop, as with existing approaches.\n\n* The paper makes strong claims regarding the sophistication of the dataset that are unfounded. 
Despite the claims, the environment is rather small and the instructions almost trivially simple. For example, compare to the SAIL corpus that includes multi-sentence instructions with an average of 5 sentences/instruction (vs. 2); 37 words/instruction (vs. a manual cap of 9); and a total of 660 words (vs. 40); and three \"large\" virtual worlds (vs. 10x10 grids with 3-6 objects).\n\n* While the paper makes several claims regarding novelty, the contributions over existing approaches are unclear. For example, Chaplot et al. 2017 propose a similar architecture that also fuses a CNN-based representation of raw visual input with an RNN encoding of language, the result of which is fed to a RNN policy network. What is novel with the proposed approach and what are the advantages? The paper makes an incomplete attempt to evaluate the proposed model against Chaplot et al., but without implementing their complete architecture, little can be inferred from the comparison.\n\n* The paper claims that the fusion method realizes a *minimalistic* representation, but this statement is only justified by an experiment that involves the inclusion of the visual representation, but it isn't clear what we can conclude from this comparison (e.g., was there enough data to train this new representation?).\n\n* It isn't clear that much can be concluded from the attention visualizations in Figs. 6 and 7, particularly regarding its contribution. Regarding Fig 6. the network attends to the target object (large apple), but not the smaller apple, which would be necessary to reason over their relative size. Further, the attention figure in Fig. 7(b) seems to foveate on both bags. In both cases, the distractor objects are very close to the true target, and one would expect the behavior to be similar irrespective of which one was being attended to.\n\n* The conclusion states that the method is \"highly flexible\" and able to handle a \"rich set of natural language instructions\". Neither of these claims are justified by the discussion (please elaborate on what makes the method \"highly flexible\", presumably the end-to-end nature of the architecture) or the experimental results.\n\n* The significance of randomly moving non-target objects that the agent encounters is unclear. 
What happens when the objects are not moved, as in real scenarios?\n\n* A stated contribution is that the \"textual representations are semantically meaningful\" but the importance is not justified.\n\n* Figure captions should appear below the figure, not at top.\n\n* Figures and tables should appear as close to their first reference as possible (e.g., Table 1 is 6 pages away from its reference at the beginning of Section 7).\n\n\n* Many citations should be enclosed in parentheses.\n\n\n\nReferences:\n\n* Artzi and Zettlemoyer, Weakly Supervised Learning of Semantic Parsers for Mapping Instructions to Actions, TACL 2013\n\n* Howard, Tellex, and Roy, A Natural Language Planner Interface for Mobile Manipulators, ICRA 2014\n\n* Chung, Propp, Walter, and Howard, On the performance of hierarchical distributed correspondence graphs for efficient symbol grounding of robot instructions, IROS 2015\n\n* Paul, Arkin, Roy, and Howard, Efficient Grounding of Abstract Spatial Concepts for Natural Language Interaction with Robot Manipulators, RSS 2016\n\n* Tellex, Kollar, Dickerson, Walter, Banerjee, Teller and Roy, Understanding natural language commands for robotic navigation and mobile manipulation, AAAI 2011", "**Paper Summary**\nThe paper studies the problem of navigating to a target object in a 2D grid environment by following given natural language description as well as receiving visual information as raw pixels. The proposed architecture consists of a convoutional neural network encoding visual input, gated recurrent unit encoding natural language descriptions, an attention mechanism fusing multimodal input, and a policy learning network. To verify the effectiveness of the proposed framework, a new environment is proposed. The environment is 2-D grid based and it consists of an agent, a list of objects with different attributes, and a list of obstacles. Agents perceive the environment throught raw pixels with a limited visible region, and they can perform actions to move in the environment to reach target objects.\n\nThe problem has been studied for a while and therefore it is not novel. The proposed framework is incremental. The proposed environment is trivial and therefore it is unclear if the proposed framework is able to scale up to a more complicated environment. The experiemental results do not support several claims stated in the paper. Overall, I would vote for rejection.\n\n\n - This paper solves the problem of navigating to the target object specified by language instruction in a 2D grid environment. It requires understanding of language, language grounding for visual features, and navigating to the target object while avoiding non-target objects. An attention mechanism is used to map a language instruction into a set of 1x1 convolutional filters which are intended to distinguish visual features described in the instruction from others. The experimental results show that the proposed method performs better than other methods.\n\n\n - This paper presents an end-to-end trainable model to navigate an agent through visual sources and natural language instructions. The model utilizes a proposed attention mechanism to draw correlation between the objects mentioned in the instructions with deep visual representations, without requiring any prior knowledge about these inputs. The experimental results demonstrate the effectiveness of the learnt textual representation and the zero-shot generalization capabilities to unseen scenarios. 
\n\n\n**Paper Strengths**\n- The paper proposes an interesting task which is a navigation task with language instructions. This is important yet relatively unexplored.\n- The implementation details are included, including optimizers, learning rates with weight decay, numbers of training epochs, the discount factor, etc. \n- The attention mechanism used in the paper is reasonable and the learned language embedding clearly shows meaningful relationships between instructions.\n- The learnt textual representation follows vector arithmetic, which enables the agent to perceive unseen instructions as a new combination of the attributes and perform zero-shot generalization.\n\n\n\n**Paper Weaknesses**\n- The problem of following natural language descriptions together with visual representations of environments is not completely novel. For example, both the problem and the proposed method are similar to those already introduced in the Gated Attention method (Chaplot et al., 2017). Although the proposed method performs better than the prior work, the approach is incremental. \n\n- The proposed environment is simple. The vocabulary size is 40 and the longest instruction only consists of 9 words. Whether the proposed framework is able to deal with more complicated environments is not clear. The experimental results shown in Figure 5 are not convincing, in that the proposed method only took less than 20k iterations to perform almost perfectly. The proposed environment is small and simple compared to the related work. It would be better to test the proposed method at a similar scale to the existing 3D navigation environments (Chaplot et al., 2017 and Hermann et al., 2017).\n\n- The novelty of the proposed framework is unclear. This work is not the first one which proposes a multimodal fusion network incorporating a CNN architecture dealing with visual information and a GRU architecture encoding language instructions. Also, the proposed attention mechanism is an obvious choice.\n\n- The shown visualized attention maps are not enough to support the contribution of proposing the attention mechanism. It is difficult to tell whether the model learns to attend to the correct objects. Also, the effectiveness of incorporating the attention mechanism is unclear.\n\n- The paper claims that the proposed framework is flexible and is able to handle a rich set of natural language descriptions. However, the experimental results are not enough to support the claim.\n\n- The presentation of the experiments is not space-efficient at all.\n\n- References to the related papers which fuse multimodal data (vision and language) are missing.\n\n- Compared to the suggested page limit of 8 pages, 13 pages is a bit too long.\n\n- Stating captions of figures above figures is not recommended.\n\n- It would be better to show where each 1x1 filter for multimodal fusion attends on the input image. Ideally, one filter should attend to the target object and others should attend to non-target objects. However, I wonder how the RNN can generate filters to detect non-target objects given an instruction. Although Figure 6 and Figure 7 try to show insights about the proposed attention model, they don’t tell which kernel is in charge of which visual feature. Blurred attention maps in Figures 6 and 7 make it hard to interpret the behavior of the model. \n\n- The graphs shown in Figure 5 are hard to interpret because of their large variance. 
It would be better to smooth the curves, so that the methods can be compared clearly.\n\n- For zero-shot generalization evaluation, there is no detail about the training steps and comparisons to other methods.\n\n- A highly related paper (Hermann et al., 2017) is missing in the references.\n\n- Since the instructions are simple, the model does not require an attention mechanism on the textual sources. If the framework can take more complex language, it might be worthwhile to try a visual-text co-attention mechanism. Such a demonstration would be more convincing.\n\n- The attention maps of different attributes are not as clear as the paper stated. Why do we need several “non-target” objects highlighted if one can learn to consolidate all of them?\n\n- The interpretation of n in the paper is vague; the authors should also show qualitatively why n=5 is better than n=1,10. If the attention maps learnt are really focusing on different attributes, given more and more objects, shouldn’t n=10 have more information for the policy learning?\n\n- The unseen scenario generalization should also include texture changes on the grid environment and/or new attribute combinations on non-target objects to be more convincing.\n\n- The contribution in the visual part is marginal.\n\n\n**Preliminary Evaluation**\n- The modality fusion technique which leads to the attention maps is an effective approach that seems to work well; however, the authors should present a more thorough ablation analysis. The overall architecture is elegant, but its capability to be extended to more complex environments is in doubt. The vector arithmetic of the learnt textual embedding is the key component enabling zero-shot generalization, but the effectiveness of this method is not convincing when more complex instructions, containing object-object relations or interactions, are perceived by the agent. \n", "We would like to thank you for your insightful comments. We have tried to address some of the concerns below.\n\n- The experiments have been performed on a toy environment...\n\nResponse - We have updated the paper (section 2) highlighting how our environment is more complex than other 2D environments like the one in the concurrent ICLR submission ID 235, which focuses on the task of language grounding. We tried performing the experiments over the AI2-THOR environment but found the environment to be unstable, where the game would often stall. We have further increased the complexity of our environment by working on a larger vocabulary set as well as on complex instructions.\n\n- There is no quantitative result for the zero-shot experiment\n\nResponse - We have added the results in a tabular format in our latest revision (Table 1).\n\n- The ideas of using instructions for navigation or using attention for combining visual and textual information have been around for a while...\n\nResponse - Our main aim was not to use instructions for navigation but rather to make an agent understand natural language, for which we took the task of navigation. Also, we claim that our mechanism of fusing the visual and textual modalities is new because the same has not been used by researchers before. 
We had to come up with a new approach because the other approaches were not converging on our environment.\n\n-References to attention papers that combine visual and textual modalities are missing.\n\nResponse - We have updated the references.\n\n- I do not agree that the attention maps focus on the right objects...\n\nResponse - We have updated the plots corresponding to attention maps highlighting how the agent might be using arithmetic over the maps to figure out its policy.", "We have updated our paper with the following improvements:- \n\n1) Increased vocabulary size to 72 with new objects and new words present in instructions.\n\n2) Complex instructions - The agent now responds to 'Go to former/latter' and 'IF.. ELSE..' types of sentences. The maximum number of words in an instruction is now 18 compared to 9 in our previous version.\n\n3) We have updated attention maps for better visualization. We show how the agent seems to be using simple arithmetic over the attention maps to figure out the policy.\n\n4)We show the importance of learning good grounding by attempting to do translation from English to French and vice-versa.\n\n5) The paper has been updated to include more references and highlighting how our approach is different than the concurrent submissions attempting to do language grounding. ", "We would like to thank you for your comments. We have tried to address the concerns raised by you below.\n\n- The problem of following natural language descriptions together with visual representations of environments is not completely novel...\n \nResponse - We are not claiming that the problem is novel but the proposed method is simpler and trains faster compared to other approaches. Also, we found that all the mentioned approaches including Gated Attention method (Chaplot et al., 2017) didn’t work well in our simple 2d environment. \nOur simpler model consequently gives us insight into how the network is working with the help of internal attention maps. We have now shown in the paper that the network has most likely evolved a simple attention arithmetic logic to figure out which object it should go to.\nAnother interesting result is the language translation capability the network has evolved without having trained on any parallel corpus.\n \n- The proposed environment is simple....\n\nResponse - We have updated our paper with results over a larger vocabulary size of 72 with new objects and more complex sentences whereby the maximum words in an instruction are now 18.\n \n-The novelty of the proposed framework is unclear...\n \nResponse - We would like to mention that for developing an end-to-end trainable multimodal deep-learning system the two basic components of\n1. \tCNN based vision module\n2. \tRNN based language module\nwould be common across most papers. Approaches would however differ on fusion module which combines information from these modules to do the task.\nTo the best of our knowledge, other researchers haven’t mentioned this attention based fusion approach in their works either as a proposal or as a baseline. In practice, this approach has worked better than all other methods for our task. It has also lead to more interesting results as mentioned earlier.\n \n\n- The shown visualized attention maps are ...\n\nResponse - We have added new results which suggest that the network has evolved a simple attention arithmetic logic to figure out which object should it be interested in. 
\nThe effectiveness of attention is clear because it leads to a smaller, faster and a better performing model.\n\n\n-The paper claims that the proposed framework is flexible ...\n\nResponse - We have added more complex sentences in our latest revision .\n\n- It would be better to show where each 1x1 filter for multimodal fusion attends on the input image...\n\nResponse - We show each 1x1 filters output overlapped on the input image on figure 6,7. We think RNN is generating one filter which matches the target’s characteristics and other filters without those characteristics. In the paper the 6(d) and 7(d) filter focuses on possible targets and others on non targets. We have added 6(e) and 7(e) figures generated by plausible attention map arithmetic (for lack of a better term) which in our finding consistently seems to be the target object. \n\n- For zero-shot generalization evaluation, there is no detail about the training steps and comparisons to other methods.\n\nResponse - We have not added comparison to other methods as they performed very poorly on training set.\n\n- A highly related paper (Hermann et al., 2017) is missing in the references.\n\nResponse - Referenced\n\n- Since the instructions are simple, the model does not require attention mechanism on the textual sources...\n\nResponse - We will make this a part of our future work\n\n- The attention maps of different attribute is not as clear as the paper stated. Why do we need several “non-target” objects highlight if one can learn to consolidate all of them?\n\nResponse - Yes, we have seen 2 or 3 attention maps works well and supports the argument of consolidated maps. But we need to complete all experiments before stating that in our paper.\n\n- The interpretation of n in the paper is vague, the authors should also show qualitatively why n=5 is better than that of n=1,10...\n\nResponse - In the egocentric view there is a limited number of objects that can appear which may limit the number of attention maps needed. We need to do further experiments by increasing the ego-centric vision size and changing the number of attention maps.\n\n- The unseen scenario generalization should also include texture change on the grid environment and/or new attribute combinations on non-target objects to be more convincing.\n\nResponse - Is a part of our future work\n\n", "We would like to thank you for your insightful comments and suggestions regarding our work. We have tried to address the issues raised by you below. Our focus in this work was to achieve language grounding by make an agent understand natural language in a simulated environment, though further down the road we would want to take our work to real world. We have also updated the paper with additional information and modified language.\n \n1) \"The action space does not include an explicit stop action...\"\n\nResponse – Since our focus in this work is on language grounding, we did not need to include an explicit stop action. In our case, we are automatically getting a stop signal from our environment through the only positive reward signal. Such an approach is adopted by contemporary works on language grounding(Interactive Grounded Language Acquisition and Generalization in a 2D World, ICLR conference paper235).\n \n\n2) \"Despite the claims, the environment is rather smal......\"\n\nResponse – Our environment can be configured to increase the vocabulary, add many more instructions and increase the complexity of the environment accordingly. 
We have increased the vocabulary to a total of 72 words and have also added several additional instructions (Section 4), resulting in a maximum of 18 words per instruction. We would also like to point out that the environment is open source and, over time, through public contributions, its size and complexity can be increased considerably.\n\n3) \"While the paper makes several claims regarding novelty, the contributions over existing approaches are unclear.....\"\n\nResponse - Although our paper also uses a multi-modal fusion approach, the exact fusion mechanism differs from other similar works. Chaplot et al. directly take a Hadamard product between the textual and visual embeddings, whereas we generate multiple textual embeddings by passing the GRU features to multiple FC layers and then use each of them to convolve with the visual features, thus generating multiple attention maps (a small illustrative sketch is given below). The idea behind generating multiple attention maps was to let each of them capture different environmental features, such as which objects are necessary and which are not. Furthermore, we found that the attention mechanism proposed by Chaplot et al. performed poorly on our environment (we have replicated the exact architecture now), and thus we had to propose a different fusion mechanism.\n\n4) \"The paper claims that the fusion method realizes a *minimalistic* representation...\"\n\nResponse - We say that this minimalistic representation is important because it leads to better representations of words, with less memory overhead. We found that the attention maps, when concatenated with the visual features for finding the policy, did not lead to convergence.\n\n5) \"It isn't clear that much can be concluded from the attention visualizations....\"\n\nResponse – We have added attention maps in our latest revision in which objects are far apart. We show in the paper how the agent uses simple logic over the attention maps to navigate to the correct target object.\n\n6) \"The conclusion states that the method is \"highly flexible\"....\"\n\nResponse - We would like to point out that we state that our environment is highly flexible (not the method). We say this because it’s very easy to add new objects to the environment through the JSON file. Moreover, one can also add a new set of instructions via minimal changes to the code.\n\n7) \"The significance of randomly moving non-target objects that the agent encounters is unclear...\"\n \nResponse – The significance is twofold: first, changing the position helped the model converge faster than fixing it. Secondly, we are helping the agent learn to avoid a non-target object through its visual appearance rather than by memorizing its location. This property can be controlled via the value of the 'hard' attribute of each of the objects in the JSON.\n\n8) \"A stated contribution is that the \"textual representations are semantically meaningful\" but the importance is not justified...\"\n\nResponse - We say that the textual representations being semantically meaningful is important because it can make the agent respond to new combinations of words that it had not seen before in training. If the agent knows what the words 'blue' and 'bag' actually mean, then it can respond to 'Go to blue bag' even without ever seeing it in training. We show this in the last paragraph of Section 7, wherein the agent responds to 'Go to [size][object]' types of sentences even without seeing them during training. 
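To make the fusion mechanism described in point 3 concrete, here is a minimal PyTorch-style sketch of the idea; the tensor sizes, layer names and number of attention maps below are illustrative placeholders rather than the exact values from our code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionFusion(nn.Module):
    """Fuse a GRU sentence embedding with a CNN feature map via 1x1 convolutions."""
    def __init__(self, text_dim=64, vis_channels=32, n_maps=5):
        super().__init__()
        # One FC layer per attention map; each produces a 1x1 conv kernel
        # over the visual channels.
        self.kernels = nn.ModuleList(
            [nn.Linear(text_dim, vis_channels) for _ in range(n_maps)]
        )

    def forward(self, text_emb, vis_feats):
        # text_emb:  (B, text_dim)  -- final GRU hidden state
        # vis_feats: (B, C, H, W)   -- CNN features of the egocentric view
        maps = []
        for fc in self.kernels:
            k = fc(text_emb)                          # (B, C)
            k = k.view(-1, vis_feats.size(1), 1, 1)   # (B, C, 1, 1) kernels
            # Convolve each sample with its own text-derived kernel.
            m = torch.stack([
                F.conv2d(vis_feats[i:i + 1], k[i:i + 1])  # (1, 1, H, W)
                for i in range(vis_feats.size(0))
            ]).squeeze(1)                             # (B, 1, H, W)
            maps.append(m)
        return torch.cat(maps, dim=1)                 # (B, n_maps, H, W) attention maps

# Example: batch of 2, a 7x7 visual grid, 5 attention maps fed to the policy.
fusion = AttentionFusion()
att = fusion(torch.randn(2, 64), torch.randn(2, 32, 7, 7))
print(att.shape)  # torch.Size([2, 5, 7, 7])
```

The point of the design is that each text-derived kernel can respond to a different subset of object attributes, so the stacked maps form a compact, interpretable input for the policy network.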
We even attempted to do translation (section 7.1) to show the importance of language grounding.\n \n\n \n", "We would like to thank the people for taking an attempt to reproduce our results. We have now updated our github repository with the codes suggesting how to get the attention maps mentioned in our paper. We have also updated our paper with the results over zero shot instructions.", "****Summary****\n\nThe paper focuses on solving RL problem in which an agent learns to navigate to a particular target object in a given 2D environment. The environment is specified as an image, i.e. raw pixel values, and the target object is inferred from a text received by agent at the beginning of the episode. To learn the policy on both input image and the text, the paper proposes a multimodal framework which combines features from both the image and the text to generate attention maps based on which the agent then makes decisions which are sufficient to learn an effective policy.\n\nThe paper proposes a CNN architecture to extract features from the egocentric view of the environment. For extracting features from the textual description of the target, a sequence of GRUs is used, processing a concatenation of multiple one-hot encodings each corresponding to word in the input text. The embedding obtained from GRU is then passed through multiple fully connected layers. Finally, to combine the features of image and text, the paper claims that best features are obtained by convolving both textual and visual features at the final layer. A3C architecture was used to learn the policy. \n\nOur reviews are based on original paper submitted at ICLR 2018.\n\n*****Strengths*****\nAn honest effort is made to ensure transparency of the work and the involved experiments. The training phase code released on GitHub matches with the explanation and parameters provided in the paper.\n\nBoth the model and the runtime environment were shown to support multithreading, as well as operation on a GPU.\n\nThe proposed model achieves excellent performance: a mean reward of 0.95, and an accuracy of 0.92 under unseen scenarios, which is higher than the comparable existing methods.\n\nThrough training, the model is able to generalize the semantics and the meaning of textual instructions as shown by the ability to apply vector arithmetic on the encoded vectors to generate other vectors. A similar combination of vectors were shown to result in similar model behaviour as intended.\n\nThe final policy learnt by the agent is able to reach the target objects, thus proving the model capability to capture the nuances of shape, size, location and type of objects.\n\n*****Weaknesses*****\nThe code lacks comments and documentation. Some function are unused, and hard to reintegrate without original knowledge of the developers (e.g. generation of attention maps). Poor naming conventions (hard to interpret).\n\nModel implementation lacks testing stage. How to force the model to test an environment with a particular instruction? How to input particular vector to the model and see its behaviour?\n\nZero-shot generalization lacks proof. Code/Paper does not talk about implementation details.\n\nModel training stage lacks stopping criteria. Paper presents performance metrics without specifying the number of episodes the model was trained for.\n\nNovelty claim is misleading. Similar work exists in the literature [1] [2].\n\n*****Reproducibility Results*****\nWe achieved similar model performance results as reported in the original paper. 
However, the number of completed training episodes at the time the findings were presented was not specified, which makes it hard to reproduce the exact behaviour. \n\nThe reward does not converge at the claimed rate; we trained the model for 50K episodes and our observations indicate that the reward does not converge at the rate depicted in Figure 5 of the original paper; it takes more than 10K iterations for the reward to rise above 0.5.\n\nThe paper does not report the accuracy obtained in the zero-shot generalization scenarios. In general, the paper says very little about how zero-shot generalization was ensured (it requires special treatment), and there is no code provided to apply/test this.\n\nThe paper does not mention the methodology used for creating the visualization of the attention maps. The attention maps, as mentioned in the paper, are of dimension 7x7x5, where 5 FC layers are used on the embeddings generated from the GRU, whereas the size of an input image is 84x84x3. The mapping of the corresponding weights from the attention layer back to the original image is not discussed. The results of our improvised process to visualize the attention maps did not match the claimed results; the generated maps do not necessarily emphasize points of attention.\n\nWe were able to successfully confirm all claims about vector arithmetic, namely parallel vectors resulting from similar vector combinations, and generating vectors using other vectors.\n\n*****References*****\n\n[1] Haonan Yu, Haichao Zhang, and Wei Xu. A deep compositional framework for human-like language acquisition in virtual environment.\n\n[2] Devendra Singh Chaplot, Kanthashree Mysore Sathyendra, Rama Kumar Pasumarthi, Dheeraj Rajagopal, and Ruslan Salakhutdinov. Gated-attention architectures for task-oriented language grounding.\n" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 5, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 3, -1, -1, -1, -1, -1, -1 ]
[ "BJtd58mlM", "BkLPOycez", "rkf84UgSz", "SyXYbebMG", "SyXYbebMG", "BkLPOycez", "BJtd58mlM", "BJZpwtUNM", "iclr_2018_HJPSN3gRW", "ryZgGvaQz", "iclr_2018_HJPSN3gRW", "iclr_2018_HJPSN3gRW", "iclr_2018_HJPSN3gRW", "BJtd58mlM", "iclr_2018_HJPSN3gRW", "SyXYbebMG", "BkLPOycez", "S1C4UyGGf", "iclr_2018_HJPSN3gRW" ]
iclr_2018_HJNGGmZ0Z
What is image captioning made of?
We hypothesize that end-to-end neural image captioning systems work seemingly well because they exploit and learn ‘distributional similarity’ in a multimodal feature space, by mapping a test image to similar training images in this space and generating a caption from the same space. To validate our hypothesis, we focus on the ‘image’ side of image captioning, and vary the input image representation but keep the RNN text generation model of a CNN-RNN constant. We propose a sparse bag-of-objects vector as an interpretable representation to investigate our distributional similarity hypothesis. We found that image captioning models (i) are capable of separating structure from noisy input representations; (ii) experience virtually no significant performance loss when a high dimensional representation is compressed to a lower dimensional space; (iii) cluster images with similar visual and linguistic information together; (iv) are heavily reliant on test sets with a similar distribution as the training set; (v) repeatedly generate the same captions by matching images and ‘retrieving’ a caption in the joint visual-textual space. Our experiments all point to one fact: that our distributional similarity hypothesis holds. We conclude that, regardless of the image representation, image captioning systems seem to match images and generate captions in a learned joint image-text semantic subspace.
rejected-papers
Paper reviewed by three experts who have provided detailed feedback. All three recommend rejection, and this AC sees no reason to overrule their recommendation.
train
[ "S1wHgNwHz", "ryXXgEPHG", "SyZwTXVrz", "BybWlKXgz", "r1fNl7Cgz", "By5_q5y-z", "H1xPSyofM", "rkqoKB9fz", "HybctS5zf", "SkCgOrcGG", "Hk60UB5GM", "rJGFUBqfM", "S1P_G7Mfz" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "public" ]
[ "We appreciate and thank the reviewer for going through our rebuttal and revised manuscript and for the comments. However, we disagree with many of the points raised in the above review.\n\n> - \"End-to-end IC models are remarkably capable of separating structure from noisy input representations, as demonstrated by pseudo-random vectors\": This statement is factually incorrect and its empirical implications are not surprising. The pseduo random vectors (pg 5) are not noisy at all! First, they are deterministic mapping from image space to a vector space (this is the definition of a feature extractor). Secondly, and more importantly, they are generated using Gold (or ground-truth) object counts. By the very definition of this, they are NOT noisy! Your conclusion that such a representation works well for image captioning is not surprising. To any system using image features, all it really cares about is how good the mapping from image to vector (feature) space is. Your mapping is defined using ground truth counts. This claim is repeated throughout the paper - abstract, introduction and conclusion. In all three repititions, the authors claim incorrectly that the representation is `noisy'.\n\n\nWe claim pseudo-random vectors as being noisy representations as they are not `just’ one to one mapping from the image to vector feature space but they are actually a composition of object vectors (where objects are represented by random vectors). The composition specifically involves addition and the information about the number of occurrences of the objects --specifically: multiplication of random vectors per object by the number of object occurrences and then addition of vectors across multiple objects. The resultant composition is `noisy’ -- this can be seen both in Figure 1, where the initial representations are shown to not form any clusters, as well as in Figure 2(d), where the initial representations again form no clusters. We kindly refer the reviewer to the full tSNE plot for the initial representations of pseudo-random vectors here: https://github.com/anonymousiclr/HJNGGmZ0Z/blob/master/tsne_initial_pseudorandom_4000.png. Further, the conclusion doesn’t change even if we use predicted counts. \n\nDespite the review strongly stating it to be factually incorrect, we stand by our conclusions. We have repeated our claims of the representations in the projected space. We observe that the resultant framework has, to some extent, captured the compositional operation, despite the initial representation being difficult to decipher.\n\n\n> - \"A sparse, low-dimensional bags-of-objects representation can be used as a tool to investigate the contribution of images in IC; we demonstrated that such a vector is sufficient for generating good image captions\": Like I had mentioned in my review earlier - bag-of-objects based representations have been shown to be sufficient for generating good image captions (Fang et al., 2015).\n\nFirstly, Fang et al 2015 does not use neural end-to-end image captioning. Secondly, they used a high 1000-dimensional \"bag of surface-level-text-labels taken directly from captions\" space, while we show a *low* barely 80-dimensional *object category* space (that is objects occuring in the images) performs as well. 
We disagree with the reviewer that Fang et al have the same conclusions as ours.\n", "> - \"End-to-end IC models repeatedly generate the same captions by matching images in the joint visual-textual space and ‘retrieving’ a caption in the learned joint space\": Again, a similar claim was empirically shown by Devlin et al., by using a nearest neighbor technique for image captioning. They also showed that such a simple technique outperformed end-to-end captioning systems.\n\nDevlin et al. only show that a `nearest neighbour method’ works as well as end-to-end captioning systems. We show that end-to-end IC *is* sort of a nearest neighbour retrieval in the joint space. These are two completely different conclusions. We again respectfully disagree with the reviewer that the two are same or similar. \n\n> - \"End-to-end IC models rely on test sets with a similar distribution as the training set for generating good captions\": Please look at Sec 4.3.3 of \"Show and Tell: Lessons Learned from the 2015 MSCOCO Image Captioning Challenge\" where transfer experiments are shown across datasets, showing that similarity between train and test distributions is important from transfer.\n\nWe are aware of this and have explicitly mentioned Vinyals et al 2016 in Sec 4.4 of the our paper. We have looked carefully again at Vinyals et al 2016 - they only mention that the ``BLEU scores degrade by 10 points’’ and merely suggest that this *could* be because ``more differences in vocabulary and a larger mismatch’’. In our paper, we demonstrated that what they suggested is not exactly true. We show that there are only 8.6% out of vocabulary words in Flickr30k (around 8% vocabulary mismatch is true in the MSCOCO train v/s dev set). So there is more to the performance drop than just vocabulary mismatch. And thus, we further show that it is also because the `object’ distribution (from the image side) in MSCOCO is almost identical in train and test, compared to the larger differences between MSCOCO train vs. Flickr30k test. That is, the `types’ of images are maintained in MSCOCO evaluation sets, while they vary in Flickr30k test set. To our knowledge this is still a novel and an important claim. \n\n\n> There are also a few issues with the experimental setup.\n> - Table 1 shows nearly constant numbers across B-4, M, and S.\n\nWe do not understand why constant metric scores can indicate that it’s an “issue with our experimental setup”. In fact, we think the constant scores further strengthen our claim that the metrics do not capture what exactly happens as shown by several other papers that we cite. \n\n\n> - Table 1 shows that using Pool5 features for ResNet-152 performs better than softmax for ResNet-152. Now Figure 1 shows us that softmax ResNet-152 is better at discriminating between images based on object groups rather than Pool5 ResNet-152. A similar negative correlation exists between Table 1 and Figure 1 for the pairs (softmax ResNet-152, pseudo-random). 
The reason for introducing Figure 1 as stated in the paper is \"If the representation is informative for IC, then the representations should ideally semantically related images together, and in turn allow for relevant captions to be generated.\" This statement is clearly falsified by the pairs (and many more such exist within your results).\n\nWe state ‘If the representation is informative for IC, then the representations should ideally semantically related images together, and in turn allow for relevant captions to be generated’ as our hypothesis and we follow it up in the next section. Figure 1 merely shows how well the initial representations cluster with cosine distance as the metric. We don’t make any conclusions using Figure 1. \n\nAs our work mainly the deals with the evaluation of representational contributions we consider ICLR the best venue to disseminate our findings.\n", "After reading the other reviews, the discussion and the revised paper, I am not convinced of the contributions of the paper (even if I were to ignore the weakness in the experimental setup, as I explain later).\nLet's focus on the conclusion section of the paper (page 11) to see what the authors claim.\n\n- \"End-to-end IC models are remarkably capable of separating structure from noisy input representations, as demonstrated by pseudo-random vectors\": This statement is factually incorrect and its empirical implications are not surprising. The pseduo random vectors (pg 5) are not noisy at all! First, they are deterministic mapping from image space to a vector space (this is the definition of a feature extractor). Secondly, and more importantly, they are generated using Gold (or ground-truth) object counts. By the very definition of this, they are NOT noisy! Your conclusion that such a representation works well for image captioning is not surprising. To any system using image features, all it really cares about is how good the mapping from image to vector (feature) space is. Your mapping is defined using ground truth counts. This claim is repeated throughout the paper - abstract, introduction and conclusion. In all three repititions, the authors claim incorrectly that the representation is `noisy'.\n- \"A sparse, low-dimensional bags-of-objects representation can be used as a tool to investigate the contribution of images in IC; we demonstrated that such a vector is sufficient for generating good image captions\": Like I had mentioned in my review earlier - bag-of-objects based representations have been shown to be sufficient for generating good image captions (Fang et al., 2015).\n- \"End-to-end IC models repeatedly generate the same captions by matching images in the joint visual-textual space and ‘retrieving’ a caption in the learned joint space\": Again, a similar claim was empirically shown by Devlin et al., by using a nearest neighbor technique for image captioning. 
They also showed that such a simple technique outperformed end-to-end captioning systems.\n- \"End-to-end IC models rely on test sets with a similar distribution as the training set for generating good captions\": Please look at Sec 4.3.3 of \"Show and Tell: Lessons Learned from the 2015 MSCOCO Image Captioning Challenge\" where transfer experiments are shown across datasets, showing that similarity between train and test distributions is important from transfer.\n\nThere are also a few issues with the experimental setup.\n- Table 1 shows nearly constant numbers across B-4, M, and S.\n- Table 1 shows that using Pool5 features for ResNet-152 performs better than softmax for ResNet-152. Now Figure 1 shows us that softmax ResNet-152 is better at discriminating between images based on object groups rather than Pool5 ResNet-152. A similar negative correlation exists between Table 1 and Figure 1 for the pairs (softmax ResNet-152, pseudo-random). The reason for introducing Figure 1 as stated in the paper is \"If the representation is informative for IC, then the representations should ideally semantically related images together, and in turn allow for relevant captions to be generated.\" This statement is clearly falsified by the pairs (and many more such exist within your results).\n\nI do not think this paper is ready for publication and stick with \"needs work\".", "This paper analyzes the effect of image features on image captioning. The authors propose to use a model similar to that of Vinyals et al., 2015 and change the image features it is conditioned on. The MSCOCO captioning and Flickr30K datasets are used for evaluation.\n\nIntroduction\n- The introduction to the paper could be made clearer - the authors talk about the language of captioning datasets being repetitive, but that fact is neither used or discussed later.\n- The introduction also states that the authors will propose ways to improve image captioning. This is never discussed.\n\nCaptioning Model and Table 1\n- The authors use greedy (argmax) decoding which is known to result in repetitive captions. In fact, Vinyals et al. note this very point in their paper. I understand this design choice was made to focus more on the image side, rather than the decoding (language) side, but I find it to be very limiting. In this regime of greedy decoding it is hard to see any difference between the different ConvNet features used for captioning - Table 1 shows meteor scores within 0.19 - 0.22 for all methods.\n- Another effect (possibly due to greedy decoding + choice of model), is that the numbers in Table 1 are rather low compared to the COCO leaderboard. The top 50 entries have METEOR scores >= 0.25, while the maximum METEOR score reported by the authors is 0.22. Similar trend holds for other metrics like BLEU-4.\n- The results of Table 5 need to be presented and interpreted in the light of this caveat of greedy decoding.\n\nExperimental Setup and Training Details\n- How was the model optimized? No training details are provided. Did you use dropout? Were hyperparamters fixed for training across different feature sizes of VGG19 and ResNet-152? What is the variance in the numbers for Table 1?\n\nMain claim of the paper\nDevlin et al., 2015 show a simple nearest neighbor baseline which in my opinion shows this more convincingly. 
Two more papers from the same group which use also make similar observations - tweaking the image representation makes image captioning better: (1) Fang et al., 2015: Multiple-instance Learning using bag-of-objects helps captioning (2) Misra et al. 2016 (not cited): label noise can be modeled which helps captioning. This claim has been both made and empirically demonstrated earlier.\n\nMetrics for evaluation\n- Anderson et al., 2016 (not cited) proposed the SPICE metric and also showed how current metrics including CiDER may not be suitable for evaluating image captions. The COCO leaderboard also uses this metric as one of its evaluation metrics. If the authors are evaluating on the test set and reporting numbers, then it is odd that they `skipped' reporting SPICE numbers.\n\nChoice of Datasets\n- If we are thoroughly evaluating the effect of image features, doing so on other datasets is very important. Visual Genome (Krishnan et al., not cited) and SIND (Huang et al., not cited) are two datasets which are both larger than Flickr30k and have different image distributions from MSCOCO. These datasets should show whether using more general features (YOLO-9k) helps.\nThe authors should evaluate on these datasets to make their findings stronger and more valuable.\n\nMinor comments\n- Figure 1 is hard to read on paper. Please improve it.\n- Figure 2 is hard to read even on screen. It is really interesting, so improving the quality of this figure will really help.", "The paper claims that image captioning systems work so well, while most recent state of the art papers show that they produce 50% errors, so far from perfect.\n\nThe paper lacks novelty, just reports some results without proper analysis or insights.\n\nMain weakness of the paper:\n - Missing many IC systems citations and comparisons (see https://competitions.codalab.org/competitions/3221#results)\n - According to \"SPICE: Semantic Propositional Image Caption Evaluation\" current metrics used in image captioning don't correlate with human judgement.\n- Most Image Caption papers which use a pre-trained CNN model, do fine-tune the image feature extractor to improve the results (see Vinyals et al. 2016). Therefore correlation of the image features with the captions is weaker that it could be.\n- The experiments reported in Table1 are way below state-of-the-art results, there a tons of previous work with much better results, see https://competitions.codalab.org/competitions/3221#results\n - To provide a fair comparison authors, should compare their results with other paper results.\n - Tables 2 and 3 are missing the original baselines.\nThe evaluation used in the paper don't correlate well with human ratings see (SPICE paper), therefore trying to improve them marginally doesn't make a difference.\n- Getting better performance by switching from VGG19 to ResNet152 is expected, however they obtain worse results than Vinyals et al. 2016 with inception_v3. \n- The claim \"The bag of objects model clusters these group the best\" is not supported by any evidence or metric.\n\nOne interesting experiment but missing in section 4.4 would be how the image features change after fine-tuning for the captioning task.\n\n\nTypos:\n - synsest-level -> synsets-level", "This paper is an experimental paper. It investigates what sort of image representations are good for image captioning systems. 
\n\nOverall, the idea seems relevant and there are some good findings, but I am sure that the image captioning community is already aware of these findings.\n\nThe main issue of the paper is the lack of novelty. Even for an experimental paper, I would argue that novelty in the experimental methodology is an important factor. Unfortunately, I do not see any novel concept in the experimental setup.\n\nI recommend this paper for a workshop presentation.\n", "We have updated the paper with these salient changes: \n\n* re-written introduction\n* updated results with SPICE\n* updated sections 4.3, 4.4 and 4.5 with more support for the claims \n* re-written conclusion ", "> Main claim of the paper Devlin et al., 2015 show a simple nearest neighbor baseline which in my opinion shows this more convincingly. Two more papers from the same group also make similar observations - tweaking the image representation makes image captioning better: (1) Fang et al., 2015: Multiple-instance Learning using bag-of-objects helps captioning (2) Misra et al. 2016 (not cited): label noise can be modeled which helps captioning. This claim has been both made and empirically demonstrated earlier. Metrics for evaluation\n\nOnce again, we use the object representations as a tool for our investigation. Our aim is not to improve on the task.\n\n> - Anderson et al., 2016 (not cited) proposed the SPICE metric and also showed how current metrics including CIDEr may not be suitable for evaluating image captions. The COCO leaderboard also uses this metric as one of its evaluation metrics. If the authors are evaluating on the test set and reporting numbers, then it is odd that they `skipped' reporting SPICE numbers.\n\nWe have answered this before. We note, however, that our observations are also consistent with the numbers on the SPICE metric. \n\n> Choice of Datasets - If we are thoroughly evaluating the effect of image features, doing so on other datasets is very important. Visual Genome (Krishnan et al., not cited) and SIND (Huang et al., not cited) are two datasets which are both larger than Flickr30k and have different image distributions from MSCOCO. These datasets should show whether using more general features (YOLO-9k) helps. The authors should evaluate on these datasets to make their findings stronger and more valuable.\n\nSIND represents a very different type of data, where sentences compose a narrative. Different kinds of models are needed, and these are evaluated using different metrics. Visual Genome, on the other hand, is a subset of MSCOCO with a different kind of annotation (object-specific captions). We are interested in investigating the CNN-LSTM model in this paper, and while it may be applied to a different domain of the same task (e.g. image captioning on Flickr30k), it is not clear how this can be applied directly to a different set of tasks.\n\n\n> Minor comments - Figure 1 is hard to read on paper. Please improve it. - Figure 2 is hard to read even on screen. It is really interesting, so improving the quality of this figure will really help.\n\nWe have enlarged Figure 1.\n\nWe initially planned to add the full, high-resolution versions of Figure 2 in the appendix. Unfortunately, each t-SNE visualisation was around 18MB, which would increase the file size to over 100MB if we were to add all images (3 pairs before-after projection). We have added an anonymised external link in the updated version of the paper. 
The images can now be found here: https://github.com/anonymousiclr/HJNGGmZ0Z\n ", "> - The introduction to the paper could be made clearer\n\nWe have updated the introduction to make it clearer.\n\n> the authors talk about the language of captioning datasets being repetitive, but that fact is neither used or discussed later.\n\nIn our analysis we observed that in all cases, i.e., using any type of representation, there is only a small subset (20-30%) of the captions that are unique. This was mentioned in section 4.5 of our original submission of the paper. We have further clarified this section in the updated version.\n\n\n> The introduction also states that the authors will propose ways to improve image captioning. This is never discussed.\n\nWe do not promise to do that, but rather state that findings could help improve image captioning systems.\n\n> Captioning Model and Table 1 - The authors use greedy (argmax) decoding which is known to result in repetitive captions. In fact, Vinyals et al. note this very point in their paper. I understand this design choice was made to focus more on the image side, rather than the decoding (language) side, but I find it to be very limiting.\n> In this regime of greedy decoding it is hard to see any difference between the different ConvNet features used for captioning\n\nThis was purposefully done for determinism. We wanted to understand the best 'choice of words' by the model given a particular representation. \n\n\n> The top 50 entries have METEOR scores >= 0.25, while the maximum METEOR score reported by the authors is 0.22. Similar trend holds for other metrics like BLEU-4.\n\nOur model should be compared with the Neuraltalk model as it has the same settings. Other similar models (like Vinyals et al 2015) use ensembles and other engineering tricks that we are not interested in. \n\n> - The results of Table 5 need to be presented and interpreted in the light of this caveat of greedy decoding. Experimental Setup and Training Details - How was the model optimized? No training details are provided. Did you use dropout? Were hyperparameters fixed for training across different feature sizes of VGG19 and ResNet-152? What is the variance in the numbers for Table 1?\n\nOur settings are: \nLSTM with 128 dimensional word embeddings and 256 dimensional hidden representations\nDropout over LSTM of 0.8\nAdam for optimization. \nLearning rate = 4e-4\nWe’ll add the variance figures to an improved version of the paper.\n\n\n", "> The paper lacks novelty, just reports some results without proper analysis or insights.\n> Main weakness of the paper:\n> - Missing many IC systems citations and comparisons (see https://competitions.codalab.org/competitions/3221#results)\n\nWe stress that our evaluations are with respect to the model proposed by Karpathy et al 2015. Our goal is not to 'beat or break' systems but to understand the 'whys' and 'hows'.\n\n> - According to \"SPICE: Semantic Propositional Image Caption Evaluation\" current metrics used in image captioning don't correlate with human judgement.\n\nWe are not claiming explicitly that any of the metrics has good correlation with human judgements. As we mentioned before our focus on CIDEr is because a) the official evaluation script from MSCOCO contains only CIDEr, Meteor, BLEU and ROUGE, b) CIDEr is a metric that was officially developed for the task of image captioning, c) CIDEr is the official metric for MSCOCO, d) papers by Liu et al 2017, Kilickaya et al. 
2017 and Vedantam et al, 2015 (with the human correlation experiments over Flickr8k dataset) still state the importance of CIDEr as a metric for image captioning. We further note that we observe a similar trend as we found in CIDEr, so all our observations are still valid. \n\n\n> - Most Image Caption papers which use a pre-trained CNN model, do fine-tune the image feature extractor to improve the results (see Vinyals et al. 2016). Therefore correlation of the image features with the captions is weaker that it could be.\n\nWhile it is true that fine-tuning could have been helpful to bump performance, our paper deals with an exploration of representational properties. Vinyals et al. 2016 has shown that fine-tuning gives only a minor 1-point improvement for BLEU. This is also using an ensemble of models. We again state that our experiments are about understanding image captioning models.\n\n> - To provide a fair comparison, authors should compare their results with other paper results. - Tables 2 and 3 are missing the original baselines.\n\nWe will add the results from the comparable papers, even though our focus is not comparisons or to show performance improvements over other models. However, we do not understand what the reviewer means by “original baselines”. Could you please clarify?\n\n> The evaluation used in the paper don't correlate well with human ratings see (SPICE paper), therefore trying to improve them marginally doesn't make a difference.\n\nPlease see answer above regarding metrics. In addition, our focus is not to improve the performance of the system, but to interpret the 'how' and 'why' of the system. To this end, we have made significant progress.\n\n> - Getting better performance by switching from VGG19 to ResNet152 is expected, however they obtain worse results than Vinyals et al. 2016 with inception_v3.\n\nWe have not chosen Vinyals et al. 2016 since it uses ensembles and other clever engineering tricks. This would make it hard to answer the questions we ask in this paper -- namely, the contribution of image representation. Our results are comparable to those in Karpathy et al, 2015. We will add this into the table.\n\n> - The claim \"The bag of objects model clusters these group the best\" is not supported by any evidence or metric.\n\nWe believe that the reviewer has misunderstood the sentence. This sentence explains the observations in Figure 1 (more specifically Figure 1a). The figure shows that the bag of objects representation forms better clusters. It shows the cosine distances between each group for the bag of objects representation. We see from the figure that the bag of objects representations clusters these groups best. For example, the average image representation of “dog” correlates with images containing “dog” as a pair like “dog+person” and “dog+toilet”. We are aware that this is true for our given example, however we expect this to extrapolate over other examples in the dataset. \n\n\n\n> One interesting experiment but missing in section 4.4 would be how the image features change after fine-tuning for the captioning task.\n\nWe will do it as a future work, even though this does not allow us to answer our questions posed in this paper. \n\n\n", "> Overall, the idea seems relevant and there are some good findings but I am sure that image captioning community is already aware of these findings.\n> The main issue of the paper is the lack of novelty. 
Even for an experimental paper, I would argue that novelty in the experimental methodology is an important fact.\n\nOur claim is in the novel 'insights' into end-to-end model of image captioning models. Our empirical evaluations with multiple representations, visualizations and out of domain experiments reveal new and important insights that should be of interest to the community.\n\nWe kindly ask clarification from the reviewer regarding what is meant by 'novelty in experimental methodology'. \n", "We thank the reviewers for the comments. \n\nOur submission is based on the simple end-to-end model as proposed by Karpathy et al 2015. We use this model because its simplicity makes it easier to focus on the image component. We are interested in the interpretability of the image-captioning system rather than the performance on the task. In addition, more advanced models can be considered similar variants of Karpathy et al 2015. We do not claim novelty with respect to the captioning model. Instead, our submission presents novel insights into the image captioning task which we are confident that it should be of interest to the community. Also as our submission involves work on understanding the representational contributions, we consider our work highly relevant to the conference on learning representations. Our main contributions are:\n\n1) We show that the image-conditioned language model implicitly learns and exploits a joint image representation and language semantic space instead of actually understanding images (sections 4.2, 4.3, 4.4). \n\n2) Our experiments with factorized and compressed image embeddings (section 4.1) reveals that the models do not benefit from the full representational space. We observe that the performance of the model trained with a 2048 dimensional representation is nearly identical to the performance of the model trained with a compressed 80-dimensional representation virtually resulting in ‘no information loss’. \n\n3) The experiments with pseudorandom representations (section 3.2) reveal that the end-to-end models learn to separate structure from noisy representations in the framework and exploit it to produce near ideal performance, i.e., the performance with structured representations versus the performance with noisy representations is similar. \n\n\nThe reviewers also raised concern regarding the absence of SPICE as a metric for evaluation. We focus on CIDEr because: a) the metrics in the official evaluation script from MSCOCO contains support for only CIDEr, Meteor, BLEU and ROUGE; b) CIDEr is a metric that was officially developed for the task of image captioning, and is supposed to be the official metric for MSCOCO; c) papers by Liu et al 2017, Kilickaya et al. 2017 and Vedantam et al, 2015 (with the human correlation experiments over Flickr8k dataset) still state the importance of CIDEr as a metric for image captioning. However, we will provide the results on SPICE in the revised version. We also note that a similar trend is observed with SPICE. \n\n* Liu et al. (ICCV 2017) Improved Image Captioning via Policy Gradient Optimization of SPIDEr\n* Kilickaya et al. (EACL 2017) Re-evaluating Automatic Metrics for Image Captioning\n* Vedantam et al. (CVPR 2015) CIDEr: Consensus-based Image Description Evaluation\n\n", "In this report, the findings of this paper submitted to the ICLR 2018 Conference were attempted to be replicated. In the process of replication, two major components were identified. The first breakdown included building the baseline model. 
The second subsection contained the core of the research, which was to answer the key questions and address which image transformations affected the accuracy of neural image captioning systems. Following the steps outlined in the paper as closely as possible, we were able to build a very similar baseline model, and perform three of the five image transformations that were specified. \nThe base line model used in the paper is a combination of the approaches of Karpathy [1] and Vinyals [2]. We were able to closely replicate that model by breaking down it into 3 subsets, a combination of an image model and a language model with a CNN used as an encoder of the images, and an LSTM for the language model as mentioned in the paper. \nFor the image transformations, we were able to successfully reproduce three out of the five: penultimate layer extraction, class prediction vector, and object-class word embeddings. For the penultimate layer extraction, we implemented the pretrained VGG19 and ResNet152 models. The VGG19 uses very small convolutional filters and uses very deep weight layers of up to 19. The ResNet152 model, as implemented by He et al.[9], uses 8 times deeper nets than VGG19. Both the models were implemented via the Keras distribution with a TensorFlow backend. \nThe class prediction vector transformation involved investigating more complex image representations, where the vector elements are now estimated posterior probabilities of the possible object categories. To obtain these posterior distribution vectors, the pre-trained network ResNet152 was again used to retrieve a 1000 dimensional posterior vector. \nThe last transformation we were able to replicate was the object-class word embeddings. This procedure is carried out over the entire 1000 dimensional output of the Softmax layer of pre-trained model ResNet152 where all the procured word2vec representations are finally averaged. This averaged vector acts as the image representation for the image model. \nThe evaluation metric used for the score calculation nltk based corpus BLEU introduced by Papineni et al. [12]. Using a beam size of 1, as done by the authors, a steady rise was observed in the corpus BLEU score for all three representations. Penultimate layer and softmax implementations outperformed the word2vec image representation which had BLEU scores ranging between 0.7540 and 0.4646 from BLEU-2 to BLEU4. For both penultimate and softmax image representations, ResNet152 performed better than VGG19 with BLEU scores ranging from 0.5598 to 0.9216 for softmax and 0.5889 to 0.8937 for penultimate with BLEU varying from 4 to 1. It was only marginally better than VGG19’s BLEU 4-1 scores ranging between 0.5346 and 0.9158 for softmax and 0.5962 and 0.8524 for penultimate. \nOne caveat of this report was that it was not feasible to train the model on the MSCOCO dataset as the paper. This was due to computational restrictions, as training a model on the Flickr8K dataset, which is much smaller than the MSCOCO dataset, took a K80 equipped server approximately 2 days for a small batch size. Due to the inability to use the MSCOCO, we experienced two drawbacks during the replication; The first included hindering our ability to implement the 4th and 5th image transformations, and the second was fact that we were not able to reproduce an exact copy of the works presented by the authors. Although we used a different dataset, we still noticed similar trends in the ones obtained by the tests carried out in the MSCOCO dataset. 
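For illustration, a minimal sketch of the penultimate-layer feature extraction and the corpus-BLEU scoring we describe above, assuming standard Keras pretrained weights, the fc2 layer as the penultimate VGG19 representation, and NLTK's corpus BLEU; the toy captions and paths are placeholders, and the preprocessing in our actual code differs in detail.

```python
import numpy as np
from keras.applications.vgg19 import VGG19, preprocess_input
from keras.models import Model
from keras.preprocessing import image
from nltk.translate.bleu_score import corpus_bleu

# Penultimate (fc2) features from a pre-trained VGG19, used as the image input
# to the captioning LSTM.
base = VGG19(weights="imagenet")
feature_extractor = Model(inputs=base.input,
                          outputs=base.get_layer("fc2").output)

def extract_features(img_path):
    # img_path is a placeholder; pass a real image file to use this.
    img = image.load_img(img_path, target_size=(224, 224))
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
    return feature_extractor.predict(x)  # shape (1, 4096)

# Corpus BLEU over generated captions (greedy decoding, beam size 1).
references = [[["a", "dog", "plays", "with", "a", "frisbee"]]]  # per-image reference sets
hypotheses = [["a", "dog", "is", "playing", "with", "a", "frisbee"]]
print(corpus_bleu(references, hypotheses, weights=(0.25, 0.25, 0.25, 0.25)))  # BLEU-4
```
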
For example, both our tests and the original authors’ tests had the ResNet152 pre-trained network slightly outperforming the VGG19 network across the different image transformations.\n" ]
[ -1, -1, -1, 4, 4, 4, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, -1, 5, 4, 5, -1, -1, -1, -1, -1, -1, -1 ]
[ "SyZwTXVrz", "SyZwTXVrz", "iclr_2018_HJNGGmZ0Z", "iclr_2018_HJNGGmZ0Z", "iclr_2018_HJNGGmZ0Z", "iclr_2018_HJNGGmZ0Z", "iclr_2018_HJNGGmZ0Z", "BybWlKXgz", "BybWlKXgz", "r1fNl7Cgz", "By5_q5y-z", "iclr_2018_HJNGGmZ0Z", "iclr_2018_HJNGGmZ0Z" ]
iclr_2018_rkWN3g-AZ
XGAN: Unsupervised Image-to-Image Translation for many-to-many Mappings
Style transfer usually refers to the task of applying color and texture information from a specific style image to a given content image while preserving the structure of the latter. Here we tackle the more generic problem of semantic style transfer: given two unpaired collections of images, we aim to learn a mapping between the corpus-level style of each collection, while preserving semantic content shared across the two domains. We introduce XGAN ("Cross-GAN"), a dual adversarial autoencoder, which captures a shared representation of the common domain semantic content in an unsupervised way, while jointly learning the domain-to-domain image translations in both directions. We exploit ideas from the domain adaptation literature and define a semantic consistency loss which encourages the model to preserve semantics in the learned embedding space. We report promising qualitative results for the task of face-to-cartoon translation. The cartoon dataset we collected for this purpose will also be released as a new benchmark for semantic style transfer.
rejected-papers
This paper was reviewed by 3 expert reviewers. All three recommend rejection citing significant concerns (e.g. missing baselines).
train
[ "HJFBUEEBM", "Skj8GNn4z", "Sk2PtWDVz", "rybCT6Olf", "BJklMMFez", "HJSBtHZZM", "r1Tur03Qz", "SkLHB03mG", "rJWFEC27f", "H102m03mG", "By_ZiUJzf", "SygfwiYgM", "HkcKlK_1f", "rJ8xTtx1f" ]
[ "public", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "public", "public", "author", "public" ]
[ "The semantic consistency loss is ||e2(d2(e1(x1))), e1(x1)||. There are two possible implementations:\n1. Treat e1(x1) as the label/target and do not back-propagate through it.\n2. Use the gradient w.r.t. the e1(x1) to update e1. So the e1 is updated with two gradient flows.\n\nI'm wondering which one the authors use.", "The rebuttal is brief and does not address my major concerns. To improve the paper, the authors may consider to include more baselines and some ablation studies of the proposed method. Additionally, the clarify and presentation of the paper could be improved too. ", "I thank the authors for their response to my review. However, I stand by my initial assessment. I think this paper suggests an interesting model but it needs much more extensive experimentation to prove its utility, especially since rather similar alternatives already exist in the literature. I'm glad to see the authors think UNIT would be a good baseline to compare against, and I would encourage them to also try CycleGAN/DiscoGAN/DualGAN. I expect those methods would do well despite the inductive bias toward pxiel-level correspondence. Even if they perform poorly, that should be tested and not simply asserted.", "\n\n- Lack of novelty\n\nThe paper has very limited novelty since the proposed method is a straightforward combination of two prior works on the same topic (unpair/unsupervised image translation or cross-domain image generation) where the two prior works are the DTN work [a] and the UNIT [b] work. To be more precise, the proposed method utilizes the weight-sharing design for enforcing the shared latent space constraint proposed in the UNIT work [b] and the feature consistency loss term for ensuring common embedding in the DTN work [a] for solving the ill-posed unpaired/unsupervised image-to-image translation problem. Since the ideas are already published in the prior work, the paper does not contribute additional knowledge to the problem. \n\nIn addition, the combination is done in a careless manner. First of all, the paper proposes jointly minimizing the common embedding loss [a] and the domain adversarial loss [b]. However, minimizing the common embedding loss [a] also results in minimizing the domain adversarial loss [c]. This can be easily seen as when the embeddings are the same, no discriminators can tell them apart. This suggests that the paper fails to see the connection and blindly put the two things together. Moreover, given the generators, minimizing the common embedding loss also results in minimizing the cycle-consistency loss [d]. As the UNIT work [b] utilize both the weight-sharing constraint and cycle-consistency loss, the proposed method becomes a close variant to the UNIT work [b].\n\n- Poor experimental verification\n\nThe paper only shows visualization results on translating frontal face images to cartoon images in the resolution of 64x64. This is apparently short as compared to the experimental validations done in several prior works [a,b,d]. In the CycleGAN work [d], the results are shown on several translation tasks (picture to painting, horse to zebra, map to image, and different scenarios) in a resolution of 256x256. In the UNIT work [b], the results are shown in various street scene (sunny to rainy, day to night, winter to summer, synthetic to real) and animal portraits (cat species and dog breeds) where the resolution is up to 640x480. In the DTN [a] and UNIT [b] work, promising domain adaptation results (SVHN to MNIST) are reported. 
Due to the shortage of results, the credibility of the paper is damaged. \n\n- Lack of clarity in presentation\n\nThe paper tends to introduce new keywords for existing ones. For example, \"semantic style transfer\" is exactly unpaired/unsupervised image-to-image translation or cross-domain image generation. It is not clear why the paper needs to introduce the new keyword. Also, the Coupled GAN work [e] is the first work that utilizes both weight-sharing (the shared latent space assumption) and GANs for unpaired/unsupervised image-to-image translation. It is unfortunate that the paper fails to refer to this closely related prior work.\n\n[a] Yaniv Taigman, Adam Polyak, Lior Wolf, \"Unsupervised Cross-Domain Image Generation\", ICLR 2017\n\n[b] Ming-Yu Liu, Thomas Breuel, Jan Kautz, \"Unsupervised Image-to-Image Translation Networks\", NIPS 2017\n\n[c] Yaroslav Ganin et al., \"Domain-Adversarial Training of Neural Networks\", JMLR 2016\n\n[d] Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A. Efros, \"Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks\", ICCV 2017\n\n[e] Ming-Yu Liu, Oncel Tuzel, \"Coupled Generative Adversarial Networks\", NIPS 2016", "This paper proposed an X-shaped GAN for the so-called semantic style transfer task, in which the goal is to transfer the style of an image from one domain to another without altering the semantic content of the image. Here, a domain is collectively defined by the images of the same style, e.g., cartoon faces. \n\nThe cost function used to train the network consists of five terms, of which four are pretty standard: a reconstruction loss, two regular GAN-type losses, and an imitation loss. The fifth term, called the semantic consistency loss, is one of the main contributions of this paper. This loss ensures that the translated images should be encoded into about the same location as the embedding of the original image, albeit by different encoders. \n\nStrengths:\n1. The new CartoonSet dataset is carefully designed and compiled. It could facilitate future research on style transfer. \n2. The paper is very well written. I enjoyed reading the paper. The text is concise and also clear enough, and the figures are illustrative.\n3. The semantic consistency loss is reasonable, but I do not think this is significantly novel. \n\nWeaknesses:\n1. Although “the key aim of XGAN is to learn a joint meaningful and semantically consistent embedding”, the experiments are actually devoted to qualitative style transfer only. A possible experiment design for evaluating “the key aim of XGAN” may be facial attribute prediction. The CartoonSet contains attribute labels, but the authors may need to collect such labels for the VGG-face set.\n2. Only one baseline is considered in the style transfer experiments. Both CycleGAN and UNIT are very competitive methods and would better be included in the comparison. \n3. The “many-to-many” is ambiguous. Style transfer in general is not a one-to-one or many-to-one mapping. It is not necessary to stress the many-to-many property of the proposed new task, i.e., semantic style transfer. \n\nThe CartoonSet dataset and the new task, which is called semantic style transfer between two domains, are nice contributions of this paper. In terms of technical contributions, it is not significant to have the X-shaped GAN or the straightforward semantic consistency loss. The experiments are somehow mismatched with the claimed aim of the paper. 
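For concreteness, the semantic consistency term can be read as a distance between the two embeddings, and the two implementation options raised in the public comment at the top of this thread differ only in whether gradients flow through the target term e1(x1). A minimal PyTorch-style sketch of that distinction follows; the encoder/decoder modules are toy stand-ins, not the paper's architecture.

```python
import torch
import torch.nn.functional as F

def semantic_consistency_loss(e1, d2, e2, x1, stop_gradient=True):
    """L = || e2(d2(e1(x1))) - e1(x1) ||^2, reading ||.,.|| as a distance
    between the two embeddings and optionally treating e1(x1) as a fixed target."""
    z1 = e1(x1)            # embedding of the source-domain image
    z_back = e2(d2(z1))    # re-embedding of the translated image
    target = z1.detach() if stop_gradient else z1
    # Option 1 (stop_gradient=True): e1 receives gradients only through z_back.
    # Option 2 (stop_gradient=False): e1 is updated through both gradient flows.
    return F.mse_loss(z_back, target)

# Toy stand-ins for the two encoders and one decoder.
e1 = torch.nn.Linear(8, 4)
e2 = torch.nn.Linear(8, 4)
d2 = torch.nn.Linear(4, 8)
loss = semantic_consistency_loss(e1, d2, e2, torch.randn(3, 8))
loss.backward()
```
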
", "This paper proposes a new GAN-based model for unpaired image-to-image translation. The model is very similar to DTN [Taigman et al. 2016] except with trained encoders and a domain confusion loss to encourage the encoders to map source and target domains to a shared embedding. Additionally, an optional teacher network is introduced, but this feels rather tangential and problem-specific. The paper is clearly presented and I enjoyed the aesthetics of the figures. The method appears technically sound, albeit a bit complicated. The new cartoon dataset is also a nice contribution.\n\nMy main criticism of this paper is the experiments. At the end of reading, I don’t know clearly which aspects of the method are important, why they are important, and how the proposed system compares against past work. First, the baselines are insufficient. Only DTNs are compared against, yet there are many other recent methods for unpaired image-to-image translation, notably, cycle-consistency-based methods and UNIT. These methods should also be compared against, as there is little evidence that DTNs are actually SOTA on cartoons (rather, the cartoon dataset was not public so other papers did not compare on that dataset). Second, although I appreciated the ablation experiments, they are not comprehensive, as discussed more below. Third, there is no quantitative evaluation. The paper states that quantifying performance on style transfer is an unsolved problem, but this is no excuse for not at least trying. Indeed, there are many proposed metrics in the literature for quantifying style transfer / image generation, including the Inception score [Salimans et al. 2016], conditional variants like the FCN-score [Isola et al. 2017], and human judgments. These metrics could all be adapted to the present task (with appropriate modifications, e.g., switching from Inception to a face attribute classifier). Additionally, as the paper mentions at the end, the method could be applied to domain adaptation, where plenty of standard metrics and benchmarks exist.\n\nUltimately, the qualitative results in the paper are not convincing to me. It’s hard to see the advantages/disadvantages in each comparison. For example in Figure 8, it’s hard to even see any overall change in the outputs due to ablating the semantic consistency loss and the teacher loss (especially since I’m comparing these to Figure 6, which is referred to “Selected results” and therefore might not be a fair comparison). Perhaps the effect of the ablations would be clearer if the figures showed a single input followed by a series of outputs for that same input, each with a different term ablated. A careful reader might be able to examine the images for a long time and find some insights, but it would be much better if the paper distilled these insights into a more concise and convincing form. I feel sort of like I’m looking at raw data, and it still needs to be analyzed.\n\nI also think the ablations are not sufficiently comprehensive. In particular, there is no ablation of the domain adversarial loss. This seems like an important one to test since it’s one of the main differences from DTNs. I was a bit confused by the “finetuned DTN” in Section 7.2. Is this an ablation experiment where the domain adversarial loss and teacher loss are removed? If so, referring to it as so may be clearer than calling it a finetuned DTN. 
Interestingly, the results of this method look pretty decent, suggesting that the domain adversarial loss might not be having a big effect, in which case XGAN looks very close indeed to DTNs. It would be great here to actually quantify the mentioned sensitivity to hyperparameters.\n\nIn terms of presentation, at several points, the paper argues that previous, pixel-domain methods are more limited than the proposed feature-space method, but little evidence is given to support these claims. For example, “we argue that such a pixel-level constraint is not sufficient in our case” in the intro, and “our proposed semantic consistency loss acts at the feature level, allowing for more flexible transformations” in related work. I would like to see more motivation for these assertions, and ultimately, the limitations should be concretely demonstrated in experiments. In models like CycleGAN the pixel-level constraint is between inputs and reconstructed inputs, and I don’t see why this necessarily is overly restrictive on the kinds of transformations in the outputs. The phrasing in the current paper seems to suggest that the pixel-level constraints are between input and output, which, I agree, would be directly restrictive. The reasoning here should be clarified. Better yet would be to provide empirical evidence that pixel-domain methods are not successful (e.g., by comparing against CycleGAN).\n\nThe usage of the term “semantic” is also somewhat confusing. In what sense is the latent space semantic? The paper should clarify exactly what this term refers to, perhaps simply defining it to mean a “low-dimensional shared embedding.”\n\nI think the role of the GAN objective is somewhat underplayed. It is quite interesting that the current model achieves decent results even without the GAN. However, there is no experiment keeping the GAN but ablating other parts of the method. Other papers have shown that a GAN objective plus, e.g., cycle-consistency, can do quite well on this kind of problem. It could be that different terms in the current objective are somewhat redundant, so that you can choose any two or three, let’s say, and get good results. To check this, it would be great to see more comprehensive ablation experiments. \n\n\nMinor comments:\n1. Page 1: I wouldn’t call colorization one-to-one. Even though there is a single ground truth, I would say colorization is one-to-many in the sense that many outputs may be equally probable according to a Bayes optimal observer.\n2. Fig 1: It should be clarified that the left example is not a result of the method. At a glance this looks like an exciting new result and I think that could mislead casual readers.\n3. Fig 1 caption: “an other” —> “another”\n4. Page 2: “Recent work … fail for more general transformations” — DiscoGAN (Kim et al. 2017) showed some success beyond pixel-aligned transformations\n5. Page 5: “particular,the” —> “particular, the”; quotes around “short beard” are backwards\n6. Page 6: “founnd” —> “found”\n7. Page 11: what is \\mathcal{L}_r? I don’t see it defined above.", "The dataset is in the process of being made public. We will update the submission as soon as it is available.", "We thank the reviewer for their detailed comments and helpful suggestions. As was raised by other reviewers, we take note of the lack of experimental validation. 
In the following we address some more specific issues addressed by the reviewer.\n\nAblation experiments\n----------------------------\nIndeed, the finetuned DTN would be equivalent to XGAN with a fully shared encoder, only one decoder (so no reconstruction loss on the source domain), and no domain-adversarial nor teacher loss.\n On the long term, the combination of GAN + semantic consistency loss has a similar effect to the domain-adversarial loss; however including the domain-adversarial loss should lead faster to a regime where the embeddings for both domains lie closer. \n\nComparison to Baselines\n----------------------------------\nOur main reasons for not including CycleGAN as a baseline were (i) the original paper claims in the conclusion that experiments on translation between significantly different domains were unsuccessful and (ii) CycleGAN uses fully convolutional networks (no latent representation bottleneck) hence we hypothesize it strongly retains local pixel information, even though there is no explicit pixel-level constraint between input and output.\nWe agree that UNIT would be a well fitted baseline to compare to on the face-to-cartoon task.\n", "We thank the reviewers for their comments, in particular for their suggestion of quantitative evaluation.\n\n(Response to Weaknesses 2/) We originally focused on the DTN work as it was the closest in terms of motivation and applications. As for more recent work, the CycleGAN paper mentions that the method often fails on task where input and output are significantly different in structure (e.g., cat to dog) so it might be a weak baseline for face-to-cartoon. However, as was also mentioned by the other reviewers, we agree that UNIT could act as another strong baseline for the task as it seems to allow feature-level transfer between the two domains.", "We thank the reviewer for their detailed comment. We take note that the experiments section and comparison to baselines are lacking. In the following, we propose some clarifications to the specific issues raised in the review:\n\nSemantic style transfer\n-------------------------------\nWe introduce the semantic style transfer keyword as a distinction to pixel-level translation tasks. While the notion of \"unsupervised image-to-image translation\" should be general enough, in recent work, this task often refers to pixel-level translation, where the input and output images have very similar structure (e.g., as you mentioned: horses to zebras, sunny to rain). Instead, we focused on translation tasks allowing for significant structure changes while retaining semantic content, such as the face-to-cartoon task introduced in DTN. \n\nLoss redundancy\n-----------------------\nThe semantic consistency [a] and domain-adversarial loss [b] are indeed redundant (in the sense that low value of [a] implies low value of [b]) when the model perfectly maps inputs to the correct target domain; however this is not necessarily the case in practice, e.g. at the beginning of training. \nMore specifically, the domain-adversarial [b] loss makes embeddings from D1 and D2 lie in close subspaces, while the semantic consistency loss makes embedding e1(x) close to e2 o d2 o e1(x) (and vice-versa) for a specific input x in D1. Hence, [a] is a stronger constraint than [b]. 
However, in the case where the decoder d2 does not properly map to the target domain D2 (e.g., at the beginning of training, the generated faces are not realistic until the GAN kicks in), then [a] does not bring any information about embeddings from real D2 samples, contrary to [b].\n\nComparison to baselines\n---------------------------------\nThe main difference from the UNIT paper [b] is that we impose stronger constraints on the learned embeddings, i.e. we make use of (i) the domain-adversarial loss [c] and (ii) the semantic consistency loss (rather than pixel-level cycle consistency) to constrain the learned embedding explicitly, while UNIT only relies on weight sharing in the encoder. We will include the COGAN reference in future revisions.\n\n\nExperimental validation\n-------------------------------\nWe did not thoroughly investigate previous pixel-level translation tasks as they were not our main focus, but we agree that additional experiments would definitely support the proposed model. \nWe also experimented on SVHN to MNIST to compare to the DTN baseline; however, we did not observe a significant improvement in the classification accuracy compared to these baselines. We omitted these results as they did not seem significant enough for a task like MNIST classification.\n", "Could the code and datasets be made public?", "I disagree that extending to conditioning on noise is \"trivial\", as this is a well-known alignment problem in unsupervised domain mapping. Please see \nhttps://arxiv.org/pdf/1709.00074.pdf\n\n", "Thank you for the question. To clarify, the *tasks* we consider are many-to-many in the sense that there is no pre-defined one-to-one mapping between the domains, e.g. one face maps to many possible cartoons and vice-versa. However, the face/cartoon-to-latent-space mapping is many-to-one. We indeed only report results from deterministic models, which can be trivially extended to be conditioned on a noise vector as well. An example of how this can be done is outlined in the CVPR 2017 PixelDA paper: Unsupervised Pixel-level Domain Adaptation with GANs by Bousmalis et al. In this and other papers, they found that introducing such noise does not affect the quality of the generated samples.", "It's unclear how this model is many-to-many. The mappings are deterministic as far as I can tell, no?" ]
[ -1, -1, -1, 3, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, -1, 5, 4, 3, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_rkWN3g-AZ", "BJklMMFez", "SkLHB03mG", "iclr_2018_rkWN3g-AZ", "iclr_2018_rkWN3g-AZ", "iclr_2018_rkWN3g-AZ", "By_ZiUJzf", "HJSBtHZZM", "BJklMMFez", "rybCT6Olf", "iclr_2018_rkWN3g-AZ", "HkcKlK_1f", "rJ8xTtx1f", "iclr_2018_rkWN3g-AZ" ]
iclr_2018_BJDH5M-AW
Synthesizing Robust Adversarial Examples
Neural network-based classifiers parallel or exceed human-level accuracy on many common tasks and are used in practical systems. Yet, neural networks are susceptible to adversarial examples, carefully perturbed inputs that cause networks to misbehave in arbitrarily chosen ways. When generated with standard methods, these examples do not consistently fool a classifier in the physical world due to a combination of viewpoint shifts, camera noise, and other natural transformations. Adversarial examples generated using standard techniques require complete control over direct input to the classifier, which is impossible in many real-world systems. We introduce the first method for constructing real-world 3D objects that consistently fool a neural network across a wide distribution of angles and viewpoints. We present a general-purpose algorithm for generating adversarial examples that are robust across any chosen distribution of transformations. We demonstrate its application in two dimensions, producing adversarial images that are robust to noise, distortion, and affine transformation. Finally, we apply the algorithm to produce arbitrary physical 3D-printed adversarial objects, demonstrating that our approach works end-to-end in the real world. Our results show that adversarial examples are a practical concern for real-world systems.
rejected-papers
This paper studies the problem of synthesizing adversarial examples that will succeed at fooling a classification system under unknown viewpoint, lighting, and other conditions. For that purpose, the authors propose a data-augmentation technique (called "EOT") that makes adversarial examples robust against a predetermined family of transformations. Reviewers were mixed in their assessment of this work, on the one hand highlighting the potential practical applications, but on the other hand warning about weak comparisons with existing literature, as well as a lack of discussion about how to improve the robustness of the deep neural net against this form of attack. The AC thus believes this paper will greatly benefit from a further round of iteration/review, and therefore recommends rejection at this time.
train
[ "HyIrp18rz", "H1FaG8FgM", "SyRDNVqxz", "H1nifI9eG", "SJnDHwnQf", "SkwQmwnXM", "Skul7w3mz", "H1B9fD3Qf", "BJ2bfP3QM", "HymvZv37G", "r1bnlcZMf", "Byp5w7OAW", "HJb4TgKAb", "BkwU6xF0b", "rJkwuN9RZ", "Hk3iatqCW", "Bk-AZjY0W", "rk4nKr_0b", "B1p3Q4_Rb", "B1e1yJLCb" ]
[ "public", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "public", "author", "author", "author", "author", "public", "public", "public", "public", "public" ]
[ "Can you comment on what aspects can be explored further?", "Summary: This work proposes a way to create 3D objects to fool the classification of their pictures from different view points by a neural network.\nRather than optimizing the log-likelihood of a single example, the optimization if performed over a the expectation of a set of transformations of sample images. Using an inception v3 net, they create adversarial attacks on a subset of the imagenet validation set transformed by translations, lightening conditions, rotations, and scalings among others, and observe a drop of the classifier accuracy performance from 70% to less than 1%. They also create two 3D printed objects which most pictures taken from random viewpoints are fooling the network in its class prediction.\n \n\nMain comments:\n- The idea of building 3D adversarial objects is novel so the study is interesting. However, the paper is incomplete, with a very low number of references, only 2 conference papers if we assume the list is up to date. \nSee for instance Cisse et al. Houdini: fooling Deep Structured Prediction Models, NIPS 2017 for a recent list of related work in this research area.\n- The presentation of the results is not very clear. See specific comments below.\n- It would be nice to include insights to improve neural nets to become less sensitive to these attacks.\n\n\nMinor comments:\nFig1 : a bug with color seems to have been fixed\nModel section: be consistent with the notations. Bold everywhere or nowhere\nResults: The tables are difficult to read and should be clarified:\nWhat does the l2 metric stands for ? \nHow about min, max ?\nAccuracy -> classification accuracy\nModels -> 3D models\nDescribe each metric (Adversarial, Miss-classified, Correct)\n", "The authors present a method to enable robust generation of adversarial visual\ninputs for image classification.\n\nThey develop on the theme that 'real-world' transformations typically provide a\ncountermeasure against adversarial attacks in the visual domain, to show that\ncontextualising the adversarial exemplar generation by those very\ntransformations can still enable effective adversarial example generation.\n\nThey adapt an existing method for deriving adversarial examples to act under a\nprojection space (effectively a latent-variable model) which is defined through\na transformations distribution.\n\nThey demonstrate the effectiveness of their approach in the 2D and 3D\n(simulated and real) domains.\n\nThe paper is clear to follow and the objective employed appears to be sound. I\nlike the idea of using 3D generation, and particularly, 3D printing, as a means\nof generating adversarial examples -- there is definite novelty in that\nparticular exploration for adversarial examples.\n\nI did however have some concerns:\n\n1. What precisely is the distribution of transformations used for each\n experiment? Is it a PCFG? Are the different components quantised such that\n they are discrete rvs, or are there still continuous rvs? (For example, is\n lighting discretised to particular locations or taken to be (say) a 3D\n Gaussian?) And on a related note, how were the number of sampled\n transformations chosen?\n\n Knowing the distribution (and the extent of it's support) can help situate\n the effectiveness of the number of samples taken to derive the adversarial\n input.\n\n2. 
While choosing the distance metric in transformed space, LAB is used, but\n for the experimental results, l_2 is measured in RGB space -- showing the\n RGB distance is perhaps not all that useful given it's not actually being\n used in the objective. I would perhaps suggest showing LAB, maybe in\n addition to RGB if required.\n\n3. Quantitative analysis: I would suggest reporting confidence intervals;\n perhaps just the 1st standard deviation over the accuracies for the true and\n 'adversarial' labels -- the min and max don't help too much in understanding\n what effect the monte-carlo approximation of the objective has on things.\n\n Moreover, the min and max are only reported for the 2D and rendered 3D\n experiments -- it's missing for the 3D printing experiment.\n\n4. Experiment power: While the experimental setup seems well thought out and\n structured, the sample size (i.e, the number of entities considered) seems a\n bit too small to draw any real conclusions from. There are 5 exemplar\n objects for the 3D rendering experiment and only 2 for the 3D printing one.\n\n While I understand that 3D printing is perhaps not all that scalable to be\n able to rattle off many models, the 3D rendering experiment surely can be\n extended to include more models? Were the turtle and baseball models chosen\n randomly, or chosen for some particular reason? Similar questions for the 5\n models in the 3D rendering experiment.\n\n5. 3D printing experiment transformations: While the 2D and 3D rendering\n experiments explicitly state that the sampled transformations were random,\n the 3D printing one says \"over a variety of viewpoints\". Were these\n viewpoints chosen randomly?\n\nMost of these concerns are potentially quirks in the exposition rather than any\nissues with the experiments conducted themselves. For now, I think the\nsubmission is good for a weak accept –- if the authors address my concerns, and/or\ncorrect my potential misunderstanding of the issues, I'd be happy to upgrade my\nreview to an accept.", "The paper proposes a method to synthesize adversarial examples that remain robust to different 2D and 3D perturbations. The paper shows this is effective by transferring the examples to 3D objects that are color 3D-printed and show some nice results.\n\nThe experimental results and video showing that the perturbation is effective for different camera angles, lighting conditions and background is quite impressive. This work convincingly shows that adversarial examples are a real-world problem for production deep-learning systems rather than something that is only academically interesting.\n\nHowever, the authors claim that standard techniques require complete control and careful setups (e.g. in the camera case) is quite misleading, especially with regards to the work by Kurakin et. al. This paper also seems to have some problems of its own (for example the turtle is at relatively the same distance from the camera in all the examples, I expect the perturbation wouldn't work well if it was far enough away that the camera could not resolve the HD texture of the turtle).\n\nOne interesting point this work raises is whether the algorithm is essentially learning universal perturbations (Moosavi-Dezfooli et. al). If that's the case then complicated transformation sampling and 3D mapping setup would be unnecessary. 
This may already be the case since the training set already consists of multiple lighting, rotation and camera type transformations so I would expect universal perturbations to already produce similar results in the real-world.\n\nMinor comments:\nSection 1.1: \"a affine\" -> \"an affine\"\nTypo in section 3.4: \"of a of a\"\nIt's interesting in figure 9 that the crossword puzzle appears in the image of the lighthouse.\n\nMoosavi-Dezfooli, S. M., Fawzi, A., Fawzi, O., & Frossard, P. Universal adversarial perturbations. CVPR 2017.", "We thank the anonymous reviewers for helping us improve the paper. In response to their feedback, we have made the following revisions to our paper (in addition to the fixing of a few spelling mistakes/typos):\n\n* Related work: We have updated our description of Kurakin et al. and added a comparison with Universal Adversarial Perturbations (Moosavi-Dezfooli et al.) and adversarial eyeglasses (Sharif et al.)\n\n* Evaluation: We have included an additional 5 models in our Robust 3D Adversarial Examples evaluation. We have additionally further explained our evaluation metrics, improved and defined previously confusing terminology, and added standard deviations to further elucidate the distributions of adversariality and classification accuracy.\n", "Thank you for your review. In the latest revision of our paper we greatly expand on the related work section, both discussing in more detail our current list, and introducing other related works from the field which we explain and differentiate from our own. We hope that this gives the reader a more complete view of the field, and further indicates the novelty of our work.\n\nThe focus of this work was to demonstrate that it is possible to construct transformation-tolerant adversarial examples, even in the physical world; defenses against adversarial examples are beyond the scope of this paper. We hesitate to present intuitions for defenses without rigorous experimentation, because as researchers like Carlini have shown, developing defenses is challenging, and many proposed ideas for defenses are easily defeated [1].\n\nWe have addressed all of the minor comments including:\n* Fixing the color bug\n* Removing the selected bolding from the model section\n* Elaborated and defined the l2 metric, and removed min/max in favor of mean/stdev\n* Models -> 3D models and accuracy -> Classification accuracy\n* Added a paragraph defining the terms “adversarial,” “misclassified,” and “correct” as we use them\n\n[1]: https://arxiv.org/abs/1705.07263\n", "Thank you for your review. We have made several clarifications in the exposition that we believe address your concerns and improve the paper. In particular:\n\n1. The parameters of the distribution used in generating examples was given in the Appendix, but the method by which they are sampled was not made clear; we now explicitly state in the evaluation section that the parameters are sampled as independent uniformly distributed continuous random variables (except for Gaussian noise, which is sampled as a Gaussian continuous RV). There was no fixed number of transformations chosen during the synthesis of the adversarial example: the transformations are independently sampled at each gradient descent step. We have updated the text in the approach section to clarify this.\n\n2. Yes, we agree: we minimized LAB, not RGB, and Euclidean distances make more sense in a perceptually uniform color space like LAB. We have switched to reporting LAB distances.\n\n3. 
While we gave the distribution of adversariality across examples in a graph the appendix, we did not explicitly state the standard deviation/confidence intervals. This has been resolved in the latest version. We have also removed the min and max metrics from the evaluation section, and have added the standard deviation over the accuracies for the true and ‘adversarial’ labels as suggested. We report mean/standard deviation for 2D and rendered 3D experiments and not the 3D printing experiment because we report the statistics for each 3D objects separately.\n\n4. In the case of the 3D printing experiment, we were limited by printer capability and shipping feasibility for this revision, but would be happy to include a few more in the camera ready version. We also included 5 more models in the 3D rendering experiment, making a total of 200 adversarial examples (10 models, 20 randomly chosen targets for each model). The turtle and baseball models were chosen because they could be easily adapted for the 3D printing process. The adversarial targets for the turtle and baseball (as well as all our other experiments) were randomly chosen across all the eligible ImageNet classes. Models for the 3D simulation experiment were chosen based on the first 10 realistic, textured 3D models we could find in OBJ format.\n\n5. We have added a footnote to address this concern; although the viewpoints were not selected or cherry-picked in any capacity, we opt to not call them “random” because in contrast to the 2D and 3D virtual examples, the viewpoints were not (and realistically could not have been) uniformly sampled from some concrete distribution of viewpoints; instead the objects were repeatedly moved and rotated on a table with humans walking around them and taking pictures from “a variety of viewpoints.”\n", "Thank you for the review and detailed comments. We are glad you enjoyed the paper. We have made revisions to the related work section, including a clearer description of Kurakin et al. and a more thorough discussion of other works (including the suggested “Universal Perturbations” paper by Moosavi-Dezfooli et al). We have additionally fixed all the minor issues you pointed out.", "Our updated paper includes a revised abstract and related work that takes into account your feedback. We hope this clarifies our explanation of the work of Kurakin et al. Please let us know if you have any other feedback on the related work or the ideas presented in the rest of the paper, and we will take it into account before the next deadline.", "Thank you for taking the time to reproduce the results in our paper! We’re glad that you were able to replicate our results.\n\nWe assume that you did not reproduce our 3D results due to a lack of an openly available differentiable renderer (it is somewhat of a pain to implement). We’ll do our best to have ours open-sourced by the time of the conference.\n\nIt’s interesting to see that there’s some degree of transferability between these EOT adversarial examples as well; we hadn’t explored this much in our work. If you do explore this further, or try new things like optimizing over an ensemble, please let us know how it goes!\n", "The current report has been produced as a part of ICLR reproducibility challenge\n\nAuthor: Prabhant Singh, University of Tartu, prabhant.singh@ut.ee\n\n**Abstract:**\nThe paper’s main goal was to provide an algorithm to generate adversarial examples that are robust across any chosen distribution of transformations. 
The authors demonstrated this algorithm in 2 and 3 dimensions in the paper. The authors were successfully able to demonstrate that adversarial examples are a practical concern for real-world systems. During the reproducibility of the paper, we have implemented authors’ algorithm on the 2D scenario and were able to verify authors’ claim. We have also checked for transferability with the image of 3D adversarial example generated in this paper in the real-world environment. This report also checks the robustness of adversarial examples on black box scenario which was not in the selected paper.\n\n\n**Experimental methodology:**\nAfter reproducing the Expectation Over Transformation (EOT) algorithm we have generated adversarial examples on the pre-trained inceptionV3 model trained on ImageNet dataset. The adversarial examples were robust under the predefined distribution. One interesting observation here is that whenever we rotated the image out of the distribution there was confidence reduction in case of prediction and the target class which was predefined while creating the adversarial example was within the top 10 probabilities. The probability of target class was decreased when we rotated it away from the distribution and vice versa. As the paper states, there are no guarantees of adversarial examples being robust outside the chosen distribution but the adversarial example was still able to reduce the confidence of the prediction.\n\n\n**Transferability:**\nThe transferability was checked on four images. First image was generated by EOT and other three were of adversarial Turtle mentioned in the paper [1]. The transferability was tested on six different architectures pre-trained on the ImageNet dataset (Resnet50, InceptionV3, InceptionResnetV2, Xception, VGG16, VGG19). Our adversarial examples were generated using Tensorflow pre-trained Inception model. The transferability was checked with pre-trained keras models[2].\nThe results of the experiments are listed below:\n\nGenerated adversarial image using EOT\nParameters:\nLearning rate: 2e-1\nEpsilon: 8.0/255.0\nTrue class: Tabby cat\nTarget class: Guacamole\n\n1. InceptionV3:\nPrediction: Flatworm, Confidence: 100%\n2. InceptionResnet:\nPrediction: Comicbook, Confidence : 100%\n3. Xception:\nPrediction: Necklace, Confidence : 92.5%\n4. Resnet50:\nPrediction: Tabby cat, Confidence: 35%\n5. VGG 19:\nPrediction: Tabby cat, Confidence: 47.9%\n6. VGG16\nPrediction: Tabby cat, Confidence: 34.8%\n\nImage of 3D adversarial turtle[1] mentioned in the paper\nTrue class: Turtle\n\n1. InceptionV3:\nPrediction: Pencil sharpner, Confidence : 67.7%\n2. InceptionResnet:\nPrediction: Comic book, Confidence : 100%\n3. Xception:\nPrediction: Table lamp, Confidence : 84.8%\n4. Resnet50:\nPrediction: Bucket , Confidence: 20%\n5. VGG 19:\nPrediction: Mask, Confidence: 10.9%\n6. VGG16\nPrediction: Turtle, Confidence: 3.6%\n\nOther images of Adversarial turtle generated similar results.\n\n**Observations:**\n\nBoth images of adversarial turtle and cat were detected incorrectly by inception related architectures with a high confidence.\nBoth images were classified as “Comic book” with 100 percent confidence by InceptionResnetV2.\nThe adversarial examples were able to reduce the confidence by a high margin, about 50-60 percent in case of Tabbycat. 
Only VGG16 was able to classify the turtle correctly but by a very low confidence of 3.6%\nSimilar results were found when we rotated, cropped and zoomed out of the image.[3]\nIn case of adversarial turtle, the photo was taken out of the distribution(Not inside the chosen distribution as mentioned in the paper ie camera distance between 2.5cm -3.0cm) ,still the image was misclassified.\n\n**Conclusion:**\n\nThe author successfully generated robust adversarial examples which are robust under the given distribution in case of targeted misclassification. The adversarial examples were also robust in case of untargeted misclassification under any distribution if classified against Inception related models.The adversarial examples reduced confidence by a wide margin in case of non-inception architectures. The image of 3D adversarial turtle can be considered robust under any distribution as it has been misclassified against all the architectures and only classified correctly by VGG16 but with a very insignificant percentage.\n\n**Sources:**\n[1] The Image of the adversarial turtle was taken at the recent NIPS conference by a number of viewpoints out of the given distribution.\n\n[2] Pre-trained keras models: https://keras.io/applications/\n\n[3] The source code and experiments info can be found in this Github repo: https://github.com/prabhant/synthesizing-robust-adversarial-examples", "Thank you for the feedback. We’ve made the following changes for the revised version:\n\n1. To resolve the misunderstanding caused by the term “controlled,” we’ve removed the word and, instead, directly stated the experimental conditions of Kurakin et al., as described in their Section 3.2 Experimental Setup. In their setup, a photo is taken of the image, then warped so that “each example has known coordinates” using printed QR codes, and cropped “so that they would become squares of the same size as source images.” We also describe the attached video directly as “approximately axis-aligned.” We write our analysis of Kurakin et al. based on the distribution of transformations described in the latest version of the peer-reviewed ICLR 2017 Workshop paper. Please advise if a better description of the setup is available. Note that our method covers any differentiable transformation, and our 2D experiments cover noise and lightening/darkening, as well as rotation, skew, translation, and zoom, the three of which are not covered in Kurakin et al. (our 3D experiments cover even more). We are sorry for the misunderstanding.
\n\n2. We will remove the word \"only\" and mention exactly the setup of Kurakin et al. as described above. We did experiments and they indicated that the Kurakin et al. method fails under a combination of rescaling, rotation, and translation (e.g. as used in the distribution used in our 2D case, see Table 4 in the Appendix); however, because robustness to such transformations was never claimed in Kurakin et al., we decided to not include these findings in our paper. 
\n\n3. Our work states that in general, adversarial examples fail to transfer because of the combination of \"viewpoint shifts, camera noise, and other natural transformations.\" The degree to which each of these transformations contributes was not studied, nor were any related claims made. We will make this more explicit in our revised version.
\n\nWe hope that these edits, in addition to the other differentiations already stated in the paper (untargeted vs targeted adversarial attacks, and 2D vs 3D examples) now appropriately represent the difference between Kurakin et al. and this work. We welcome additional feedback, and we thank you again for helping us improve our writeup.", "EOT produces examples that are robust under the chosen distribution; it does not promise anything about out-of-distribution samples. We demonstrate that our adversarial examples work over varying levels of zoom (in addition to other transformation) in both the 2D and 3D cases: see our Appendix for the exact parameters we chose.\n\nOur research focuses on classifiers. We did not try attacking YOLO or Fast-RCNN in this work. However, given that detectors use a pretty similar architecture (and basically re-use the classifier, like VGG-16), we expect that it wouldn't be very different to attack a detector.\n", "Yes, we've disambiguated it to mean the combination of the natural transformations as you suggest. Thanks again for your feedback, and please let us know if you see anything else we can improve.", "Thanks! We think it would be neat to work on extending this to black-box systems and systems deployed in the real world.", "I agreed with the authors that generating robust adversarial examples to detection algorithm is possible. There is already one paper demonstrates the successful attack on faster-rcnn, \n\nXie C, Wang J, Zhang Z, et al. Adversarial Examples for Semantic Segmentation and Object Detection\n\nmaybe simply combine them can achieve this goal.", "Validating your ideas on a 3D printed model is interesting.\nWhat is the future direction of your work? ", "Thanks, that sounds like the revision fixes the issue.\n\nRegarding point 3, it sounds like you're going to fix it, but to explain why I made this comment: in my original comment, item 3, the sentence I quote doesn't contain a word like \"combination\" to disambiguate whether the reader is meant to parse the list as an OR or as an AND. From your reply it sounds like you're going to disambiguate this as AND.\n\nI hope it's clear that I wasn't saying your paper lacks novelty. I was just saying the Kurakin paper wasn't as limited as described. Overall I like your paper.", "The images including the videos appear to be taken within a short distance from the object, I wonder if the distance will affect the perturbation. If so, what's the distance range within which the perturbation is robust. \nIs such perturbation able to attack detection algorithm, such as YOLO and Fastrcnn?", "This comment isn't a complete review and I won't make an accept / reject recommendation.\n\nThis comment is just a request to improve the description of the difference between this work and the work of Kurakin et al 2016.\n\n1)\nThis submission says that the method from Kurakin et al \"only works in carefully controlled environments.\" This is a direct contradiction of Kurakin et al, which states that the method works \"without careful control of lighting, camera angle, distance to the page, etc.\"\n\nThe discrepancy probably results from different definitions of what \"careful control\" means. The language used in the paper should be more precise and specific in order to avoid seeming contradictory. 
We used considerably less control of the photograph conditions than standard protocols for commercial studio / event photography (which use special lighting equipment and camera tripods) or lab photography for scientific experiments.\n\nIt's also worth mentioning that we did a successful live demonstration of the method on stage at GeekPwn 2016, where the lighting, viewpoint, etc. were considerably less controlled than the experiments in the original paper (bright stage lights in a dark room, paper held in a presenters' hands instead of lying on a flat surface, etc.)\n\n I would suggest rewording to something like \"Kurakin et al 2016 evaluated their method for approximately axis-aligned views in office lighting conditions and in stage lighting conditions at GeekPwn 2016. They used a hand-held camera to photograph the images from positions that are natural from a human user. This was controlled more than the work in the sense that the viewpoint and lighting were usually approximately the same but not high controlled, in the sense that the camera was handheld, the distance and angle were not measured, and no attempt was made to control the lighting to be more standard than normal office conditions.\"\n\n2)\nWithout new experiments, it cannot be said that the method from Kurakin et al \"only works\" in those settings. It is accurate to say that Kurakin et al \"only evaluated\" their method in those conditions. Unless you've repeated our experiments with more diversity in viewpoint, you can't claim positively that the method doesn't work.\n\n3)\nThis paper says \"When generated with standard methods, these examples do not consistently fool a classifier in the physical world due to viewpoint shifts, camera noise, and other natural transformations.\" This is not quite true; Figure 6 of Kurakin et al shows that some of these transformations easily destroy adversarial examples while others have smaller effects." ]
[ -1, 5, 6, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, 4, 4, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "HymvZv37G", "iclr_2018_BJDH5M-AW", "iclr_2018_BJDH5M-AW", "iclr_2018_BJDH5M-AW", "iclr_2018_BJDH5M-AW", "H1FaG8FgM", "SyRDNVqxz", "H1nifI9eG", "rk4nKr_0b", "r1bnlcZMf", "iclr_2018_BJDH5M-AW", "B1e1yJLCb", "B1p3Q4_Rb", "rk4nKr_0b", "Bk-AZjY0W", "HJb4TgKAb", "iclr_2018_BJDH5M-AW", "Byp5w7OAW", "iclr_2018_BJDH5M-AW", "iclr_2018_BJDH5M-AW" ]
iclr_2018_S1680_1Rb
CAYLEYNETS: SPECTRAL GRAPH CNNS WITH COMPLEX RATIONAL FILTERS
The rise of graph-structured data such as social networks, regulatory networks, citation graphs, and functional brain networks, in combination with resounding success of deep learning in various applications, has brought the interest in generalizing deep learning models to non-Euclidean domains. In this paper, we introduce a new spectral domain convolutional architecture for deep learning on graphs. The core ingredient of our model is a new class of parametric rational complex functions (Cayley polynomials) allowing to efficiently compute spectral filters on graphs that specialize on frequency bands of interest. Our model generates rich spectral filters that are localized in space, scales linearly with the size of the input data for sparsely-connected graphs, and can handle different constructions of Laplacian operators. Extensive experimental results show the superior performance of our approach on spectral image classification, community detection, vertex classification and matrix completion tasks.
rejected-papers
This paper considers graph neural representations that use Cayley polynomials of the graph Laplacian as generators. These polynomials offer better frequency localization than Chebyshev polynomials. The authors illustrate the advantages of CayleyNets on several benchmarks, producing modest improvements. Reviewers were mixed in their assessment of this work, on the one hand highlighting the good quality of the presentation and the theoretical background, but on the other hand remaining skeptical about the significance of the experimental section. In particular, some concerns centered on the complexity analysis of Cayley filters versus the existing alternatives. Overall, the AC believes this paper is perhaps better suited to an audience more versed in signal processing than ICLR's, which may fail to appreciate the contributions.
train
[ "rJKsozOxM", "BJWjA85xz", "HyYek2cgM", "ByM26osmf", "H16j_wsQf", "HyPqsdq7G", "Skudq_cQf", "rJYQYd97G", "S1vcOrtmM", "S18cwVtQG", "Hks9aT_Qz", "Sy_LzeO7M", "BylmKBImM", "r1fAlRBQM", "BkO6CcXQM", "HyqGHjDzz", "rka7EjvGM", "B1E7loPGG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "public", "author", "public", "author", "public", "author", "public", "author", "public", "author", "author", "author", "author" ]
[ "The paper proposes a new filter for spectral analysis on graphs for graph CNNs. The filter is a rational function based on the Cayley transform. Unlike other popular variants, it is not strictly supported on a small graph neighborhood, but the paper proves an exponential-decay property on the norm of a filtered vertex indicator function.\n\nThe paper argues that Cayley filters allow better spectral localization than Chebyshev filters. While Chebyshev filters can be applied efficiently using a recursive method, evaluation of a Cayley filter of order r requires solving r linear system in dimension corresponding to the number of vertices, which is expensive. The paper proposes to stop after a small number of iterations of Jacobi's method to alleviate this problem.\n\nThe paper is clear and well written.\n\nThe proposed method seems of interest, although I find the experimental section only partly convincing. \n\nThere seems to be a tradeoff here. The paper demonstrates that CayleyNet achieves similar efficiency as ChebNet in multiple experiments while using smaller filter orders. Although using smaller filter orders (and better-localized filters) is an interesting property, it is not necessarily a key objective, especially as this seems to come at the cost of a significantly increased computational complexity. \n\nThe paper could help us understand this tradeoff better. For instance:\n- Middle and right panels of Figure 4 could use a more precise Y scale. How much slower is CayleyNet here with respect to the ChebNet?\n\n- Figure 4 mentions time corresponds to \"test times on batches of 100 samples\". Is this an average value over multiple 100-sample batches? What is the standard deviation? How do the training times compare?\n\n- MNIST accuracies are very similar (and near perfect) -- how did the training and testing time compare? Same for the MovieLens experiment. The improvement in performance is rather small, what is the corresponding computational cost?\n\n- CORA results are a bit confusing to me. The filter orders used here are very small, and the best amongst the values considered seems to be r=1. Is there a reason only such small values have been considered? Is this a fair evaluation of ChebNet which may possibly perform better with larger filter orders?\n\n- The paper could provide some insights as to why ChebNet is unable to work with unnormalized Laplacians while CayleyNet is (and why the ChebNet performance seems to get worse and worse as r increases?).\n", "Summary: This paper proposes a new graph-convolution architecture, based on Cayley transform of the matrix. Succinctly, if L denotes the Laplacian of a graph, this filter corresponds to an operator that is a low degree polynomial of C(L) = (hL - i)/(hL+i), where h is a scalar and i denotes sqrt(-1). The authors contend that such filters are interesting because they can 'zoom' into a part of the spectrum, depending on the choice of h, and that C(L) is always a rotation matrix with eigenvalues with magnitude 1. The authors propose to compute them using Jacobi iteration (using the diagonal as a preconditioner), and present experimental results.\n\nOpinion: Though the Cayley filters seem to have interesting properties, I find the authors theoretical and experimental justification insufficient to conclude that they offer sufficient advantage over existing methods. I list my major criticisms below:\n1. The comparison to Chebyshev filters (small degree polynomials in the Chebyshev basis) at several places is unconvincing. 
The results on CORA (Fig 5a) compare filters with the same order, though Cayley filters have twice the number of variables for the same order as Chebyshev filters. Similarly for Fig 1, order 3 Cayley should be compared to Order 6 Chebyshev (roughly).\n\n2. Since Chebyshev polynomials blow up exponentially when applied to values larger than 1, applying Chebyshev filters to unnormalized Laplacians (Fig 5b) is an unfair comparison.\n\n3. The authors basically apply Jacobi iteration (gradient descent using a diagonal preconditioner) to estimate the Cayley filters, and contend that a constant number of iterations of Jacobi are sufficient. This ignores the fact that their convergence rate scales quadratically in h and the max-degree of the graph. Moreover, this means that the Filter is effectively a low degree polynomial in (D^(-1)A)^K, where A is the adjacency matrix of the graph, and K is the number of Jacobi iterations. It's unclear how (or why) a choice of K might be good, or why does it make sense to throw away all powers of D^(-1)Af, even though we're computing all of them.\nAlso, note that this means a K-fold increase in the runtime for each evaluation of the network, compared to the Chebyshev filter.\n\nAmong the other experimental results, the synthetic results do clearly convey a significant advantage at least over Chebyshev filters with the same number of parameters. The CORA results (table 2) do convey a small but clear advantage. The MNIST result seems a tie, and the comparison for MovieLens doesn't make it obvious that the number of parameters is the same. \n\nOverall, this leads me to conclude that the paper presents insufficient justification to conclude that Cayley filters offer a significant advantage over existing work.", "This paper is on construction graph CNN using spectral techniques. The originality of this work is the use of Cayley polynomials to compute spectral filters on graphs, related to the work of Defferrard et al. (2016) and Monto et al. (2017) where Chebyshev filters were used. Theoretical and experimental results show the relevance of the Cayley polynomials as filters for graph CNN.\n\nThe paper is well written, and connections to related works are highlighted. We recommend the authors to talk about some future work.", "Changes in the new version:\nOn page 5, the complexity section was updated, and gives more emphasis on the trade-off in the choice of parameters.\nOn page 6, the section \"Chebyshev as a special case of Cayley\" was added.\nOn page 7, in the MNIST experiment, the number of Jacobi iterations and the run-times were made explicit.\nOn page 8, Table 2 was updated to compare ChebNets and CayleyNets based on the same number of real coefficients.\nOn page 8, in \"Citation network\", the nonormalized Laplacian was replaced by the scaled nonormalized Laplacian, and additional experiments were added to compare ChebNets and CayleyNets with the same number of parameters. Figure 5 was thus updated.\nOn page 9, in \"Recommender system\", the number of parameters in the ChebNet was updated to match the number of parameters in the CayleyNet. A comparison of the run-times was also added.\nOn page 14, a computational complexity appendix was added, that better compares run-times of ChebNets and CayleyNets.\n", "As we already stated, CayleyNet (as ChebNet, which is our main term of comparison) is not meant to work with the extreme conditions depicted in [2].\n\nAlso, stating that CayleyNet \"cannot work for semi-supervised learning problem\" is misleading. 
CayleyNet outperforms all the competitors on the semi-supervised learning problem we considered, achieving an 87.9% of accuracy with order 1 and symmetric normalized laplacian compared with the 86.50% of GCN.", "According to your reply, can I conclude that CAYLEYNETS cannot work for semi-supervised learning studied in [2]?\nAlso, with more labeled data in CORA, the improvement is still marginal compared with GCN in [2] as shown in your experiment.", "The order r is an architecture parameter independent of the input size, hence r=O(1). Typically, it's a small number 1-10. Therefore, it is just a constant in the complexity. ", "As in your reply, I quote \" it’s computation amounts to multiplying the signal by the Laplacian matrix r times...If the graph is sparsely connected with O(n) edges, this costs O(n) operations. \"\n\nI am confused now as you said the computational complexity is O(n), which has no relationship with the order r you choose? How can that be? How do you define the sparsity of a graph? If a graph is not that \"sparse\", what is the computational complexity?\n\n", "This statement is (again) wrong. We highly recommend the anonymous commenter to familiarize him/herself with previous works on spectral graph CNNs and also carefully read our paper. \n\nUnlike the first paper on spectral CNNs by Bruna et al where explicit eigendecomposition of the Laplacian is performed, the main point of the follow up works (Defferrard et al, Kipf&Welling and our present paper, which is based on the former) is to avoid this expensive operation altogether. \n\nThe way to do it is to implement a filter as a function of the Laplacian f(Delta)*x applied to the graph signal x. This is equivalent to applying f to the Laplacian eigenvalues, but does not require their explicit computation if f can be expressed in terms of simple matrix operations (addition, multiplication, and scaling). \n\nDefferrard et al used polynomial functions as f. In this case, the resulting filter is an FIR in signal processing terms and it’s computation amounts to multiplying the signal by the Laplacian matrix r times, where r is the polynomial degree. If the graph is sparsely connected with O(n) edges, this costs O(n) operations. \n\nIn our paper, we use rational functions as f, which are IIR filters. The whole point of our paper is how to compute such functions efficiently with linear complexity. The use of Jacobi iterations again brings the computation to a series of Laplacian multiplications which has O(n) complexity. ", "As in Section 3, eigenvalues of Graph Laplacian is needed. However, computing the eigenvalues leads to additional operation with complexity O(N^3).", "As already presented in Section 3 and Fig. 4 center-right, the computational complexity of the proposed method scales linearly wrt number of vertices available in the given domain for sparse graphs (and thus complexity is a O(n) and not a O(n^3)). Furthermore, stating that the method “needs Laplacian matrix polynomials with order 12” is erroneous and misleading. The number of Jacobi iterations required by CayleyNet is problem dependent and as we shown in our community detection experiment (Fig.4 left) even a small amount of iterations (e.g. 1-5) may be sufficient for significantly outperforming the performance achieved by ChebNet.", "This paper needs Laplacian matrix polynomials with order 12, which means the computational complexity is O(12N^3), with N being the number of graph nodes. 
The computational complexity is overwhelming even considering a small graph with N=10,000 nodes. With such high computational complexity, the performance improvement compared with other methods is marginal.", "We thank the reader for the provided comment.\n\nWe are well aware of the data splitting outlined in [1-4]. However, we stress that our work is not aimed at outperforming the mentioned works in extreme semi-supervised learning problems. In particular, the solution outlined in [2] by Kipf & Welling is nothing more than a pure simplification of the architecture presented by Defferrard et al. at NIPS 2016, which dramatically reduces the amount of required parameters in order to cope with the small amount of available training samples* (140 in the standard splitting outlined in [1-4] for the CORA dataset). Since, however, the main term of comparison is represented in our case by ChebNet, we decided to extend the amount of available data in order to avoid overfitting and thus exploit more complicated but powerful models. This shows in particular how, if a sufficient amount of training samples is provided, the approach outlined in [2] appears as just a suboptimal solution able to achieve non-optimal filters because of the simplicity of the defined operator (in the end, convolution in [2] is nothing more than a weighted sum of the features available in the one-hop neighborhood that by no means really exploits the local topology of the provided domain**). Please see in this sense Figure 5 left of our paper, where we show how both ChebNet and CayleyNet are able to outperform GCN by respectively 0.6% and 1.4% (i.e. not a marginal improvement). \n\nWe will provide our data splitting if the paper is accepted in order to provide a valuable term of comparison to the community.\n\n* this is in general the main reason why the authors in [2] don't compare ChebNet with the proposed architecture.\n** To see this, let's consider the MNIST classification problem outlined in ChebNet and our paper. If instead of ChebNet/MoNet/CayleyNet, GCN were used, all the features produced at the various convolutional layers would be nothing more than just simple averages of the original grey levels available in the provided pixels, i.e. not meaningful representations of the local image behavior. GCN appears in this sense as a good solution whenever a small amount of training data is available (and multiple input features), however richer solutions can be exploited to better discriminate interesting local patterns if a sufficient amount of labeled samples can be provided (i.e. as in the proposed CORA experiment).\n", "It is well known that for the graph vertex classification problem, standard data splittings for the PubMed, Cora, and Citeseer data sets are used as in [1-4] listed below. However, the authors only chose the Cora data set, which is the smallest one of the three. Also, the authors use a data splitting method with much more labeled data. It makes the experiment unconvincing and difficult to compare performance with other papers. \n\nAlso, as the authors show in the paper, their improvement is marginal even using the data splitting method defined by themselves, compared with that in [2]. Please note that [2] only uses a filter of the adjacency matrix with order 1; however, this paper needs Laplacian matrix polynomials with order 12. The computational complexity is much larger than that of [2], and in this case, the marginal performance improvement could be disregarded. \n\n1. 
Revisiting semi-supervised learning with graph embeddings. ICLR 2016.\n\n2. Semi-supervised classification with graph convolutional networks.\nICLR 2017.\n\n3. Convolutional neural networks ¨on graphs with fast localized spectral filtering, NIPS2016\n\n4 Geometric deep learning on graphs and manifolds using mixture model CNNs. CVPR 2017.\n", "We thank anonymous Reviewer2 for the work he/she provided. We present here various insights on the highlighted points.\n\nWe first note that smaller filter orders are not the key objective. They lead to more regular filter spaces, which ultimately leads to less overfitting, and better accuracy, which is the key goal. This is evident in the experimental results.\n\n\n1. Community dataset test/training times \n\nWe agree with the reviewer that the scale proposed on the y axis of Fig. 4 is not detailed enough, which will be fixed in the revision. All test times have been computed running 30 times our models with batches of 100 samples and averaging times across batches (thus the reported times should be considered as mean test times per batch). \n\nIn order to provide a better understanding, we attach here 2 anonymous links to figures showing ratios between times obtained with CayleyNet and ChebNet (i.e. test_time_CayleyNet / test_time_ChebNet): \n\nhttps://ibb.co/bD4xNG\nhttps://ibb.co/jbNdUw\n\nWe will add these plots as supplementary material in our final revisions.\n\n\nStandard deviations have been avoided in our analysis since do not add much to what already presented in Fig.4. For completeness, we attach here 2 links showing mean test times and corresponding standard deviations:\n\nhttps://ibb.co/jnODwb\nhttps://ibb.co/nezWhG\n\nFinally, training times have been avoided in the paper for reasons of space. In general they present a similar trend to test times: https://ibb.co/gUjE2G, https://ibb.co/bHAnNG, https://ibb.co/kbrYwb, https://ibb.co/kcYP2G. We will add training times in the final version of this work.\n\n\n2. CORA accuracies\n\nAs also requested by Reviewer 3, we further extended our analysis with additional orders. The best CayleyNet still outperform the competitor requiring at the same time a smaller amount of parameters (see point 1 and 2 of our response to Reviewer 3).\n\n\n3. MNIST/MovieLens performance and test times\n\nPerformance obtained over the MNIST dataset have been computed by means of 11 Jacobi iterations. Test time required by the proposed approach thus appears equal to 0.1776 +/- 0.06079 sec wrt the 0.0268 +/- 0.00841 sec required by ChebNet per batch (batch size = 100 images). We stress that MNIST digit classification just represents a toy example for ensuring the performance of our approach on a well known benchmark in standard conditions and should not be considered as a valuable example for proving the superior capabilities of the proposed spectral filters. \n\nFor what concern MovieLens, ChebNets with order 4 and 8 respectively require 0.0698 +/- 0.00275 sec and 0.0877 +/- 0.00362 sec at test time, CayleyNet with order 4 and 15 jacobi iterations requires instead 0.165 +/- 0.00332 sec. As presented to Reviewer 3, the only modest improvement obtained by CayleyNet on this dataset is due to the construction of the graph. \n\n\n4. ChebNet and unnormalized Laplacians\n\nChebyshev polynomials are only well defined in the interval [-1,-1], and plugging in values away from this interval leads an ill behaved system. 
Following the comments of Reviewer 3 we updated the paper to compare the two methods over the scaled version of the unnormalized laplace operator proposed by Defferrard et al.\nThe eigenvalues of the unnormalized laplacian \\Delta are bounded by max{d(u)+d(v):uv∈E} (where d(u) corresponds to the degree of node u and E is the edge set, doi.org/10.1080/03081088508817681). In ChebNet, Defferrard et al. proposed to divide the unnormalized laplace operator by the maximum eigenvalue (thus producing a contraction from the original laplacian). The Chebyshev polynomial basis is well defined on this normalized version of the Laplacian. However, a side effect of this normalization is that all of the ``macroscopic frequencies’’ get squeezed near zero, and thus Chebyshev polynomials cannot separate them. This phenomenon is avoided in CayleyNets, as explained in the “Cayley vs Chebyshev” section. In the updated comparison, CayleyNet still achieves better performance while requiring a lower amount of parameters.\n\nRegarding the last remark of the reviewer, the performance gets worse as the filter order increases due to overfitting.", "\n3. Jacobi approximate inversion: We agree that the choice of parameters and their tradeoffs deserves a more detailed discussion. First of all, please note that for K=0, we obtain standard polynomials expressed in the basis (x-i)^j, which makes ChebNet a particular case of our method. \n\nFor K>0, the iterations of the Jacobi method for matrix inversion can indeed be interpreted as a polynomial of degree K in (D^(-1)A)^K. Please note that the coefficients of this polynomial are fixed and not learnable, otherwise we would have too many parameters prone to overfitting. \n\nMost importantly, we argue that an accurate inversion of the matrix is not needed and thus use a fixed number K of Jacobi iterations. The reason is that the application of the approximate inverse to the input signal (\\tilde{y}in our notation) is then combined with learned coefficients, which “compensate”, as necessary, for the inversion inaccuracy. \nSuch behavior is well-documented in the literature in other contexts of model compression and accelerated convergence of iterative algorithms (see e.g. Gregor&LeCun ICML 2010 and numerous follow-up works); for example, learning sparse signal coding by unrolling iterative shrinkage algorithms (FISTA) into a neural network, where each layer emulates an iteration of the original algorithm but has extra learnable parameters. It is shown that FISTA networks with just a few layers outperform hundreds or thousands of iterations of the original algorithm thanks to the learnable parameters. We believe that a more careful analysis of this phenomenon is an interesting future work direction. \n\nReviewer 3 rightfully noted that the convergence rate of the Jacobi inversion depends on h. Indeed, there is a trade-off between the value of h, and the accuracy of the approximate inversion. Since h is a learnable parameter, ultimately, the training finds the right balance between the spectral zoom amount and the inversion accuracy. Moreover, as Reviewer 3 noted, the accuracy of the Jacobi inversion also depends on the max degree of the graph. This means that different graphs may require different h and number K of Jacobi iterations. However, once the graph is fixed, the max degree is fixed, so the number of iterations corresponding to the graph is also fixed. Naturally, different problems based on different graphs, require different numbers of iterations. 
We do not ignore this fact, but on the contrary, report it in Proposition 1. The importance of this proposition is to set a uniform bound on the convergence rate, that only depends on the graph. As a result, the number of iterations can be globally fixed for each graph, while as noted above, the training of h is underlied by a trade-off between accuracy and spectral zoom. We will make these facts more explicit in the paper.\n\n4. Experimental results: As shown by our toy experiment (communities graph), the advantage of our method is especially pronounced when the spectrum of the graph Laplacian has clustered eigenvalues (in particular, this is the case of graphs with strong communities, where there are multiple near-zero eigenvalues). The non-linear transformation of the eigenvalues by means of the Cayley transform and the spectral zoom property allow to achieve filters that better separate these frequencies. We thus expect our method to be especially advantageous in the analysis of social networks where strong communities are typically observed. \n\nThe citation network Cora is well known to have strong community structure, hence the pronounced advantage of our method. \n\nThe fact that experiments on MNIST does not show a significant advantage of CayleyNet is that being planar regular graph (2D grid), there is no clustering of eigenvalues. We regard MNIST as a mere “sanity check”, to ensure that in the simple Euclidean setting our approach is as good as classical CNN (LeNet). \n\nA slightly different situations appear in the MovieLens experiments. While we would typically expect similar users/items to show similar scores inside the provided communities, this is not exactly true for the MovieLens dataset. We followed [Monti et al. 2017 and Rao et al. 2015] constructing the users/items graphs as 10-NN graphs in the space of user/items features (e.g. age, gender, occupation of users; and gender, year, etc. of the movies). The macro-communities in the users/items graphs built in this way do not necessarily coincides with clusters of similar values in the score matrix. \n\nA better alternative, which we did not explore in this paper, would be to construct the graphs from the data. Even better (a future direction mentioned in the response to Reviewer1) would be to learn the graph (or more specifically, the metric defined in this case on the feature space of the users, which determines the edge weights) together with the filters. We believe that this will allow to construct graphs where community structures are consistent with the data and thus result in a better performance. ", "We thank anonymous Reviewer3 for thorough and insightful comments. We have run extensive experiments requested by the reviewer and provide these results as well as our detailed response to his/her main concerns below. We will revise the paper to address these issues and our responses to them. \n\n\n1. Number of coefficients: We agree that, since Cayley filters use complex coefficients while Chebyshev filters use real coefficients, in principle complex coefficients should be counted as twice more parameters. 
There are two different ways to make the number of parameters fairly comparable: (i) compare Cayley filters of order r vs Chebyshev filters of order 2*r (as suggested by Reviewer3), or (ii) use real coefficients in Cayley filters (as we note in our paper on p.4, in paragraph preceding Fig 1).\n\nWe produce these two comparisons below, using the Cora dataset with symmetric normalized Laplacian (#params = #real coefficients; 1 complex coefficient is counted as 2 parameters):\n\n(i) Cayley filter with complex coefficients, twice lower order than Chebyshev filter:\n\nChebNet order r=2 (#params = 69136) - Accuracy = 86.607986 +/- 0.65477967 (reported in paper)\nChebNet order r=4 (#params = 115216) - Accuracy = 85.203995 +/- 0.83185506\nChebNet order r=6 (#params = 161296) - Accuracy = 84.487999 +/- 0.83249897\n\nComplex CayleyNet order r=1 (#params = 69136) - Accuracy = 87.9 +/- 0.97508276 (reported in paper)\nComplex CayleyNet order r=2 (#params = 115216) - Accuracy = 86.9 +/- 0.28902602 (reported in paper)\nComplex CayleyNet order r=3 (#params = 161296) - Accuracy = 87.1 +/- 0.30883133 (reported in paper)\n\n\n(ii) Cayley filter using real coefficients, same order as Chebyshev filter:\n\nReal CayleyNet order r=1 (#params = 46096) - Accuracy = 87.311989 +/- 0.50936872\nReal CayleyNet order r=2 (#params = 69136) - Accuracy = 86.863991 +/- 0.57611096\nReal CayleyNet order r=3 (#params = 92176) - Accuracy = 86.147995 +/- 0.56823856\nReal CayleyNet order r=4 (#params = 115216) - Accuracy = 86.395996 +/- 0.62544805\nReal CayleyNet order r=5 (#params = 138256) - Accuracy = 85.251991 +/- 0.86330509\nReal CayleyNet order r=6 (#params = 161296) - Accuracy = 85.255997 +/- 0.76737374\n\nIn both cases (i) and (ii), CayleyNet outperforms ChebNet (higher accuracy) for the same number of parameters. \n\n\nFurthermore, for the MovieLens experiment, the CayleyNet outperforms ChebNet (lower RMS error) when using lower polynomial order:\n\nComplex CayleyNet order r=4 (#params = 23126) - RMSE = 0.922 (reported in paper)\nChebNet order r=8 (#params = 23124) - RMSE = 0.925\n\n\nWe will include these results and a more detailed discussion regarding a fair comparison of the number of parameters. \n\n\n2. Normalized vs Unnormalized Laplacian: We agree that poor performance of ChebNet in case of unnormalized Laplacian can be attributed to large eigenvalues. As requested by Reviewer3, we reproduce this experiment using scaled unnormalized Laplacian (2*Delta/lambda_max - I) to ensure the magnitude of its eigenvalues is <= 1, thus avoiding the numerical instability raised by the reviewer. We note that in our approach, no such scaling is necessary, since the eigenvalues of any Laplacian are mapped to the complex unit circle and thus automatically numerically stable. \n\nThe best performing models on Cora dataset with scaled unnormalized Laplacian are reported below:\n\nChebNet order r=7 (#params = 184336) - Accuracy = 87.232002 +/- 0.68511164\nCayleyNet order r=1 (#params = 69136) - Accuracy = 87.676003 +/- 0.13199957 \n\nThus, CayleyNet outperforms ChebNet (accuracy of 87.68% vs 87.23%) at the same time requiring significantly less parameters (69K vs 184K). We will update the results reported in the paper for the unnormalized Laplacian by this experiment. ", "We thank the Reviewer for a positive evaluation of our work. \n\nFuture work: One of the key issues in our method (and deep learning on graphs in general) is the assumption of a given graph. 
In many settings, such as recommender systems, the graph has to be estimated from the data/side information. Learning the graph together with the filters on the graph is the next logical step which we will address in future works. In particular, for graphs constructed in some feature space (e.g. demographic information of users in the recommender system examples), “learning the graph” boils down to learning a metric on the feature space, which in turn determines the graph edge weights. \n\nSecond, as we note in the response to Reviewer3, the behavior of our approximate matrix inversion is akin to “model compression”. In future work, we will analyze this phenomenon in light of previous results on learnable iterative algorithms." ]
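The complexity question debated in the comments above (whether applying an order-r spectral filter costs O(n) or O(N^3)) comes down to never forming powers of the Laplacian explicitly. The sketch below is illustrative only, not the authors' code; the ring-graph toy data and function names are assumptions. It shows that applying p(Δ)x by r repeated sparse matrix-vector products costs O(r·|E|), i.e. O(n) when the graph is sparsely connected.

```python
import numpy as np
import scipy.sparse as sp

def apply_polynomial_filter(L, x, coeffs):
    """Apply p(L) x = sum_j coeffs[j] * L^j x using only sparse mat-vecs.

    Cost is O(r * |E|) for a sparse Laplacian L with |E| nonzeros; L^j is
    never formed explicitly, so no O(n^3) eigendecomposition is needed.
    """
    out = coeffs[0] * x
    v = x
    for c in coeffs[1:]:
        v = L @ v              # one sparse matrix-vector product
        out = out + c * v
    return out

# Hypothetical toy graph: a ring on n nodes, so |E| = O(n).
n = 1000
rows = np.arange(n); cols = (rows + 1) % n
A = sp.coo_matrix((np.ones(n), (rows, cols)), shape=(n, n))
A = (A + A.T).tocsr()
A.data[:] = 1.0                                  # 0/1 adjacency
D = sp.diags(np.asarray(A.sum(axis=1)).ravel())
L = (D - A).tocsr()                              # unnormalized Laplacian
x = np.random.randn(n)                           # graph signal
y = apply_polynomial_filter(L, x, coeffs=np.random.randn(5))
```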
[ 6, 4, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 3, 3, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_S1680_1Rb", "iclr_2018_S1680_1Rb", "iclr_2018_S1680_1Rb", "iclr_2018_S1680_1Rb", "HyPqsdq7G", "BylmKBImM", "rJYQYd97G", "S1vcOrtmM", "S18cwVtQG", "Hks9aT_Qz", "Sy_LzeO7M", "iclr_2018_S1680_1Rb", "r1fAlRBQM", "iclr_2018_S1680_1Rb", "rJKsozOxM", "rka7EjvGM", "BJWjA85xz", "HyYek2cgM" ]
iclr_2018_SkaPsfZ0W
Network of Graph Convolutional Networks Trained on Random Walks
Graph Convolutional Networks (GCNs) are a recently proposed architecture which has had success in semi-supervised learning on graph-structured data. At the same time, unsupervised learning of graph embeddings has benefited from the information contained in random walks. In this paper we propose a model, Network of GCNs (N-GCN), which marries these two lines of work. At its core, N-GCN trains multiple instances of GCNs over node pairs discovered at different distances in random walks, and learns a combination of the instance outputs which optimizes the classification objective. Our experiments show that our proposed N-GCN model achieves state-of-the-art performance on all of the challenging node classification tasks we consider: Cora, Citeseer, Pubmed, and PPI. In addition, our proposed method has other desirable properties, including generalization to recently proposed semi-supervised learning methods such as GraphSAGE, allowing us to propose N-SAGE, and resilience to adversarial input perturbations.
rejected-papers
This paper proposes a multiscale variant of Graph Convolutional Networks (GCN), obtained by combining separate GCN modules using powers of the normalized adjacency as generators. The model is tested on several semi-supervised node classification tasks, obtaining excellent numerical performance. Reviewers acknowledged the good empirical performance of the model, but all raised the issue of limited novelty relative to the growing body of literature on graph neural networks. In particular, they missed an analysis that compares random-walk powers to other multiscale approaches and justifies their performance in the context of semi-supervised learning. Overall, the AC believes this is a good paper, but it can be significantly stronger with an extra iteration that addresses these limitations.
train
[ "BJququIlf", "H1kTclOlf", "H12U-wYgG", "rJGeK9pXG", "HkZAY5T7G", "HJLHwTdQM", "SkeGD2hff", "HJeR42nGG", "BkNk82nMz", "BJkjr22MG", "H1rQ5HmgG", "BJfkzLgxf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "public", "author", "author", "author", "author", "author", "public" ]
[ "The paper proposes a novel graph convolutional network in which a variety of random walk steps are involved with multiple GCNs.\n\nThe basic idea, introducing long rage dependecy, would be interesting. Robustness for the feature remove is also interesting.\n\nThe validation set would be important for the proposed method, but for creating larger validation set, labeled training set would become small. How the good balance of training-and-validation can be determined?\n\nDiscussing choice of the degree would be informative. In introducing many degrees (GCNs) for small labeled nodes semi-supervised setting seems to cause over-fitting.", "In this work a new network of GCNs is proposed. Different GCNs utilize different powers of the transition matrix to capture varying neighborhoods in a graph. As an aggregation mechanism of the GCN modules two approaches are considered: a fully connected layer on top of stacked features and attention mechanism that uses a scalar weight per GCN. The later allows for better interpretability of the effects of varying degree of neighborhoods in a graph.\n\nProposed approach, as authors noted themselves, is quite similar to DCNN (Atwood and Towsley, 2016) and becomes equivalent if the combined GCNs have one layer each. While comparison to vanilla GCN is quite extensive, there is no comparison to DCNN at all. I would be curious to see at least portion of the experiments of the DCNN paper with the proposed approach, where the importance of number of GCN layers is addressed. DCNN did well on Cora and Pubmed when more training samples were used. It also was tested on graph classification datasets, but the results were not as good for some of the datasets. I think that comparison to DCNN is important to justify the importance of using multilayer GCN modules.\n\nSome questions and concerns:\n- I could not quite figure out how many layers did each GCN have in the experiments and how impactful is this parameter \n- Why is it necessary to replicate GCNs for each of the transition matrix powers? In section 4.3 it is mentioned that replication factors r = 1 and r = 4 were used, but it is not clear from Table 2 what are the results for respective r.\n- Early stopping implementation seems a bit too intense. \"We invoke many runs over all datasets\" - how many? Mean and standard deviation are reported for top 3 performers, which is not enough to get a sense of standard deviation and mean. Kipf and Welling (2017) report results over 100 runs without selecting top performers if I understood correctly their setup. Could you please report mean and standard deviation of all the runs? Given relatively small performance improvement (comparatively to GCN), more than 3 (selected) runs are needed for comparison.\n- I liked the attention idea and its interpretation in Fig. 2. Could you please add the error bars for the attention weights. It is interesting to see them shifting towards higher powers of the transition matrix, but also it is important to know if this phenomena is statistically significant.\n- Following up on the previous item - did you try not including self connections when computing transition matrix powers? This way the effect of different degrees of neighborhoods in a graph could be understood better. 
When self-connections are present, each subsequent transition matrix power contains neighborhoods of lower degrees and interpretation becomes not as apparent.\n\nMinor comments:\n- Understanding of this paper quite heavily relies on the reader knowing Kipf and Welling (2017) paper. Particularly, the comment about approximations derived by Kipf and Welling (2017) in Section 3.3 and how directed graph was converted to undirected (Section 4.1) require a bit more details.\n- I am not quite sure why Section 2.3 is needed. Connection to graph embeddings is not given much attention in the paper later on (except t-SNE picture).\n- Typo in Fig. 1 caption - right and left are mixed up.\n- Typo in footnote on page 3.", "The paper presents a Network of Graph Convolutional Networks (NGCNs) that uses\nrandom walk statistics to extract information from near and distant neighbors\nin the graph.\n\nThe authors show that a 2-layer Graph Convolutional Network, with linear\nactivation and W0 as identity matrix, reduces to a one-step random walk.\nThey build on this notion to introduce the idea to make the GCN directly operate\non random walk statistics to better model information across distant nodes.\n\nGiven that it is not clear how many steps of random walk to use a-priori it is\nproposed to make a mixture of models whose outputs are combined by a\nsoftmax classifier, or by an attention based mixing (learning the mixing coefficients).\n\nI find that the comparison can be considered slightly unfair as NGCN has k-times\nthe number of GCN models in it. Did the authors compare with a deeper GCN, or\nsimply with a mixture of plain GCN using one-step random walk?\nThe datasets used for comparison are extremely simple, and I am glad that the\nauthors point out that this is a significant issue for benchmark driven research.\nHowever, doing calibration on a subset of the validation nodes via gradient\ndescent is not very clean as by doing it one implicitly uses those nodes for training.\nThe improvement of the calibrated model on 5 nodes per class (Table 3) seems\nto hint that this peeking into the validation is indeed happening.\n\nThe authors mention that feeding explicitly the information on distant nodes\nmakes learning easier and that otherwise such information it would be hard to\nextract from stacking several GCN layers. While this is true for the small datasets\nusually considered it is not clear at all whether this still holds when we will\nhave large scale graph benchmarks.\n\nExperiments are well conducted but lack a comparison with GraphSAGE and MoNet,\nwhich are the reference models for the selected benchmarks. A comparison would have made the contribution stronger in my opinion. Improvements in performance are minor\nexcept for decimated inputs setting reported in Table 3. In this last case though\nno statistics over multiple runs are shown.\n\nOverall I like the interpretation, even if a bit forced, of GCN as using one-step\nrandom walk statistics. The paper is clearly written.\nThe main issue I have with the approach is that it does not bring a very novel\nway to perform deep learning on graphs, but rather improves marginally upon\na well established one.\n", "We added experiments for PPI dataset (we downloaded it from the GraphSAGE paper).\n\nfrom SAGE authors:\nSAGE-LSTM gets 61.2\nSAGE [i.e. 
pooling] gets 60.0\n\nOur implementation of SAGE gets 59.8\nOur method (N-SAGE) gets 65.0\n\nResults are added to the table.", "We added experiments for PPI dataset (we downloaded it from the GraphSAGE paper).\n\nThe PPI dataset has about ~20 times more edges than our previously-largest dataset.\n\nfrom SAGE authors:\nSAGE-LSTM gets 61.2\nSAGE [i.e. pooling] gets 60.0\n\nOur implementation of SAGE gets 59.8\nOur method (N-SAGE) gets 65.0\n\nThis shows (unmodified) random walk indeed help increase performance. Results are added to the table.", "Thanks for adding this explicit comparison! It looks like N-SAGE can improve upon the GraphSAGE results by quite a bit.", "Thomas,\n\nWe now added experiments to GraphSAGE and also to Network of GraphSAGE. We only used one version of GraphSAGE, which is the mean pooling aggregation, as the authors of GraphSAGE mention that it performs on-par with their max-pooling aggregation model -- we did not try their LSTM aggregation.\n\n\nTLDR:\n\n* GraphSAGE performs better than GCN, when training data is very scarce (e.g. 5 or 10 labeled nodes per class).\n\n* GCN out-performs GraphSAGE with more training data (e.g. >= 20 labeled nodes per class).\n\n* Network of GraphSAGE (N-SAGE) is better than GraphSAGE in all scenarios.", "Thank you for your review! It made our work much better!\n\n* It is unfair to compare N-GCN which has k-times more parameters to GCN.\n\nWe tried deeper GCN and the results were worse. We also tried >16 hidden dimensions (e.g. 32, 64, 512), and the results were also worse. Potentially because these datasets over-fit, reaching 100% accuracy on training set in all cases.\nNonetheless, we tried mixture of experts on GCNs (i.e. K=1 but r>1), and it is better than K = r = 1, but not as good as ours using random walks. For example, look at appendix and scroll down, comparing every (K=1,r=4; i.e. mixture of GCN) with (K=4,r=1; ours), and you will find that ours is better in all cases, showing that random walks indeed help.\n\n\n* Calibration is not clean:\n\nYou are right. We got excited about the \"calibration\" paper. Now we removed calibration, as it deviates from our story, which gave us more room to experiment with GraphSAGE (SAGE) and DCNN, and show that we can build a Network of GraphSAGE (N-SAGE).\n\n\n* Does this hold for large datasets?\n\nThere are many benchmarks on the datasets we use, including at least a handful of concurrent submissions to ICLR. We said that we will tackle more datasets in \"future work\". We will do our best to do so by the end of the rebuttal cycle.\n\n\n* Experiments with GraphSAGE and MoNet?\n\nWe added experiments with GraphSAGE and DCNN. Our models still outperform. We plan to try-out MoNet but perhaps after adding another (larger) dataset.\n\n\n* Not much novelty?\n\nAs we added experiments for GraphSAGE (SAGE), we decided to wrap SAGE in a network and train it with random walks, showing that Network of SAGE (N-SAGE) is better than SAGE. We feel that this generalization makes our work novel enough, and we hope that you agree.\n", "Thank you for taking the time to review our work!\n\n* Balance on training and validation:\n\nWe re-use the splits created by Planetoid paper (including train, validate, test) and we do not control it in this paper.\n\n\n* Degree of GCNs:\n\nWe assume that you meant the \"capacity\" (e.g. number of parameters) of GCNs. 
We now conduct experiments in Appendix on GCNs when we give them more parameters, and we show that they perform worse than our models, showing that our methods are out-performing because of random walks, and not necessarily more parameters.\n\n", "Thank you for your review! It made our work much better!\n\n* Compare with DCNN:\nWe added experiments to DCNN, and clearly explained how they are a special case of ours in new Section 3.6. We outperform them in the \"standard\" setup that was used by Kipf and Planetoid (i.e. 20 nodes per class). However, DCNN is showing more power than GCN's with more training data (e.g. see table on Pubmed, up to 100 labeled nodes).\n\n\n* Layers of GCN?\nThanks! We now made it clear in writing. Our GCN and SAGE modules for both our models and baselines use 2 layers.\n\n\n* Why replication r > 1?\nWe added extensive evaluation in Appendix. More \"r\" helps, similar to \"ensemble of classifiers\" (e.g. mixture of experts). This seems to help on validation+test but not on train accuracy, as all models reach ~100% accuracy on training anyway.\n\n\n* Early stopping and \"many runs\" for validation.\nWe beleive that model selection we do is acceptable. We choose models based on *validation* accuracy. In fact, we now choose the top 1 model based on validation accuracy and report its test accuracy, which is the true practical setting. We do many runs (total == thousands, for all parameter sweeps) and put mean and standard deviation in appendix. Also, we now re-ran all experiments without early stopping.\n\n\n* Add error bars to attention.\nGood idea. Now done. Thank you!\n\n\n* Self-connections:\nWe add self connections, and already mentioned it in at least 2 places as we follow Kipf's setup.\n\n\n* Understanding paper requires knowing Kipf's work\nWe tried to explain what it means that approximations \"still valid\". Is it better now? we tried our best to make the paper stand on its own and will continue doing so before the camera ready (in hopes it gets accepted)\n\n\n* Section 2.3 not needed.\nWe feel that it gives the reader background of embeddings on adjacency VS embeddings using random walks. It also gives us defines \\mathcal{T} in terms of D and A (which might be known by many readers and they could skim that section).\n\n\n* Typos:\nWe fixed them. Thank you for pointing them out.\n", "Thomas,\n\nThanks for your kind words about our work!\n\nWe share your feelings about the challenges of assessing work that uses the benchmark splits, and we agree that testing on more datasets (e.g. graphs introduced in GraphSAGE) would further test if our model can generalize to other settings which hopefully do not suffer from the train VS validation size variance.\n\nWe were not aware of GraphSAGE at the time of our work (it is recent, to appear in NIPS). Nonetheless, it should be a one-line addition to our baseline (Kipf's GCN) and our model (NGCN), as it is just a layer-norm transformation (https://arxiv.org/abs/1607.06450).\n\nWe hope to add some additional experimental results during the rebuttal phase, as we are also quite interested in understanding the impact of newer models (e.g. GraphSAGE and/or mixture of CNNs) in the context of our proposed method. This should strengthen our work -- Thank you for the suggestion!", "Very interesting work!\n\nI very much appreciate that you pointed out a significant issue with the benchmark dataset splits for Cora/Citeseer/Pubmed that are now often being used to compare models for semi-supervised learning on graph-structured data. 
Following https://arxiv.org/abs/1609.02907, the setting is typically as follows (as you mentioned): a small number of labeled examples are used for training (typically 20 labeled nodes per class), whereas a large fixed-size split of 500 labeled nodes is used for validation / hyperparameter optimization. \n\nWhile the hyperparameter optimization procedure in https://arxiv.org/abs/1609.02907 was kept to a bare minimum (same hyperparameter choice across all three datasets, chosen from a very small grid search on Cora), it is indeed possible to easily \"cheat\" the benchmark by making use of the rich information provided by the validation set, as your results denoted by 'calibrated' (where you perform gradient descent on some of the model parameters based on validation loss) nicely demonstrate. I am a bit worried that this issue affects a number of recently proposed models that make use of this benchmark (some of the other concurrent submissions to ICLR2018 included), as it is hard to evaluate how much effort has been put into hyperparameter optimization.\n\nIt looks to me like your uncalibrated model (i.e. without gradient descent optimization based on validation loss) is unaffected by this and indicates that your proposed Network of GCNs indeed helps improve model performance. \n\nRecently, a number of improvements of the GCN model have been proposed, and I think it would make your results stronger if you compared to the most prominent ones that have been published lately: GraphSAGE (https://arxiv.org/abs/1706.02216) and mixture model CNNs (https://arxiv.org/abs/1611.08402). Arguably their contributions are orthogonal to yours, so ideally these model improvements could easily be combined. Nonetheless it would provide a clearer picture to see how different contributions can make this class of model more powerful / stable.\n\nIt would be interesting to see an evaluation of your model on at least one different type of dataset (such as one of the benchmark datasets introduced in GraphSAGE), where calibration on the validation set hopefully wouldn't have such a large impact.\n\n" ]
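As a rough illustration of the model discussed in the reviews and responses above, the sketch below runs one two-layer GCN module per power of the symmetrically normalized adjacency and mixes the per-module logits with softmax attention over scalar weights. The shapes, the random toy graph, the initialization, and the exact set of powers (1..K) are assumptions made for illustration; the authors' implementation may differ in these details.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def gcn_module(A_op, X, W0, W1):
    """Two-layer GCN module: A_op relu(A_op X W0) W1 (pre-softmax logits)."""
    return A_op @ relu(A_op @ X @ W0) @ W1

def ngcn_forward(A_hat, X, params, attention):
    """Network of GCNs: one GCN module per power of A_hat, mixed by softmax
    attention over one scalar weight per module."""
    mix = softmax(attention)
    A_pow = np.eye(A_hat.shape[0])
    out = 0.0
    for k, (W0, W1) in enumerate(params):
        A_pow = A_pow @ A_hat            # (k+1)-th power of the normalized adjacency
        out = out + mix[k] * gcn_module(A_pow, X, W0, W1)
    return out

# Hypothetical shapes: n nodes, d features, h hidden units, c classes, K = 3 modules.
n, d, h, c, K = 50, 16, 8, 4, 3
A = (np.random.rand(n, n) < 0.1).astype(float)
A = np.maximum(A, A.T); np.fill_diagonal(A, 0.0)
A_hat = A + np.eye(n)                        # add self-loops
deg = A_hat.sum(1)
A_hat = A_hat / np.sqrt(np.outer(deg, deg))  # symmetric normalization
X = np.random.randn(n, d)
params = [(0.1 * np.random.randn(d, h), 0.1 * np.random.randn(h, c)) for _ in range(K)]
logits = ngcn_forward(A_hat, X, params, attention=np.zeros(K))
```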
[ 5, 5, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 2, 4, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_SkaPsfZ0W", "iclr_2018_SkaPsfZ0W", "iclr_2018_SkaPsfZ0W", "H1kTclOlf", "H12U-wYgG", "SkeGD2hff", "BJfkzLgxf", "H12U-wYgG", "BJququIlf", "H1kTclOlf", "BJfkzLgxf", "iclr_2018_SkaPsfZ0W" ]
iclr_2018_SyBBgXWAZ
Optimal transport maps for distribution preserving operations on latent spaces of Generative Models
Generative models such as Variational Auto Encoders (VAEs) and Generative Adversarial Networks (GANs) are typically trained for a fixed prior distribution in the latent space, such as uniform or Gaussian. After a trained model is obtained, one can sample the Generator in various forms for exploration and understanding, such as interpolating between two samples, sampling in the vicinity of a sample or exploring differences between a pair of samples applied to a third sample. In this paper, we show that the latent space operations used in the literature so far induce a distribution mismatch between the resulting outputs and the prior distribution the model was trained on. To address this, we propose to use distribution matching transport maps to ensure that such latent space operations preserve the prior distribution, while minimally modifying the original operation. Our experimental results validate that the proposed operations give higher quality samples compared to the original operations.
rejected-papers
This paper presents a simple recipe for manipulating the latent space of generative models in such a way as to minimize the mismatch between the prior distribution and that of the manipulated latent space. Manipulations such as linear interpolation are commonplace in the literature, and this work will be helpful in improving assessment on that front. Reviewers found this paper interesting, yet unpolished and incomplete. In subsequent iterations the paper has significantly improved on those fronts; however, the AC believes an extra iteration will make this work even more solid. Thus, unfortunately, this paper cannot be accepted at this time.
train
[ "HyBft3dgM", "SJuG7tqxz", "SJ13MSaxf", "Bylsiz3QG", "H1EeeKM7G", "Bk6VytfXz", "S1jQT_G7z" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "Authors note that models may be trained for a certain distribution (e.g. uniform or Gaussian) but then \"used\" by interpolating or jittering known examples, which has a different distribution. While the authors are clear about the fact that this is a mismatch, I did not find it well-motivated why it was \"the right thing to do\" to match the training prior, given that the training prior is potentially not at all representative or relevant. The fact that a Gaussian/prior distribution is used in the first place seems like a matter of convenience rather than it being the \"right\" distribution for the problem goals, and that makes it less clear that it's important to match this \"convenience\" distribution. The key issue I had throughout is \"what is the real-world problem metric or evaluation criteria and how does this proposal directly help\"?\n\nFor example, authors cover the usual story that random Gaussian examples lie on a thin sphere shell in high-d space, and thus interpolation of those examples will like on a thin shell of slightly less radius. In contrast, the Uniform distribution on a hypercube [-1,1]^D in D dimensions \"looks\" like a sharp-pointy star with 2^D sharp points and all the mass in those 2^D corners. But the key question is, what are these examples being used for, and what are the trade-offs between interpolation (which tends to be fairly safe) and extrapolation of the given examples?\n\nThis is echoed in the experiments, which I found unsatsifactory for the same key issue: \"What is the criteria for “higher-quality interpolated samples”? in the examples they give, it seems to be the sharpness of the images. Is that realistic/relevant? These are pretty images, but the evaluation criteria is unclear.\n\n\n", "This paper is concerned with the mismatch between the input distribution used for training and interpolated input. It extends the discussion on this phenomenon and the correction method proposed by White (2016), and proposes an optimal transport-based approach, which essentially makes use of the trick of change of variables. The discussion of the phenomenon is interesting, and the proposed method seems well motivated and useful. There are a number of errors or inconsistencies in the paper, and the experiments results, compared to those given by SLERP, see rather weak. My big concern about the paper is that it seems to be written in a rush and needs a lot of improvement before being published. Below please see more detailed comments.\n\n- In Introduction, the authors claim that \"This is problematic, since the generator G was trained on a fixed prior and expects to see inputs with statistics consistent with that distribution.\" Here the learned generative network might still apply even if the input distribution changes (e.g., see the covariate shift setting); should one claim that the support of the test input distribution may not be contained in the support of the input distribution for training? Is there any previous result supporting this? \n- Moreover, I am wondering whether Sections 2.2 and 2.3 can be simplified or improved--the underlying idea seems intuitive, but some of the statements seem somewhat confusing. For instance, what does equation (6) mean?\n- Note that a parenthesis is missing in line 3 below (4). In (6), the dot should follow the equation.\n- Line 1 of page 7: here it would be nice to make it clear what p_{y|x} means. 
How did you obtain values of f(x) from this conditional distribution?\n- Theorem 2: here does one assume that F_Y is invertible? (Maybe this is not necessary according to the definition of F_Y^{[-1]}...)\n- Line 4 above Section 4.2: the sentence is not complete.\n- Section 4.2: It seems that Figure 3 appears in the main text earlier than Figure 2. Please pay attention to the organization.\n- Line 3, page 10: \"slightly different, however...\"\n- Line 3 below Figure 2: I failed to see \"a slight loss in detain for the SLERP version.\" Perhaps the authors could elaborate on it?\n- The paragraph above Figure 3 is not complete.", "The authors demonstrate experimentally a problem with the way common latent space operations such as linear interpolation are performed for GANs and VAEs. They propose a solution based on matching distributions using optimal transport. Quite heavy machinery to solve a fairly simple problem, but their approach is practical and effective experimentally (though the gain over the simple SLERP heuristic is often marginal). The problem they describe (and so the solution) deserves to be more widely known.\n\nMajor comments:\n\nThe paper is quite verbose, probably unnecessarily so. Firstly, the authors devote over 2 pages to examples that distribution mismatches can arise in synthetic cases (section 2). This point is well made by a single example (e.g. section 2.2) and the interesting part is that this is also an issue in practice (experimental section). Secondly, the authors spend a lot of space on the precise derivation of the optimal transport map for the uniform distribution. The fact that the optimal transport computation decomposes across dimensions for pointwise operations is very relevant, and the matching of CDFs, but I think a lot of the mathematical detail could be relegated to an appendix, especially the detailed derivation of the particular CDFs.\n\nMinor comments:\n\nIt seems worth highlighting that in practice, for the common case of a Gaussian, the proposed method for linear interpolation is just a very simple procedure that might be called \"projected linear interpolation\", where the generated vector is multiplied by a constant. All the optimal transport theory is nice, but it's helpful to know that this is simple to apply in practice.\n\nMight I suggest a very simple approach to fixing the distribution mismatch issue? Train with a spherical uniform prior. When interpolating, project the linear interpolation back to the sphere. This matches distribution, and has the attractive property that the entire geodesic between two points lies in a region with typical probability density. This would also work for vicinity sampling.\n\nIn section 1, overfitting concerns seem like a strange way to motivate the desire for smoothness. Overfitting is relatively easy to compensate for, and investigating the latent space is interesting regardless.\n\nWhen discussing sampling from VAEs as opposed to GANs, it would be good to mention that one has to sample from p(x | z) not just p(z).\n\nLots of math typos such as t - 1 should be 1 - t in (2), \"V times a times r\" instead of \"Var\" in (3) and \"s times i times n\" instead of \"sin\", etc, sqrt(1) * 2 instead of sqrt(12), inconsistent bolding of vectors. Also strange use of blackboard bold Z to mean a vector of random variables instead of the integers.\n\nCould cite an existing source for the fact that most mass for a Gaussian is concentrated on a thin shell (section 2.2), e.g. 
David MacKay Information Theory, Inference and Learning Algorithms.\n\nAt the end of section 2.4, a plot of the final 1D-to-1D optimal transport function (for a few different values of t) for the uniform case would be incredibly helpful.\n\nSection 3 should be a subsection of section 2.\n\nFor both SLERP and the proposed method, there's quite a sudden change around the midpoint of the interpolation in Figure 2. It would be interesting to plot more points around the midpoint to see the transition in more detail. (A small inkling that samples from the proposed approach might change fastest qualitatively near the midpoint of the interpolation perhaps maybe be seen in Figure 1, since the angle is changing fastest there??)\n\n", "Dear Reviewers, thanks again for your feedback and happy new year! \n\nSince two reviewers felt the experiments could be stronger, we have added a new revision with additional quantitative experiments (Section 3.3. and Table 2.) which compare the interpolation operations using Inception scores. These results mirror what was qualitatively observed in Section 3.2 -- namely that when compared with the original models, the linear interpolation gives a significant quality degradation (up to 29% lower Inception scores), while our matched operations do not degrade the quality (less than 1% observed difference in scores).\n\nRegarding individual comments we refer to the individual responses previously posted. All other Figures, Tables and Sections referenced there have the same numbers as in the previous revised edition, so you only need to look at the latest revision.", "\nThank you for your review. \nWe discuss your raised concerns and hope you reconsider the rating.\n\n\"It is not well-motivated why it is 'the right thing to do' to match the training prior, given that the training prior is potentially not at all representative or relevant...\" \"... [using the prior] seems like a matter of convenience ...\"\n\nWhile true that the specific prior chosen is a matter of convenience, after it has been chosen it is *the prior that the model is trained for*. This is the standard practice when training GANs, so our point is that after you train your model you need to respect the prior you chose. So then you might say that the \"wrong\" prior was chosen, but it is well known that any distribution (in principle) can be sampled from via a mapping G applied to samples of a fixed (e.g. uniform) distribution z. See Multivariate Inverse Transform Sampling (e.g. slide 24 in https://www.slac.stanford.edu/slac/sass/talks/MonteCarloSASS.pdf ).\n\n\n\"...what is the real-world problem metric or evaluation criteria and how does this proposal directly help?\"\n\nThe goal of this work is to improve upon how generative models such as GANs are visualized and explored when working with operations on samples. Too see why this is relevant in Section 1.1 (revised edition) we mention eight papers (out of many more) in the recent literature which use such operations to explore their models. A 'real-world' use case hinges on real-world use cases of generative models, but just to give an example you could imagine an application that allows a user to 'navigate' the latent space of a generated model to synthesize a new example (say logo/face/animated character) for use in some real world application. 
Such exploration of the model needs to allow for various operations to adjust the synthesized samples.\n\n\nRegarding the 'usual thin sphere story' we note that the radius difference is quite significant, see Figure 2 (revised edition) which shows the radius distribution for the latent spaces typically used in the literature. Our approach completely sidesteps the issue.\n\n\nFor the experiments, we have added more examples of latent space operations and a discussion on the differences. A key property of our proposed approach is that it is 'safe': if you repeatedly look at some output of any operation (say e.g. midpoint of the matched interpolation), it will have exactly the same distribution as random samples from the model. Hence no matter what kind of image quality assessment you would use, it would be the (statistically) the same as for samples from the model without any operations.", "\nThanks for the feedback! \n\nWhile sticking to the main story we have significantly polished the paper. \nWe have improved discussion of the experiments, acknowledging that there is not a really noticeable difference between the SLERP heuristic and our matched interpolation in practice. This is perhaps not so surprising since SLERP does tend to match the biggest distribution difference (the norm mismatch) quite OK in practice (see Fig. 2 revised paper). \nNonetheless, our proposed framework has many benefits which we have also better highlighted in the paper:\n\t- it gives a new and well grounded perspective on how to do operations in the latent space of distributions\n\t- it is straightforward to implement, especially for a Gaussian prior (see Tab. 1 revised paper).\n - it generalizes to almost any operation you can think of, not just interpolation (see e.g. random walk in Fig. 11 (revised paper)).\n\nRegarding specific comments:\n\n- while the trained model -might- apply also for a different distribution, for the linear interpolation we typically see a clear difference. Note we do not claim that the supports of the distributions do not overlap - we only claim this for the distribution of the norms.\n\n- we significantly simplified the explanation and motivation of Sec 2.1-2.2 (old version), removing the synthetic example (including eq (6)) and better focus on the (more relevant in practice) norm distribution difference - with detailed calculations moved to appendix. The subsections are merged into the intro of Sec 2 in the revised edition.\nThese changes were also in line with suggestions from AnonReviewer3 on simplifying the paper.\n\n- p_{y|x} has been clarified in the text, it was referring to f(x) being a random variable where f(x) is drawn from the conditional distribution over y given a fixed x. If this is unclear/confusing in our notation, we can also instead just cite the fact that KP is a relaxation of MP.\n\n- Theorem 2: while the derivations would be easier if F_Y were invertible, it is not needed. F_Y is always monotonic, and F_Y^{[-1]} denotes the pseudo-inverse (hence the bracket [-1] ). See https://en.wikipedia.org/wiki/Cumulative_distribution_function#Inverse_distribution_function_(quantile_function) and (Santambrogio, 2015) for more details.\n\n- other typos/mistakes: should be fixed in revised version\n", "\nThanks for the feedback!\nWe followed your suggestion in the major comment and significantly polished and shortened the paper. 
\n\n - as suggested, we focus on explaining the effect distribution mismatch through the norm distribution, moving unnecessary details to the appendix.\n\n- we moved Lemma 1 to appendix as well as the detailed calculations of the examples, while summarizing the Gaussian case in Table 1.\n\n- We now mention how simple the formulas end up in the Gaussian case. This is because the operators we consider are additive in the samples, which means the results of the operations are still Gaussian - requiring only a multiplicative adjustment for matching the variance.\n\n- Working on the hypersphere is also a valid approach. This setting is very similar to our framework applied to the Gaussian prior when taking the prior dimension towards infinity - and the projection to the sphere can be interpreted as the transport map. Note however by fixing points to lie exactly on the sphere one introduces a dependency between the coordinates (which means you can't do distribution matching coordinate-wise), but this dependency is very small since an i.i.d. Gaussian will already be on the sphere w.h.p. We actually tried this setting at some point before, but found it (surprisingly) less stable for DCGAN, e.g. resulting in collapse for the icon dataset.\n- We adjust the motivation, as you mention interpolations and other operations are interesting on their own, and overfitting can be measured through other means.\n\n- on VAEs vs GANs, we are currently only discussing the sampling in the test setting - where one only samples from p(z) ( see Figure 5 in https://arxiv.org/pdf/1606.05908.pdf )\n\n- Typos/inconsistencies should now be fixed\n\n- We added plots showing the 1D-to-1D monotone transport maps for Uniform and Gaussian, see Figure 3 revised edition.\n\n- We will add a citation to David MacKay for the mass distribution of a Gaussian. However we didn't find a nice reference which gives the same result for arbitrary distributions with i.i.d components.\n\n- In Figure 15 in the appendix, we show example interpolations with twice as many points, so the transition is clearer. We note that the color may change sharply when interpolating between examples if the inbetween color is not 'realistic' for the data." ]
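For the Gaussian-prior case discussed above (the reviewer's "projected linear interpolation" and the authors' "multiplicative adjustment"), the matched operation reduces to a single rescaling of the linearly interpolated vector. The sketch below is a minimal illustration with a hypothetical latent dimension; the rescaling factor follows from the variance of a sum of independent Gaussians.

```python
import numpy as np

def linear_interp(z1, z2, t):
    return t * z1 + (1.0 - t) * z2

def matched_interp_gaussian(z1, z2, t):
    """Distribution-matched interpolation for an N(0, I) prior.

    t*z1 + (1-t)*z2 has per-coordinate variance t^2 + (1-t)^2 < 1 for t in (0, 1),
    so one multiplicative rescaling restores the prior variance
    (at the midpoint the factor is sqrt(2)).
    """
    return linear_interp(z1, z2, t) / np.sqrt(t ** 2 + (1.0 - t) ** 2)

# Sanity check on norm statistics (hypothetical latent dimension d = 100).
d, n = 100, 10000
z1, z2 = np.random.randn(n, d), np.random.randn(n, d)
mid_lin = linear_interp(z1, z2, 0.5)
mid_match = matched_interp_gaussian(z1, z2, 0.5)
print(np.linalg.norm(z1, axis=1).mean())         # ~ sqrt(d) = 10
print(np.linalg.norm(mid_lin, axis=1).mean())    # ~ sqrt(d/2) ~ 7.07 (the mismatch)
print(np.linalg.norm(mid_match, axis=1).mean())  # ~ sqrt(d) again
```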
[ 4, 6, 6, -1, -1, -1, -1 ]
[ 3, 3, 4, -1, -1, -1, -1 ]
[ "iclr_2018_SyBBgXWAZ", "iclr_2018_SyBBgXWAZ", "iclr_2018_SyBBgXWAZ", "iclr_2018_SyBBgXWAZ", "HyBft3dgM", "SJuG7tqxz", "SJ13MSaxf" ]
iclr_2018_H139Q_gAW
Learning Graph Convolution Filters from Data Manifold
Convolution Neural Network (CNN) has gained tremendous success in computer vision tasks with its outstanding ability to capture the local latent features. Recently, there has been an increasing interest in extending CNNs to the general spatial domain. Although various types of graph convolution and geometric convolution methods have been proposed, their connections to traditional 2D-convolution are not well-understood. In this paper, we show that depthwise separable convolution is a path to unify the two kinds of convolution methods in one mathematical view, based on which we derive a novel Depthwise Separable Graph Convolution that subsumes existing graph convolution methods as special cases of our formulation. Experiments show that the proposed approach consistently outperforms other graph convolution and geometric convolution baselines on benchmark datasets in multiple domains.
rejected-papers
This paper proposes to combine depthwise separable convolutions, developed for 2D grids, with recent graph convolutional architectures. The resulting architecture can be seen as learning both node and edge features, the latter encoding node similarities with learnt weights. Reviewers agreed that this is an interesting line of work, but that further work is needed on both the presentation and the experimental front before publication. In particular, the paper should also compare against recent models (such as the MPNN of Gilmer et al.) that also propose edge-feature learning. Therefore, the AC recommends rejection at this time.
test
[ "BkCxP2Fez", "S1rTmy9xG", "BJFoMD9eG", "HJaNmI3ZM", "B1e8aB3WM", "Hkoc3H2bG", "Hywv3ShbM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "The paper presents a Depthwise Separable Graph Convolution network that aims\nat generalizing Depthwise convolutions, that exhibit a nice performance in image\nrelated tasks, to the graph domain. In particular it targets\nGraph Convolutional Networks.\n\nIn the abstract the authors mention that the Depthwise Separable Graph Convolution\nthat they propose is the key to understand the connections between geometric\nconvolution methods and traditional 2D ones. I am afraid I have to disagree as\nthe proposed approach is not giving any better understanding of what needs to be\ndone and why. It is an efficient way to mimic what has worked so far for the planar\ndomain but I would not consider it as fundamental in \"closing the gap\".\n\nI feel that the text is often redundant and that it could be simplified a lot.\nFor example the authors state in various parts that DSC does not work on\nnon-Euclidean data. Section 2 should be clearer and used to better explain\nrelated approaches to motivate the proposed one.\nIn fact, the entire motivation, at least for me, never went beyond the simple fact\nthat this happens to be a good way to improve performance. The intuition given\nis not sufficient to substantiate some of the claims on generality and understanding\nof graph based DL.\n\nIn 3.1, at point (2), the authors mention that DSC filters are learned from the\ndata whereas GC uses a constant matrix. This is not correct, as also reported in\nequation 2. The matrix U is learned from the data as well.\n\nEquation (4) shows that the proposed approach would weight Q different GC\nlayers. In practical terms this is a linear combination of these graph\nconvolutional layers.\nWhat is not clear is the \\Delta_{ij} definition. It is first introduced in 2.3\nand described as the relative position of pixel i and pixel j on the image, but\nthen used in the context of a graph in (4). What is the coordinate system used\nby the authors in this case? This is a very important point that should be made\nclearer.\n\nWhy is the Related Work section at the end? I would put it at the front.\n\nThe experiments compare with the recent relevant literature. I think that having\nless number of parameters is a good thing in this setting as the data is scarce,\nhowever I would like to see a more in-depth comparison with respect to the number\nof features produced by the model itself. For example GCN has a representation\nspace (latent) much smaller than DSCG.\nNo statistics over multiple runs are reported, and given the high variance of\nresults on these datasets I would like them to be reported.\n\nI think the separability of the filters in this case brings the right level of\nsimplification to the learning task, however as it also holds for the planar case\nit is not clear whether this is necessarily the best way forward.\nWhat are the underlying mathematical insights that lead towards selecting\nseparable convolutions?\n\nOverall I found the paper interesting but not ground-breaking. A nice application\nof the separable principle to GCN. Results are also interesting but should be\nfurther verified by multiple runs.\n", "The paper presents an extension of the Xception network of (Chollet et al. 2016) 2D grids to generic graphs. The Xception network decouples the spatial correlations from depth channels correlations by having separate weights for each depth channel. The weights within a depth channel is shared thus maintaining the stationary requirement. 
The proposed filter relaxes this requirement by forming the weights as the output of a two-layer perception. \n\nThe paper includes a detailed comparison of the existing formulations from the traditional label propagation scheme to more recent more graph convolutions (Kipf & Welling, 2016 ) and geometric convolutions (Monti et al. 2016). \n\nThe paper provides quantitative evaluations under three different settings i) image classification, ii) Time series forcasting iii) Document classification. The proposed method out-performs all other graph convolutions on all the tasks (except image classification) though having comparable or less number of parameters. For image classification, the performance of proposed method is below its predecessor Xception network. \n\nPros:\ni) Detailed review of the existing work and comparison with the proposed work.\nii) The three experiments performed showed variety in terms of underlying graph structure hence provides a thorough evaluation of different methods under different settings.\niii) Superior performance with fewer number of parameters compared to other methods. \nCons:\ni) The architecture of the 2 layer MLP used to learn weights for a particular depth channel is not provided.\nii) The performance difference between Xception and proposed method for image classification experiments using CIFAR is incoherent with the intuitions provided Sec 3.1 as the proposed method have more parameters and is a generalized version of DSC.", "Paper Summary:\nThis work proposes a new geometric CNN model to process spatially sparse data. Like several existing geometric CNNs, convolutions are performed on each point using nearest neighbors. Instead of using a fixed or Gaussian parametric filters, this work proposes to predict filter weights using a multi-layer perception. Experiments on 3 different tasks showcase the potential of the proposed method.\n\nPaper Strengths:\n- An incremental yet interesting advance in geometric CNNs.\n- Experiments on three different tasks indicating the potential of the proposed technique.\n\nMajor Weaknesses:\n- Some important technical details about the proposed technique and networks is missing in the paper. It is not clear whether a different MLP is used for different channels and for different layers, to predict the filter weights. Also, it is not clear how the graph nodes and connectivity changes after the max-pooling operation.\n- Since filter weight prediction forms the central contribution of this work, I would expect some ablation studies on the MLP (network architecture, placement, weight sharing etc.) that predicts filter weights. But, this is clearly missing in the paper.\n- If one needs to run an MLP for each edge in a graph, for each channel and for each layer, the computation complexity seems quite high for the proposed network. Also, finding nearest neighbors takes time on large graphs. How does the proposed technique compare to existing methods in terms of runtime?\n\nMinor Weaknesses:\n- Since this paper is closely related to Monti et al., it would be good if authors used one or two same benchmarks as in Monti et al. for the comparisons. Why authors choose different set of benchmarks? Because of different benchmarks, it is not clear whether the performance improvements are due to technical improvements or sub-optimal parameters/training for the baseline methods.\n- I am not an expert in this area. 
But, the chosen benchmarks and datasets seem to be not very standard for evaluating geometric CNNs.\n- The technical novelty seems incremental (but interesting) with respect to existing methods.\n\nClarifications:\n- See the above mentioned clarification issues in 'major weaknesses'. Those clarification issues are important to address.\n- 'Non-parametric filter' may not be right word as this work also uses a parametric neural network to estimate filter weights?\n\nSuggestions:\n- It would be great if authors can add more details of the multi-layer perceptron, used for predicting weights, in the paper. It seems some of the details are in Appendix-A. It would be better if authors move the important details of the technique and also some important experimental details to the main paper.\n\nReview Summary:\nThe proposed technique is interesting and the experiments indicate its superior performance over existing techniques. Some incomplete technical details and non-standard benchmarks makes this not completely ready for publication.", "Thank you for the comments. Following are our responses to the major points:\n\n\n“the Depthwise Separable Graph Convolution that they propose is the key to understand the connections between geometric convolution methods and traditional 2D ones.”\nLet us clarify: the key insight in our paper is that many successful 2d-grid based CNNs (including DSC) cannot directly apply to generic spatial convolution problems, and we close the gap (in a mathematically compatible way) by proposing a unified framework for both 2d-grid based convolution and for more generic spatial convolution with automatically learning link weights for the underlying graphs. It is not our claim that DSGC is the only way to close the gap. We apologize if our wording in the Abstract is not clear enough; we will make it clear in the revised version.\n\n\n“the authors mention that DSC filters are learned from the data whereas GC uses a constant matrix. This is not correct,”\nApologies for the confusion. Following the terminology in [1], DSC consists of two parts, i.e., the spatial convolution (W in eq.3) and the channel convolution (U in eq.3). What we really mean here is that GC learns the channel convolution but relies on a constant filter to perform the spatial convolution. On the other hand, DSC learns both the spatial convolution and the channel convolution. Further, DSGC generalizes the DSC method. \n\n“In practical terms this is a linear combination of these graph convolutional layers.”\nJust to clarify, we are not learning to combine GC layers, but learning the filter weights associated with the edges in each graph. The resulting operation is not a simple linear combination of GC layers. This can be read from eq. (4), where the summation is carried out over edges/neighbors instead of over layers. It learns graph spatial filters, while conventional GC is not able to do. \n\n\n“What are the underlying mathematical insights that lead towards selecting separable convolutions?”\n“It is an efficient way to mimic what has worked so far for the planar domain but I would not consider it as fundamental in \"closing the gap\"\nHow to generalize standard convolution over a 2d grid to the general spatial domain is the fundamental problem we are trying to address and the major contribution of our paper. Existing techniques such as traditional graph convolution (GC) is not compatible with grid-based convolution as it uses the constant spatial filter across all channels. 
Our approach provides a natural mechanism to close the gap by learning a separable convolution filter for different channels using function approximation. Besides, the effectiveness of our approach was empirically verified by our experiments using datasets over a variety of domains.\n \n“What is not clear is the \\Delta_{ij} definition. It is first introduced in 2.3 and described as the relative position of pixel i and pixel j on the image, but then used in the context of a graph in (4)..”\nGiven a pair of nodes i and j, Delta_{ij} can be viewed as the embedding of its spatial attributes (e.g. the relative difference between the two nodes’ spatial coordinates), which is needed for MLPs to predict the filter weights. In other words, \\Delta_{ij} serves as a “key” to retrieve “values” (filter weights) either from a lookup table as in 2d-grid convolution (sec 2.4) or from a compressed table as in DSGC (MLP in (4)). As stated in the introduction, we did not explore deeper in graph systems without spatial coordinate information, although our model can subsume GC with the certain manually defined coordinate system (see [2] for more detailed discussions). We will make these points more explicit in the revised paper.\n \n“Results are also interesting but should be further verified by multiple runs.”\nWe agree reporting variance would further strengthen the paper, and will add such results in our revision. Actually, we did not observe high variance performance in our experiment. We rerun the DSGC model for 10 times and report the mean(std error) in three tasks: CIFAR 7.39(0.136), USHCN-TMAX 5.211(0.0498), 20news 71.70(0.285). Obviously, the variance is significantly smaller than the performance gap between the DSGC model and best baseline results (CIFAR 8.34, USHCN-TMAX 5.467, 20news 71.01).\n\n\n[1] Chollet, François. \"Xception: Deep Learning with Depthwise Separable Convolutions.\" arXiv preprint arXiv:1610.02357(2016).\n\n[2] Monti, Federico, Davide Boscaini, Jonathan Masci, Emanuele Rodolà, Jan Svoboda, and Michael M. Bronstein. \"Geometric deep learning on graphs and manifolds using mixture model CNNs.\" arXiv preprint arXiv:1611.08402 (2016).", "Thank you for the comments. Following are our responses for the major points:\n\n“The architecture of the 2 layer MLP used to learn weights for a particular depth channel is not provided.”\n\nWe will add the details in the revised version: The 2 layer MLP takes the \\Delta_{ij} as the input. The hidden dimension is 256 with tanh activation. The output dimension is 1. Parameters of the MLPs are learned independently for each filter. We have conducted an ablation study for the MLP, by changing their depth, activation functions, and weight sharing strategies. However, their results are very similar; the two-layer MLP provides the reasonable performance with the shortest running time. \n\n“The performance difference between Xception and proposed method for image classification experiments using CIFAR is incoherent with the intuitions provided Sec 3.1 as the proposed method have more parameters and is a generalized version of DSC.”\n\nBy “incoherent” do you mean that DSGC (our proposed method) should always outperform the original DSC in image classification? This may not be necessarily true and is not our expectation. That is, when the underlying structure is a truly 2d grid (like images), the simpler DSC model would fit the problem better and hence is expected to outperform the more generalized model of DSGC. 
On the other hand, when the true underlying structure is not a 2d grid (as in many graph convolution problems), then the DSGC is more powerful, as we have shown in our experimental results for the spatiotemporal modeling tasks.\n", "“the chosen benchmarks and datasets seem to be not very standard for evaluating geometric CNNs.”\n\nWe use benchmark datasets for image classification (CIFAR) and document categorization (20news), following ([1],[2],[3]). As you mentioned, Monti et al. [3] used MNIST and citation network as the experiment datasets. We use CIFAR instead of MNIST as the former is more difficult and can better demonstrate the scalability of our algorithm. Moreover, we chose not to use some “standard” citation networks as their training sets are extremely small, e.g. 140 samples for Cora and 60 samples for Pubmed in their experiment setting, which usually lead to unreliable results, as pointed out by Monti et al. \\cite{3}, “The tuning of the network hyper-parameters has been fundamental in this case for avoiding overfitting, due to a very small size of the training set.” Finally, the spatio-temporal forecasting is a valuable application for the graph convolution method ([4],[5]).\n\n\n“The technical novelty seems incremental (but interesting) with respect to existing methods.”\n \nOne major novel contribution is to provide a unified mathematical view for both 2d-grid based convolution methods and more generic graph convolution, which has not been done before. Since our approach is fully compatible with 2d-grid convolution, it would enable people to better leverage architectures and techniques developed for 2d-grid convolution with mathematical understanding, not just intuition. \n\n“‘'Non-parametric filter' may not be right word as this work also uses a parametric neural network to estimate filter weights?”\n\nDo you refer this sentence “which is weaker than non-parametric filter in the depthwise separable convolution and neural network function in the proposed method”? Hereby “non-parametric filter” we mean the standard (2d-grid based) depthwise separable convolution, and by “neural network function” we mean the same as in your words. Sorry for the confusion in wording; we will rephrase this sentence to make it clear. \n \n“It would be great if authors can add more details of the multi-layer perceptron, used for predicting weights, in the paper. It seems some of the details are in Appendix-A. It would be better if authors move the important details of the technique and also some important experimental details to the main paper.”\n\nWe will do that. Thanks for the suggestion. \n\n[1] Bruna, Joan, Wojciech Zaremba, Arthur Szlam, and Yann Lecun. \"Spectral networks and locally connected networks on graphs.\" In International Conference on Learning Representations (ICLR2014), CBLS, April 2014. 2014.\n[2] Defferrard, Michaël, Xavier Bresson, and Pierre Vandergheynst. \"Convolutional neural networks on graphs with fast localized spectral filtering.\" In Advances in Neural Information Processing Systems, pp. 3844-3852. 2016.\n[3] Monti, Federico, Davide Boscaini, Jonathan Masci, Emanuele Rodolà, Jan Svoboda, and Michael M. Bronstein. \"Geometric deep learning on graphs and manifolds using mixture model CNNs.\" arXiv preprint arXiv:1611.08402 (2016).\n[4] Li, Yaguang, Rose Yu, Cyrus Shahabi, and Yan Liu. \"Graph Convolutional Recurrent Neural Network: Data-Driven Traffic Forecasting.\" arXiv preprint arXiv:1707.01926 (2017).\n[5] Yu, Bing, Haoteng Yin, and Zhanxing Zhu. 
\"Spatio-temporal Graph Convolutional Neural Network: A Deep Learning Framework for Traffic Forecasting.\" arXiv preprint arXiv:1709.04875 (2017).\n", "We thank the reviewer for the comments/questions. Following are our main clarifications:\n\n“Some important technical details about the proposed technique and networks is missing in the paper. It is not clear whether a different MLP is used for different channels and for different layers, to predict the filter weights. Also, it is not clear how the graph nodes and connectivity changes after the max-pooling operation.”\n\nWe consider the most generic setup where each filter comes with its own MLP (eq. 4, w^{(q)} refers different function.), although partially sharing those MLPs could be a potential option. The pooling layer is performed based on k-means clustering, namely, nodes in the previous layer before pooling will be connected to their cluster centroid in the next layer after pooling (described in Section 4.1). After pooling, edges in the graph are still defined based on k-nearest neighbors. We will make these points more clearly in the revisited version.\n\n“Since filter weight prediction forms the central contribution of this work, I would expect some ablation studies on the MLP (network architecture, placement, weight sharing etc.) that predicts filter weights. “\n\nWe have indeed conducted ablation tests with MLP, by changing the number of layers and activation function of each hidden layer, and by trying several weight sharing strategies. The results are very similar in terms of accuracy; the two-layer MLP provides a reasonable performance with the shortest running time and hence is used in the current paper. We chose not to report more results on our preliminary ablation study because we don’t have enough mathematical understanding about the influence of different MLP architectures to the final performance, which makes the design of ablation experiments very subjective. We will include those details in the appendix of the revisited paper. \n\n“If one needs to run an MLP for each edge in a graph, for each channel and for each layer, the computation complexity seems quite high for the proposed network. Also, finding nearest neighbors takes time on large graphs. How does the proposed technique compare to existing methods in terms of runtime?”\n\nThe number of edges grows only linearly in the graph size, i.e., the number of nodes, because of the sparsity of the graph. Therefore the training is fairly efficient. Also note that the nearest neighbor computation can be carried out during the preprocessing, hence does not affect the training time. We will provide the detailed information in our revised version of the paper for comparing the running time of all graph convolution algorithms, as shown below: m is for minutes\nCifar\nDcnn 922m\tChebynet 1715.71m\tGCN 706m\tMoNet 2504m\tDSGC 1527m\nTIme series prediction\nDcnn 176m\tChebynet 286m \tGCN 93m \tMoNet 620m \tDSGC 346m\nDocument Classification \nDcnn 158m \tChebynet 278m \tGCN 83m \tMoNet 842m \tDSGC 112m\nNotably, learning the convolution filters as in DSGC leads to consistently better performance over all previous methods, with around 0.5x-3x running time. \n" ]
[ 4, 6, 5, -1, -1, -1, -1 ]
[ 5, 4, 3, -1, -1, -1, -1 ]
[ "iclr_2018_H139Q_gAW", "iclr_2018_H139Q_gAW", "iclr_2018_H139Q_gAW", "BkCxP2Fez", "S1rTmy9xG", "Hywv3ShbM", "BJFoMD9eG" ]
iclr_2018_SyjsLqxR-
Universality, Robustness, and Detectability of Adversarial Perturbations under Adversarial Training
Classifiers such as deep neural networks have been shown to be vulnerable against adversarial perturbations on problems with high-dimensional input space. While adversarial training improves the robustness of classifiers against such adversarial perturbations, it leaves classifiers sensitive to them on a non-negligible fraction of the inputs. We argue that there are two different kinds of adversarial perturbations: shared perturbations which fool a classifier on many inputs and singular perturbations which only fool the classifier on a small fraction of the data. We find that adversarial training increases the robustness of classifiers against shared perturbations. Moreover, it is particularly effective in removing universal perturbations, which can be seen as an extreme form of shared perturbations. Unfortunately, adversarial training does not consistently increase the robustness against singular perturbations on unseen inputs. However, we find that adversarial training decreases robustness of the remaining perturbations against image transformations such as changes to contrast and brightness or Gaussian blurring. It thus makes successful attacks on the classifier in the physical world less likely. Finally, we show that even singular perturbations can be easily detected and must thus exhibit generalizable patterns even though the perturbations are specific for certain inputs.
rejected-papers
This paper studies to what extent adversarial training affects the properties of adversarial examples in object classification. Reviewers found the work going in the right direction, but agreed that it needs further evidence/focus in order to constitute a significant contribution to the ICLR community. In particular, the AC encourages authors to relate their work to the growing body of (mostly concurrent) work on robust optimization and adversarial learning. For the above reasons, the AC recommends rejection at this time.
train
[ "rydWCNKxz", "HkmB_HqlG", "ryMchFseG", "rkSVxjvGM", "B10EqVUGf", "B1h7pVUGf", "r14Id_uyM", "BJ29Juukz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "public" ]
[ "Summary:\n\nThis paper empirically studies adversarial perturbations dx and what the effects are of adversarial training (AT) with respect to shared (dx fools for many x) and singular (only for a single x) perturbations. Experiments use a (previously published) iterative fast-gradient-sign-method and use a Resnet on CIFAR.\n\nThe authors conclude that in this experimental setting:\n- AT seems to defend models against shared dx's.\n- This is visible on universal perturbations, which become less effective as more AT is applied.\n- AT decreases the effectiveness of adversarial perturbations, e.g. AT decreases the number of adversarial perturbations that fool both an input x and x with e.g. a contrast change.\n- Singular perturbations are easily detected by a detector model, as such perturbations don't change much when applying AT.\n\nPro:\n- Paper addresses an important problem: qualitative / quantitative understanding of the behavior of adversarial perturbations is still lacking.\n- The visualizations of universal perturbations as they change during AT are nice.\n- The basic observation wrt the behavior of AT is clearly communicated.\n\nCon:\n- The experiments performed are interesting directions, although unfocused and rather limited in scope. For instance, does the same phenomenon happen for different datasets? Different models?\n- What happens when we use adversarial attacks different from FGSM? Do we get similar results?\n- The papers lacks a more in-depth theoretical analysis. Is there a principled reason AT+FGSM defends against universal perturbations?\n\nOverall:\n- As is, it seems to me the paper lacks a significant central message (due to limited and unfocused experiments) or significant new theoretical insight into the effect of AT. A number of questions addressed are interesting starting points towards a deeper understanding of *how* the observations can be explained and more rigorous empirical investigations.\n\nDetailed:\n-\n", "This paper analyses adversarial training and its effect on universal adversarial examples as well as standard (basic iteration) adversarial examples. It also analyses how adversarial training affects detection.\n\nThe robustness results in the paper are interesting and seem to indicate that interesting things are happening with adversarial training despite adversarial training not fixing the adversarial examples problem. The paper shows that adversarial training increases the destruction rate of adversarial examples so that it still has some value though it would be good to see if other adversarial resistance techniques show the same effect. It's also unclear from which epoch the adversarial examples were generated from in figure 5. Further the transformations in figure 5 are limited to artificially controlled situations, it would be much more interesting to see how the destruction rate changes under real-world test scenarios.\n\nThe results on the detector are not that surprising since previous work has shown that detectors can learn to classify adversarial examples and the additional finding that they can detect adversarial examples for an adversarially trained model doesn't seem surprising. There is also no analysis of what happens for adversarial examples for the detector.\n\nAlso, it's not clear from section 3.1 what inputs are used to generate the adversarial examples. 
Are they a random sample across the whole dataset?\n\nFinally, the paper spends significant time on describing MaxMin and MinMax and the graphical visualizations but the paper fails to show these graphical profiles for real models.", "This paper investigates the effect of adversarial training. Based on experiments using CIFAR10, the authors show that adversarial training is effective in protecting against \"shared\" adversarial perturbation, in particular against universal perturbation. In contrast, it is less effective to protect against singular perturbations. Then they show that singular perturbation are less robust to image transformation, meaning after image transformation those perturbations are no longer effective. Finally, they show that singular perturbations can be easily detected.\n\nI like the message conveyed in this paper. However, as the statements are mostly backed by experiments, then I think it makes sense to ask how statistically significant the present results are. Moreover, is CIFAR 10 experiments conclusive enough. ", "We would like to thank the reviewer for the comments. Regarding the reviewer's questions:\n* \"The experiments performed are interesting directions, although unfocused\". In our opinion, the paper is focused on the question \"how does adversarial training affect properties of adversarial examples?\" We study a number of properties of adversarial examples such as their sharedness, universality, detectability, and robustness against image transformations. We think it is important to consider all these properties together rather than purely focusing on a single property such as the fooling rate of the model. We tried to connect the different experiments with a common thread. \n* \"Does the same phenomenon happen for different datasets? Different models?\"\nWe have added experimental results on the German Traffic Sign recognition Benchmark (GTSRB) to the revised version of paper. We have also evaluated a different model (a non-residual, convolutional network) on GTSRB. The main findings remain the same: adversarial training makes classifiers considerably more robust against universal perturbations, increases the destruction rate of perturbations considerably under most image transformations, and leaves perturbations well detectable. \n* \"What happens when we use adversarial attacks different from FGSM? Do we get similar results?\" Results reported in the paper were for the Basic Iterative (BI) adversary. We have added results for FGSM and DeepFool to the appendix of the revised version of the paper (both regarding destruction rate and detectability). The main findings remain the same for these adversaries.\n* \"The papers lacks a more in-depth theoretical analysis. Is there a principled reason AT+FGSM defends against universal perturbations?\" Our main results are empirical; however, we think that they provide potential direction for future theoretical analysis and are useful for this reason. It would be, for instance, interesting to connect our empirical findings on universal perturbations to the theoretical insights from the paper \"Analysis of universal adversarial perturbations\" (https://infoscience.epfl.ch/record/228329/files/universal_perturbations_theory.pdf). In this paper, the authors show that (assuming a locally curved decision boundary model) the existence of shared directions along which the decision boundary is positively curved implies the existence of small universal perturbations. 
The existence of such shared directions is closely related to our notion of sharedness of perturbations: our notion of sharedness corresponds to common directions (perturbations) which increase classification cost for many inputs while their concept is based on shared positive curvature of the decision boundary in a direction. As discussed in \"Analysis of universal adversarial perturbations\", when the decision boundary is positively curved in a direction, it will lie closer to the datapoint in this direction and thus moving in this direction will increase classification cost considerably. As we have empirically shown, adversarial training is effective in reducing the sharedness of perturbations, and thus potentially also in removing shared directions in which the decision boundary is positively curved (which would be the basis for the existence of universal perturbations). While this argument is informal, we hope that the observation reported in our paper can motivate future research into a better theoretical understanding on methods for preventing universal perturbations.", "We would like to thank the reviewer for the comments. \n * How statistically significant are the present results? We have run 5 repetitions of adversarial training on CIFAR10 for 50 epochs and evaluated the fooling rate of universal perturbations (as in Section 3.2). The accuracy of the model on inputs containing universal perturbations was 87.55%, 87.84%, 88.4%, 87.78%, 86.92%. As there is little variance between runs, we believe the presented results are not specific to the one run of adversarial training we investigated in more details. \n * \"Moreover, is CIFAR 10 experiments conclusive enough?\" We have added an experiment on the German Traffic Sign Recognition Benchmark (GTRSB) dataset, both for the same classifier architecture used on CIFAR-10 and for a classifier using a non-residual, convolutional net. The main findings remain the same: adversarial training makes classifiers considerably more robust against universal perturbations, increases the destruction rate of perturbations considerably under most image transformations, and leaves perturbations well detectable. See the revised PDF version for more details.", "We would like to thank the reviewer for his comments. Regarding the reviewer's questions:\n * Figure 5: the adversarial examples were generated from the epochs shown in the legend as \"model epochs\" (0, 1, 5, 51, 251). Please note that epoch 0 corresponds to a model pretrained with standard training (without adversarial training). Epoch 1 denotes the model after one additional epoch of adversarial training.\n * Reporting how the destruction rate changes under real-world transformations would make it difficult to attribute changes in destruction rates to individual changes (since typically changes of brightness, contrast, noise would occur at the same time). Moreover, artificially controlled situations have the advantage that the amount of brightness change, blurring etc. can be systematically varied. They thus allow to systematically study in which aspects adversarial training makes a model more robust. Because of this, we focused the experiments on snythetic transformations. We would also like to note that our point in the paper is not that adversarial training makes physical world attacks impossible but rather that physical-world attacks should be tested against models hardened with, e.g., adversarial training. 
For instance, our results on GTSRB (see revised PDF) show that adversarial training greatly increases robustness on this dataset and also increases destruction rate of the remaining perturbations under image transformations. In the lights of these results, it would be interesting to see if the results presented in works like https://arxiv.org/abs/1707.08945v4 would carry over to a classifier hardened with adversarial training. However, replicating this attack is beyond the scope of this paper but an important direction for future work (for which this work forms the basis).\n* In contrast to the reviewer's opinion, it was surprising for us that a detector was able to detect adversarial examples of an adversarially trained model. It was a likely assumption that detectability of perturbations were closely related to their sharedness and the existence of universal perturbations (since sharedness/universality is related to different perturbations being more \"similar\" and similarity in turn would make detectability easier). However, as the paper shows, adversarial training greatly reduces shared/universal perturbations but leaves detectability unchanged. Thus, these two properties seem to be unrelated which was surprising and insightful for us.\n* We do not claim that the detector could not be fooled. Our main point is not that the combination of adversarial training and detection is a robust defence but rather that adversarial training fails to become robust against certain shared patterns in adversarial perturbations that are picked up by a detector. Thus, combining adversarial training with a detection loss appears to be a promising direction for future work.\n* \"Also, it's not clear from section 3.1 what inputs are used to generate the adversarial examples. Are they a random sample across the whole dataset?\" If not stated otherwise, the adversarial examples were generated for the entire CIFAR10 test set. Otherwise, they are a randomly sampled subset of the test set.", "Thanks for your interest. At the moment, we cannot release code unfortunately. Feel free to ask any questions you have when trying to replicate our results.", "Hi, I am participating in the reproducibility challenge, would you mind sharing your code?\n\nThanks!" ]
[ 3, 6, 6, -1, -1, -1, -1, -1 ]
[ 4, 3, 3, -1, -1, -1, -1, -1 ]
[ "iclr_2018_SyjsLqxR-", "iclr_2018_SyjsLqxR-", "iclr_2018_SyjsLqxR-", "rydWCNKxz", "ryMchFseG", "HkmB_HqlG", "BJ29Juukz", "iclr_2018_SyjsLqxR-" ]
iclr_2018_Hki-ZlbA-
Ground-Truth Adversarial Examples
The ability to deploy neural networks in real-world, safety-critical systems is severely limited by the presence of adversarial examples: slightly perturbed inputs that are misclassified by the network. In recent years, several techniques have been proposed for training networks that are robust to such examples; and each time stronger attacks have been devised, demonstrating the shortcomings of existing defenses. This highlights a key difficulty in designing an effective defense: the inability to assess a network's robustness against future attacks. We propose to address this difficulty through formal verification techniques. We construct ground truths: adversarial examples with a provably-minimal distance from a given input point. We demonstrate how ground truths can serve to assess the effectiveness of attack techniques, by comparing the adversarial examples produced by those attacks to the ground truths; and also of defense techniques, by computing the distance to the ground truths before and after the defense is applied, and measuring the improvement. We use this technique to assess recently suggested attack and defense techniques.
rejected-papers
This paper describes a method to generate provably 'optimal' adversarial examples, leveraging the so-called 'Reluplex' technique, which can evaluate properties of piece-wise linear representations. Reviewers agreed that incorporating optimality certificates into adversarial examples is a promising direction to follow, but were also concerned about the lack of empirical justification the current paper provides and missed discussion about the relevance of choosing Lp distances. They all recommended pushing experiments to more challenging datasets before the paper can be accepted, and the AC shares the same advice.
train
[ "S1Q_cbqxf", "H1TnZzcgz", "Sy5sYncgM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Summary: The paper proposes a method to compute adversarial examples with minimum distance to the original inputs, and to use the method to do two things: Show how well heuristic methods do in finding \"optimal/minimal\" adversarial examples (how close the come to the minimal change that flips the label) and to assess how a method that is designed to make the model more robust to adversarial examples actually works.\n\nPros:\n\nI like the idea and the proposed applications. It is certainly highly relevant, both in terms of assessing models for critical use cases as well as a tool to better understand the phenomenon.\n\nSome of the suggested insights in the analysis of defense techniques are interesting.\n\nCons:\n\nThe is not much technical novelty. The method boils down to applying Reluplex (Katz et al. 2017b) in a binary search (although I acknowledge the extension to L1 as distance metric).\n\nThe practical application of the method is very limited since the search is very slow and is only feasible at all for relatively small models. State-of-the-art practical models that achieve accuracy rates that make them interesting for deployment in potentially safety critical applications are out of reach for this analysis. The network analysed here does not reach the state-of-the-art on MNIST from almost two decades ago. The analysis also has to be done for each sample. The long runtime does not permit to analyse large amounts of input samples, which makes the analysis in terms of the increase in robustness rather weak. The statement can only be made for the very limited set of tested samples.\n\nIt is also unclear whether it is possible to include distance metrics that capture more sophisticated attacks that fool network even under various transformations of the input.\nThe paper does not consider the more recent and highly relevant Moosavi-Dezfooli et al. “Universal Adversarial Perturbations” CVPR 2017.\n\nThe distance metrics that are considered are only L_inf and L1, whereas it would be interesting to see more relevant “perceptual losses” such as those used in style transfer and domain adaptation with GANs.\n\nMinor details:\n* I would consider calling them “minimal adversarial samples” instead of “ground-truth”.\n* I don’t know if the notation in the Equation in the paragraph describing Carlini & Wagner comes from the original paper, but the inner max would be easier to read as \\max_{i \\neq t} \\{Z(x’)_i \\}\n* Page 3 “Neural network verification”: I dont agree with the statement that neural networks commonly are trained on “a small set of inputs”.\n* Algorithm 1 is essentially only a description of binary search, which should not be necessary.\n* What is the timeout for the computation, mentioned in Sec 4?\n* Page 7, second paragraph: I wouldn’t say the observation is in line with Carlini & Wagner, because they take a random step, not necessarily one in the direction of the optimum? That’s also the conclusion two paragraphs below, no?\n* I don’t fully agree with the conclusion that the defense of Madry does not overfit to the specific method of creating adversarial examples. Those were not created with the CW attack, but are related because CW was used to initialize the search.\n\n", "The authors propose to employ provably minimal-distance examples as a tool to evaluate the robustness of a trained network. 
This is demonstrated on a small-scale network using the MNIST data set.\n\nFirst of all, I find it striking that a trained network with 97% accuracy (as claimed by the authors) seems extremely brittle -- considering the fact that all the adversarial examples in Figure 1 are hardly borderline examples at all, at least to my eyes. This does reinforce the (well-known?) weakness of neural networks in general. I therefore find the authors' statement on page 3 disturbing: \"... they are trained over a small set of inputs, and can then perform well, in general, on previously-unseen inputs\" -- which seems false (with high probability over all possible worlds).\n\nSecondly, the term \"ground truth\" example seems very misleading to me. Perhaps \"closest misclassified examples\"?\n\nFinally, while the idea of \"closest misclassified examples\" seems interesting, I am not convinced that they are the right way to go when it comes to both building and evaluating robustness. All such examples shown in the paper are indeed within-class examples that are misclassified. But we could equally consider another extreme, where the trained network is \"over-regularized\" in the sense that the closest misclassified examples are indeed from another class, and therefore \"correctly\" misclassified. Adding these as adversarial examples could seriously degrade the accuracy.\n\nAlso, for building robustness, one could argue that adding misclassified examples that are \"furthest\" (i.e. closest to the true decision boundary) is a much more efficient training approach, since a few of these can possibly subsume a large number of close examples.\n\n", "The paper describes a method for generating so called ground truth adversarial examples: adversaries that have minimal (L1 or L_inf) distance to the training example used to generate them. The technique uses the recently developed reluplex, which can be used to verify certian properties of deep neural networks that use ReLU activations. The authors show how the L1 distance can be formulated using a ReLU and therefore extend the reluplex also work with L1 distances. The experiments on MNIST suggest that the C&W attack produces close to optimal adversarial examples, although it is not clear if these findings would transfer to larger more complex networks. The evaluation also suggests that training with iterative adversarial examples does not overfit and does indeed harden the network to attacks in many cases.\n\nIn general, this is a nice idea, but it seems like the inherent computational cost will limit the applicability of this approach to small networks and datasets for the time being. Incidentally, it would have been useful if the authors provided indicative information on the computational cost (e.g. in the form of time on a standard GPU) for generating these ground truths and carrying out experiments.\n\nThe experiments are quite small scale, which I expect is due to the computational cost of generating the adversarial examples. It is difficult to say how far the findings can be generalized from MNIST to more realistic situations. Tests on another dataset would have been welcomed.\n\nAlso, while interesting, are adversarial examples that have minimal L_p distance from training examples really that useful in practice? Of course, it's nice that we can find these, but it could be argued that L_p norms are not a good way of judging the similarity of an adversarial example to a true example. 
I think it would be more useful to investigate attacks that are perceptually insignificant, or attacks that operate in the physical world, as these are more likely to be a concern for real world systems. \n\nIn summary, while I think the paper is interesting, I suspect that the applicability of this technique is possibly limited at present, and I'm unsure how much we can really read into the findings of the paper when the experiments are based on MNIST alone.\n" ]
[ 5, 4, 6 ]
[ 4, 4, 3 ]
[ "iclr_2018_Hki-ZlbA-", "iclr_2018_Hki-ZlbA-", "iclr_2018_Hki-ZlbA-" ]
iclr_2018_SyqAPeWAZ
CNNs as Inverse Problem Solvers and Double Network Superresolution
In recent years Convolutional Neural Networks (CNN) have been used extensively for Superresolution (SR). In this paper, we use inverse problem and sparse representation solutions to form a mathematical basis for CNN operations. We show how a single neuron is able to provide the optimum solution for inverse problem, given a low resolution image dictionary as an operator. Introducing a new concept called Representation Dictionary Duality, we show that CNN elements (filters) are trained to be representation vectors and then, during reconstruction, used as dictionaries. In the light of theoretical work, we propose a new algorithm which uses two networks with different structures that are separately trained with low and high coherency image patches and show that it performs faster compared to the state-of-the-art algorithms while not sacrificing from performance.
rejected-papers
This paper addresses the question of how to solve image super-resolution, building on a connection between sparse regularization and neural networks. Reviewers agreed that this paper needs to be rewritten, taking into account recent work in the area and significantly improving the grammar. The AC thus recommends rejection at this time.
train
[ "rkDK2NwgG", "rke8ggtxG", "rkHX_Bjlf", "SysQi-Fff", "Hk7WiZFGG", "HJincWtMf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "The paper proposes an understanding of the relation between inverse problems, CNNs and sparse representations. Using the ground work for each proposes a new competitive super resolution technique using CNNs. Overall I liked authors' endeavors bringing together different fields of research addressing similar issues. However, I have significant concerns regarding how the paper is written and final section of the proposed algorithm/experiments etc. \n\nIntroduction/literature review-> I think paper significantly lacks literature review and locating itself where the proposed approach at the end stands in the given recent SR literature (particularly deep learning based methods) --similarities to other techniques, differences from other techniques etc. There have been several different ways of using CNNs for super resolution, how does this paper’s architecture differs from those? Recent GAN based methods are very promising and how does the proposed technique compares to them? \n\nNotation/readability -> I do respect the author’s mentioning different research field’s notations and understand the complication of building a single framework. However I still think that notations could be a lot more simplified—to make them look in the same page. It is very confusing for readers even if you know the mentioned sub-fields and their notations. Figure 1 was very useful to alleviate this problem. More visuals like figure 1 could be used for this problem. For example different network architecture figures (training/testing for CNNs) could be used to explain in a compact way instead of plain text. \n\nSection 3-> I liked the way authors try to use the more generalized Daubechies et. al. However I do not understand lots of pieces still. For example using the low resolution image patches as a basis—more below. In the original solution Daubechies et. al. maps data to the orthonormal Hilbert space, but authors map to the D (formed by LR patches). How does this affect the provability? \n\nRepresentation-dictionary duality concept -> I think this is a very fundamental piece for the paper and don’t understand why it is in the appendix. Using images as D in training and using filters as D in scoring/testing, is very unintuitive to me. Even after reading second time. This requires better discussion and examples. Comparison/discussion to other CNN/deep learning usage for super-resolution methods is required exactly right here.\n\nFinal proposed algorithm -> Splitting the data for high and low coherence makes sense however coherence is a continues variable. Why to keep the quantization at binary? Why not 4,8 or more? Could this be modeled in the network?\n\nResults -> I understand the numerical results and comparisons to the Kim et. Al—and don’t mind at all if they are on-par or slightly better or worse. However in super-resolution paper I do expect a lot more visual comparisons. There has been only Figure 5. Authors could use appendix for this purpose. Also I would love to understand why the proposed solution is significantly faster. This is particularly critical in super-resolution as to apply the algorithms to videos and reconstruction time is vital.\n", "This paper discusses using neural networks for super-resolution. The positive aspects of this work is that the use of two neural networks in tandem for this task may be interesting, and the authors attempt to discuss the network's behavior by drawing relations to successful sparsity-based super-resolution. 
Unfortunately I cannot see any novelty in the relationship the authors draw to LASSO style super-resolution and dictionary learning beyond what is already in the literature (see references below), including in one reference that the authors cite. In addition, there are a number of sloppy mistakes (e.g. Equation 10 as a clear copy-paste error) in the manuscript. Given that much of the main result seems to already be known, I feel that this work is not novel enough at this time. \n\nSome other minor points for the authors to consider for future iterations of this work:\n\n- The authors mention the computational burden of solving L1-regularized optimizations. A lot of work has been done to create fast, efficient solvers in many settings (e.g. homotopy, message passing, etc.). Are these methods still insufficient in some applications? If so, which applications of interest are the authors considering?\n\n- In figure 1, it seems that under \"superresolution problem\": 'f' should be 'High res data' and 'g' should be 'Low res data' instead of what is there. I'm also not sure how this figure adds to the information already in the text.\n\n- In the results, the authors mention how some network features represented by certain neurons resemble the training data. This seems like over-training and not a good quality for generalization. The authors should clarify if, and why, this might be a good thing for their application. \n\n- Overall a heavy editing pass is needed to fix a number of typos throughout.\n\nReferences:\n\n[1] K. Gregor and Y. LeCun, “Learning fast approximations of sparse coding,” in Proc. Int. Conf. Mach. Learn., 2010, pp. 399–406.\n[2] P. Sprechmann, P. Bronstein, and G. Sapiro, “Learning efficient structured sparse models,” in Proc. Int. Conf. Mach. Learn., 2012, pp. 615–622.\n[3] M. Borgerding, P. Schniter, and S. Rangan, “AMP-Inspired Deep Networks for Sparse Linear Inverse Problems,” IEEE Transactions on Signal Processing, vol. 65, no. 16, pp. 4293-4308, Aug. 2017.\n[4] V. Papyan*, Y. Romano* and M. Elad, “Convolutional Neural Networks Analyzed via Convolutional Sparse Coding,” accepted to Journal of Machine Learning Research, 2016. ", "The method proposes a new architecture for solving the image super-resolution task. They provide an analysis that aims to establish a connection between CNNs for solving super resolution and solving sparse regularized inverse problems.\n\nThe writing of the paper needs improvement. I was not able to understand the proposed connection, as the notation is inconsistent and it is difficult to figure out what the authors are stating. I am willing to reconsider my evaluation if the authors provide clarifications.\n\nThe paper does not refer to recent advances in the problem, which are (as far as I know) the state of the art in the problem in terms of quality of the solutions. These references should be added and the authors should put their work into context.\n\n1) Arguably, the state of the art in super resolution are techniques that go beyond L2 fitting. Specifically, methods using perceptual losses such as:\n\nJohnson, J., et al. \"Perceptual losses for real-time style transfer and super-resolution.\" European Conference on Computer Vision. Springer International Publishing, 2016.\n\nLedig, Christian, et al. \"Photo-realistic single image super-resolution using a generative adversarial network.\" arXiv preprint arXiv:1609.04802 (2016).\n\nPSNR is known to not be directly related to image quality, as it favors blurred solutions. 
This should be discussed.\n\n2) The overall notation of the paper should be improved. For instance, in (1), g represents the observation (the LR image), whereas later in the text, g is the HR image. \n\n3) The description of Section 2.1 is quite confusing in my view. In equation (1), y is the signal to be recovered and K is just the downsampling plus blurring. So assuming an L1 regularization in this equation assumes that the signal itself is sparse. Equation (2) changes notation, referring to y as f. \n\n4) Equation (2) seems wrong. The term multiplying K^T is not the norm (it should be in parentheses).\n\n5) The first statement of Section 2.2 seems wrong. DL methods do state the super resolution problem as an inverse problem. Instead of using a pre-defined basis function, they learn an over-complete dictionary from the data, assuming that natural images can be sparsely represented. Also, this section does not explain how DL is used for super resolution. The cited work by Yang et al. learns two coupled dictionaries (one for LR and one for HR), such that for a given patch, the same sparse coefficients can reconstruct both HR and LR patches. The authors just state the sparse coding problem.\n\n6) Equation (10) should not contain the \\leq \\epsilon.\n\n7) In the second paragraph of Section 3, the authors mention that the LR image has to be larger than the HR image to prevent border effects. This makes sense. However, with the size of the network (20 layers), the change in size seems to be quite large. Could you please provide the sizes? When measuring PSNR, is this taken into account? \n\n8) It would be very helpful to include an image explaining the procedure described in the second paragraph of Section 3.\n\n9) I find the description in Section 3 quite confusing. The authors relate the training of a single filter (or neuron) to equation (7), but they define D, which is not used at all in Section 2.1. And K does not show up in any of the analysis given in the last paragraph of page 4. However, D and K seem to be two different things (it is not just one for the other); see below.\n\n10) I cannot understand the derivation that the authors do in the last paragraph of page 4 (and the beginning of page 5). What is phi_l here? K in equation (7) seems to match D here, but D here is a collection of patches and in (7) it is a blurring and downsampling operator. I cannot review this section. I will wait for the authors' clarifications in their response.\n\n11) The authors describe a change in roles between the representations and atoms in the training and testing phases, respectively. I do not understand this. If I understand correctly, in the final algorithm the authors train a CNN mapping LR to HR images. The network is used in the same way at training and testing.\n\n12) It would be useful to provide more details about the training of the network. Please describe the training set used by Kim et al. Are the two networks trained independently? One could think of fine-tuning them jointly (including the aggregation).\n\n13) The authors show the advantage of separating networks on a single image, Barbara. It would be good to quantify this better (maybe in terms of PSNR?). This observation might be true only because of the training loss, compared to, say, the works cited above. Please comment on this.\n\n14) In figures 3 and 4, the learned filters are those on the top (above the yellow arrow). It is not obvious to me that they reflect the predominant structure in the data. 
(maybe due to the low resolution).\n\n15) This work is related to (though clearly different) that of LISTA (Learned ISTA) type of networks, proposed in:\n\nGregor, K., & LeCun, Y. (2010). Learning fast approximations of sparse coding. In Proceedings of the 27th International Conference on Machine Learning (ICML) \n\nWhich connect the network architecture with the optimization algorithm used for solving the sparse coding problem. Follow up works have used these ideas for solving inverse problems as well.\n", "Thank you very much for the detailed review of the manuscript. \n\nWe have revisited the manuscript to reflect all the reviewers’ comments. The proposed Representation Dictionary Duality concept is explained in detail and the notation inconsistencies throughout the text are corrected. In addition the current literature is updated and the differences between the proposed understanding and the literature is made clear.\n\nIntroduction/literature review-> We have referred to the Generative Networks in the revised manuscript.\n\nNotation/readability -> We fixed the notations together with few typos. We added a figure detailing the referenced CNN algorithm SRCNN (Dong et. al.). Also a figure for CNN training procedure is added.\n\nSection 3-> In Daubechies et. al. from our references, the nature of matrix K is defined as a bounded operator between two hilbert spaces. Boundedness is defined according to the formula: for any given vector f from a Hilbert space, if the inequality ||Kf|| \\leq C||f|| is satisfied, where C is a constant, then the operator is bounded. The iterative shrinkage algorithm we have referenced from Daubechies et. al. have addressed this issue directly, for cases when the null space of K for a vector f is not zero and its inversion is ill-posed or even ill-conditioned. We have shown that a neuron filter solves the same equation during training and since a library D also satisfies boundedness assumption we know that it will reach to the optimum solution. We now made this clearer in the text.\n\nRepresentation-dictionary duality concept -> We have moved the appendix A into the text. We assert that, CNN operates as a layered DLB during training and during testing. We have shown that the mechanism by which the CNN learns is through solving an inverse problem. The inverse problem constitutes a bounded operator, matrix D, which is composed of LR patches. Even though the matrix D is different in structure from conventional inverse problem operators, it satisfies the constraints to be used as an operator. The cost function that is minimized by CNN training yields a representation vector as the neuron filter, for which the dictionary is matrix D and the target is HR image patch. Neuron parameters (filters) being the representation vectors instead of an output from a network is a new understanding in the literature. Resulting representation vectors (filters) from a layer of neuron filters turn into a dictionary upon which the reconstruction of HR image is carried out during testing (scoring) phase. This is the core understanding of RDD concept. Using RDD we are able to demystify how a CNN is able to learn and apply reconstruction of HR images for SR problem.\n\nFinal proposed algorithm -> We have used strength, coherence and angle information to divide data into 38 networks initially. We have discovered that networks that are trained with low strength data (which are almost flat patches) won’t converge to a meaningful state. 
We couldn’t handle the separation of angle information while aggregating all the results. Also, this was not a feasible network structure to implement for a possible real-time video application. So we reduced this to two networks, with low and high coherence. The reviewer is absolutely right in asking why 4 or 8 networks have not been used. This was simply due to lack of time. We will strongly consider doing an analysis on this in the near future.\n\nResults -> We ran out of space so we had to get rid of all redundant information. We have now added a page of comparisons in the appendices. The proposed solution is faster because splitting the data enabled us to train lighter networks, even though one of the networks is as long as the network in the original reference paper (20 layers). We have touched on the subject briefly in Chapter 3. We have now added more discussion as to why the proposed solution is faster. And the sole reason we are trying to speed up the algorithm is that we have a real-time video superresolution application in our future plans. We have not mentioned this in the text plainly because we have not done anything to address multiframe SR yet.\n\n", "Thank you very much for the detailed review of the manuscript. \n\nWe have revisited the manuscript to reflect all the reviewers’ comments. The proposed Representation Dictionary Duality concept is explained in detail and the notation inconsistencies throughout the text are corrected. In addition the current literature is updated and the differences between the proposed understanding and the literature is made clear.\n\n-We have highlighted the differences of our understanding from that of Papyan et al. We had not included more foundational papers, including Gregor et al., inside the text, plainly to simplify the text. That was a clear mistake and we have now included the references in the revised paper. To discuss the differences of our work from what is already published, we highlight a few points: \n--Gregor et al. used the ISTA algorithm and successfully implemented the iterative algorithm with a time-unfolded recursive neural network, which can be seen as a feed-forward network. The architecture is then fine-tuned with experimental results.\n--Bronstein et al. worked on a shift of understanding in that what they present with a neural network is not a regressor approximating an iterative algorithm, but is itself a full-featured sparse coder. \n-Our work diverges from theirs in showing how a convolutional neural network is able to learn image representation and reconstruction for the SR problem. We have united inverse problem approaches, Deep Learning Based and Dictionary Based methods in a representation-dictionary duality concept. We have shown that during training, neuron filters learn from input images as if the input patches constituted a dictionary for representation. Therefore, differently from the literature, the neuron parameters (filters) become representations themselves. And we show that during testing (scoring) the learned filters become the dictionaries for reconstruction. This is now made clearer in the text.\n\n-L1 norm minimization is not the crucial part of our work, since it only captures the mathematical background and optimality of the solutions. We were only repeating how L2 norm minimization based algorithms have defended their reasoning for changing from the L1 norm to the L2 norm. 
We edited this part.\n\n-Figure 1 is not wrong, but previous notation changes could have confused the reviewers and we fixed this in the revised paper. Here f is the high-resolution data that is blurred and downsampled with K, and g is the observation; therefore we are trying to estimate the high-resolution data by estimating f. This figure is used to sum up the different parts that we have brought together. We hoped it would be useful in understanding the crux of the paper.\n\n-Describing the results as “resembling the training data” was an unfortunate choice of words. The purpose of the experiment was to visualize the RDD concept, which really states that the network learns predominant features from the training set, not the images themselves. Since we have reduced the training set to a narrow-orientation, single-edged image database, first-layer filters tend to be oriented in the same direction, which is a visualization of RDD. This does not correspond to a resemblance of the filters to the dataset itself. We have corrected this in the text.\n\n-We corrected the typos.", "Thank you very much for the detailed review of the manuscript. \nWe have revisited the manuscript to reflect all the reviewers’ comments. The proposed Representation Dictionary Duality (RDD) concept is explained in detail and the notation inconsistencies throughout the text are corrected. In addition, the current literature is updated and the differences between the proposed understanding and the literature are made clear.\n1) As the reviewer suggests, Generative Network (GN) based algorithms do not depend (solely) on the PSNR metric. Due to the lack of MSE control, the output is not faithful to the input image. Since textures are created from input images, seemingly randomly, this might cause problems in video streams. Since it is trivial to add Perceptual Loss (PL) minimization to the training procedure, in the future we plan to add PL and conduct experiments.\n2) We have modified the text to be more comprehensible. We have used the variables g, f and D throughout the text; we have put subscript L for the learning (training) phase and subscript R for the reconstruction (testing) phase.\n3) Similar to 2), we changed Section 2.1 to be more comprehensible. We have referred to all variables as f.\n4) The reviewer is correct; not using parentheses was a typo, thanks for pointing it out. It is corrected in the revised text.\n5) What we meant by “instead of approaching the problem as inverse problem” was to draw attention to the difference between the solution approaches of inverse problem solutions and DL based solutions. To avoid misunderstandings we have named the subsections “Analytic Approaches” and “Data Driven Approaches”. We described dictionary based learning in the revised manuscript. Also we added explanations on how Yang et al. have used the LR and HR libraries for reconstruction.\n6) The reviewer is correct; this was a typo that we corrected in the revised text.\n7) We have discussed the effect of size mismatch in the training procedure. Residual learning, which we have borrowed from Kim et al., automatically zero-pads the input boundaries and even the outer pixels turn out to be unspoiled. This is added into the text.\n8) We added a compact image detailing the training of a neural network in the appendix.\n9,10) In Daubechies et al. from our references, the nature of matrix K is defined as a bounded operator between two Hilbert spaces. 
Boundedness is defined according to the formula: for any given vector f from a Hilbert space, if the inequality ||Kf|| \\leq C||f|| is satisfied, where C is a constant, then the operator is bounded. Library D does not violate this assumption; we have added more explanation to the text.\n11) The RDD concept is a tool for explaining how we have incorporated inverse problem and sparse representation mathematics into the CNN training/testing procedure. We have shown that the method by which the CNN learns is through solving an inverse problem. The inverse problem constitutes a bounded operator, matrix D, which is composed of LR patches. Even though the matrix D is different in structure from conventional inverse problem operators, it satisfies the constraints to be used as an operator. The cost function that is minimized by CNN training yields a representation vector as the neuron filter, for which the dictionary is matrix D and the target is the HR image patch. Neuron parameters (filters) being the representation vectors, instead of an output from a network, is a new understanding in the literature. The resulting representation vectors (filters) from a layer of neuron filters turn into a dictionary upon which the reconstruction of the HR image is carried out during the testing phase. This is the core understanding of the RDD concept. We moved the explanations given in Appendix A into the text.\n12) For training, the same 291 images from Kim et al. have been used in a similar fashion, with different rotations and scales. Then we have separated the images into two sets by using coherence values from the LR patches. We added this information into the text. We will strongly consider jointly optimizing the two networks in the near future since we already had a goal of finding a better aggregation method.\n13) For the VDSR algorithm, the Barbara image had 26.2078 dB PSNR and 0.8039 SSIM values, whereas our DNSR achieved 26.6600 dB PSNR and 0.8091 SSIM. The cross-entropy loss had a minor effect on this improvement.\n14) Filters might not appear predominant due to the residual learning of the network or because of the filters’ small size (3x3).\n15) We have used a foundational paper for the mathematical background (Daubechies et al., 2004) and we have used a state-of-the-art paper covering all previous work, including Gregor et al.’s work (Papyan et al., 2016, 2017). We commented on Gregor et al.’s paper inside the text and highlighted the differences from our approach in the revised text. Mainly, we show that the trained neuron filters become the representation vectors." ]
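The responses in this record lean on the iterative shrinkage algorithm of Daubechies et al. (2004) and on the bounded-operator condition ||Kf|| <= C||f||. For readers who want a concrete reference point, the sketch below is a minimal, textbook ISTA implementation for min_f 0.5*||K f - g||^2 + lam*||f||_1. It is not the authors' training code; the reading in which the patch library plays the role of the operator and the recovered vector corresponds to a neuron filter is only the interpretation suggested in the responses, and the variable names simply mirror the notation used in the discussion.

```python
import numpy as np

def soft_threshold(x, tau):
    # Proximal operator of the L1 norm.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista(K, g, lam=0.1, n_iter=200):
    """Generic iterative shrinkage-thresholding for
    min_f 0.5*||K f - g||^2 + lam*||f||_1 (Daubechies et al., 2004)."""
    L = np.linalg.norm(K, 2) ** 2      # Lipschitz constant of the smooth part's gradient
    f = np.zeros(K.shape[1])
    for _ in range(n_iter):
        grad = K.T @ (K @ f - g)       # gradient of 0.5*||K f - g||^2
        f = soft_threshold(f - grad / L, lam / L)
    return f
```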
[ 6, 3, 4, -1, -1, -1 ]
[ 2, 5, 4, -1, -1, -1 ]
[ "iclr_2018_SyqAPeWAZ", "iclr_2018_SyqAPeWAZ", "iclr_2018_SyqAPeWAZ", "rkDK2NwgG", "rke8ggtxG", "rkHX_Bjlf" ]
iclr_2018_HklZOfW0W
UPS: optimizing Undirected Positive Sparse graph for neural graph filtering
In this work we propose a novel approach for learning a graph representation of the data using gradients obtained via backpropagation. Next we build a neural network architecture compatible with our optimization approach and motivated by graph filtering in the vertex domain. We demonstrate that the learned graph has richer structure than the often-used nearest-neighbor graphs constructed from feature similarity. Our experiments demonstrate that we can improve prediction quality for several convolution-on-graphs architectures, while others appear to be insensitive to the input graph.
rejected-papers
This paper addresses the problem of learning neural graph representations, based on graph filtering techniques in the vertex domain. Reviewers agreed on the fact that this paper has limited interest in its current form, and has serious grammatical issues. The AC thus recommends rejection at this time.
train
[ "HJb3ygfxf", "rk9JKKwxM", "BkuAX_h-M", "rJgKwD67M", "HkzgwPTmf", "ByT1_Pa7G", "HyeSDv6Qz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "Learning adjacency matrix of a graph with sparsely connected undirected graph with nonnegative edge weights is the goal of this paper. A projected sub-gradient descent algorithm is used. The UPS optimizer by itself is not new.\n\nGraph Polynomial Signal (GPS) neural network is proposed to address two shortcomings of GSP using linear polynomial graph filter. First, a nonlinear function sigma in (8) is used, and second, weights are shared among neighbors of every data points. There are some concerns about this network that need to be clarified:\n1. sigma is never clarified in the main context or experiments\n2. the shared weights should be relevant to the ordering of neighbors, instead of the set of neighbors without ordering, in which case, the sharing looks random.\n3. another explanation about the weights as the rescaling to matrix A needs to further clarified. As authors mentioned that the magnitude of |A| from L1 norm might be detrimental for the prediction. What is the disagreement between L1 penalty and prediction quality? Why not apply these weights to L1 norm as a weighted L1 norm to control the scaling of A?\n4. Authors stated that the last step is to build a mapping from the GPS features into the response Y. They mentioned that linear fully connected layer or a more complex neural network can be build on top of the GPS features. However, no detailed information is given in the paper. In the experiments, authors only stated that “we fit the GPS architecture using UPS optimizer for varying degree of the neighborhood of the graph”, and then the graph is used to train existing models as the input of the graph. Which architecture is used for building the mapping ?\n\nIn the experimental results, detailed definition or explanation of the compared methods and different settings should be clarified. For example, what is GPS 8, GCN_2 Eq. 9 in Table 1, and GCN_3 9 and GPS_1, GPS_2, GPS_3 and so on. More explanations of Figure 2 and the visualization method can be great helpful to understand the advantages of the proposed algorithm. \n", "There are many language issues rendering the text hard to understand, e.g.,\n-- in the abstract: \"several convolution on graphs architectures\"\n-- in the definitions: \"Let data with N observation\" (no verb, no plural, etc).\n-- in the computational section: \"Training size is 9924 and testing is 6695. \"\nso part of my negative impression may be pure mis-understanding of what\nthe authors had to say. \n\nStill, the authors clearly utilise basic concepts (c.f. \"utilize eigenvector \nbasis of the graph Laplacian to do filtering in the Fourier domain\") in ways\nthat do not seem to have any sensible interpretation whatsoever, even allowing\nfor the mis-understanding due to grammar. There are no clear insight, \nno theorems, and an empirical evaluation on an ill-defined problem in \ntime-series forecasting. (How does it relate to graphs? What is the graph \nin the time series or among the multiple time series? How do the authors\nimplement the other graph-related approaches in this problem featuring\ntime series?) My impression is hence that the only possible outcome is\n\nrejection.", "The authors develop a novel scheme for backpropagating on the adjacency matrix of a neural network graph. 
Using this scheme, they are able to provide a little bit of evidence that their scheme allows for higher test accuracy when learning a new graph structure on a couple different example problems.\n\nPros: \n-Authors provide some empirical evidence for the benefits of using their technique.\n-Authors are fairly upfront about how, overall, it seems their technique isn't doing *too* much--null results are still results, and it would be interesting to better understand *why* learning a better graph for these networks doesn't help very much.\n\nCons: \n-The grammar in the paper is pretty bad. It could use a couple more passes with an editor.\n-For a, more or less, entirely empirical paper, the choices of experiments are...somewhat befuddling. Considerably more details on implementation, training time/test time, and even just *more* experiment domains would do this paper a tremendous amount of good.\n-While I mentioned it as a pro, it also seems to be that this technique simply doesn't buy you very much as a practitioner. If this is true--that learning better graph representations really doesn't help very much, that would be good to know, and publishable, but actually *establishing* that requires considerably more experiments.\n\nUltimately, I will have to suggest rejection, unless the authors considerably beef up their manuscript with more experiments, more details, and improve the grammar considerably.\n", "Thank you for your comments. We will improve the writing of the paper. Below are the answers to some of your questions.\n\n--> Still, the authors clearly utilise basic concepts (c.f. \"utilize eigenvector basis of the graph Laplacian to do filtering in the Fourier domain\") in ways that do not seem to have any sensible interpretation whatsoever, even allowing for the mis-understanding due to grammar\n\nGraph Laplacian is a symmetric matrix and therefore it has orthonormal eigenvectors. Collection of orthonormal vectors forms a basis. Hence, eigenvector basis is a set of eigenvectors. Filtering on a graph can be performed in the spectral (Fourier) domain via Eq. (10) in the paper. Note that the formula involves eigenvectors (i.e. eigenvector basis) of the graph Laplacian.\n\n\n--> There are no clear insight\n\nWe showed that it is possible to learn a graph by building neural network differentiable with respect to the graph adjacency matrix. Next, we analyzed importance of the input graph in various settings and found that for some of the recently proposed architectures input graph does not make any noticeable difference (i.e. random, kNN and graph learned via UPS all resulted in similar performance in the cases of ChebNet and ConvNet). This result seems a bit worrisome as one would not expect to see good performance with a random graph when a neural network is built to utilize the graph.\n\n\n--> Empirical evaluation on an ill-defined problem in time-series forecasting. (How does it relate to graphs? What is the graph in the time series or among the multiple time series? How do the authors implement the other graph-related approaches in this problem featuring time series?)\n\nConsider an example: we observe wind and precipitation measurements from various weather stations. Observations from a weather station can be used to predict tomorrow’s weather at another, spatially close, weather station. In this example graph can be constructed based on additional spatial information. 
In the application we consider, such additional information is not available, and hence learning the graph is important.", "We thank all the reviewers for their comments and questions. Individual responses are provided as comments to your reviews. Unfortunately, we have not yet finished the revision of the manuscript as we are working on a more methodological way to assess the role of the input graph in the cases of ChebNet and ConvNet. In the case of the 20 Newsgroups data, we have experimented with different node degree distributions for generating a random graph (at a fixed sparsity level). Additionally, we considered an extreme case of a graph with a randomly chosen subset of vertices forming a fully connected component, while all other vertices are mutually disconnected. We observed that none of these (not even the extreme scenario) altered the behavior of either ChebNet or ConvNet. This can be explained by the usage of too many learnable parameters in the ChebNet and ConvNet architectures, making it possible for them to adjust to any input graph in a high-dimensional setting. We are currently working on exploring a lower-dimensional scenario via simulation experiments. The GPS architecture (with a fixed graph) performed noticeably worse in the extreme scenario of random graph generation. ", "Thank you for your comments. Below are the answers to some of your questions.\n\n--> 1. sigma is never clarified in the main context or experiments\n\nSigma is a ReLU in our experiments.\n\n--> 2. the shared weights should be relevant to the ordering of neighbors, instead of the set of neighbors without ordering, in which case, the sharing looks random.\n\nThe ordering of neighbors is fixed to be in alignment with the order of vertices in the graph adjacency matrix. Computationally, this is easily achievable by taking the inner product of a row of the graph adjacency matrix and a weight vector (the weight vector is shared across rows).\n\n--> 3. another explanation about the weights as the rescaling to matrix A needs to further clarified. As authors mentioned that the magnitude of |A| from L1 norm might be detrimental for the prediction. What is the disagreement between L1 penalty and prediction quality? Why not apply these weights to L1 norm as a weighted L1 norm to control the scaling of A?\n\nThe L1 penalty acts as a regularizer, so the optimization approach will favor a sparser graph at the cost of sacrificing some of the performance. The same phenomenon is observed in linear regression: coefficients learned with the LASSO penalty are biased, and refitting the regression with only the selected variables generally improves the predictive performance. If we apply weights to the L1 norm of the graph adjacency and try to optimize for these weights, it will blow up the objective function (\"optimal\" weights will go to minus infinity).\n\n--> 4. Authors stated that the last step is to build a mapping from the GPS features into the response Y. They mentioned that linear fully connected layer or a more complex neural network can be build on top of the GPS features. However, no detailed information is given in the paper. In the experiments, authors only stated that “we fit the GPS architecture using UPS optimizer for varying degree of the neighborhood of the graph”, and then the graph is used to train existing models as the input of the graph. 
Which architecture is used for building the mapping ?\n\nWe used a linear mapping from the GPS features to Y in the experiments.\n\n--> In the experimental results, detailed definition or explanation of the compared methods and different settings should be clarified. For example, what is GPS 8, GCN_2 Eq. 9 in Table 1, and GCN_3 9 and GPS_1, GPS_2, GPS_3 and so on. More explanations of Figure 2 and the visualization method can be great helpful to understand the advantages of the proposed algorithm.\n\nIn GPS 8 and GCN 9, the 8 and 9 are the corresponding equation references. The subscript numbers correspond to the number of layers for the GCN and to the maximum degree of the adjacency matrix polynomial for the GPS. We will improve the clarity of the experimental section and add the requested details.", "Thank you for your comments.\n\n--> If this is true--that learning better graph representations really doesn't help very much, that would be good to know, and publishable, but actually *establishing* that requires considerably more experiments.\n\nWe agree with your opinion and are working on additional simulated and real-data experiments to investigate the observed phenomena. We summarized the findings in our general response comment.\n\nWe will revise the manuscript to improve the grammar." ]
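As a rough illustration of the ideas discussed in these responses, the sketch below shows a projection step that keeps a learned adjacency matrix undirected, nonnegative, and zero-diagonal (in the spirit of a projected sub-gradient UPS step), and a neighborhood filtering step with a ReLU and a weight vector shared across rows of the adjacency. This is not the paper's Eq. (8) or the actual UPS optimizer; in particular, treating the shared weights as a rescaling of A before neighbor aggregation is an assumption, chosen only to make the response to question 2 concrete.

```python
import numpy as np

def ups_project(A):
    """Projection after a (sub)gradient step on the adjacency matrix:
    keep it symmetric (undirected), nonnegative, and zero on the diagonal."""
    A = 0.5 * (A + A.T)
    A = np.maximum(A, 0.0)
    np.fill_diagonal(A, 0.0)
    return A

def gps_filter(A, X, w):
    """One neighborhood-filtering step in the spirit of the GPS layer.

    A : (N, N) learned adjacency, X : (N, d) node features,
    w : (N,) weight vector shared across rows of A, aligned with the fixed
        vertex ordering mentioned in the response to question 2.
    The shared weights are applied as a rescaling of A's columns before
    neighbor aggregation (an assumption); sigma is a ReLU, as stated above.
    """
    return np.maximum((A * w) @ X, 0.0)
```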
[ 6, 3, 4, -1, -1, -1, -1 ]
[ 3, 3, 3, -1, -1, -1, -1 ]
[ "iclr_2018_HklZOfW0W", "iclr_2018_HklZOfW0W", "iclr_2018_HklZOfW0W", "rk9JKKwxM", "iclr_2018_HklZOfW0W", "HJb3ygfxf", "BkuAX_h-M" ]
iclr_2018_H11lAfbCW
On Characterizing the Capacity of Neural Networks Using Algebraic Topology
The learnability of different neural architectures can be characterized directly by computable measures of data complexity. In this paper, we reframe the problem of architecture selection as understanding how data determines the most expressive and generalizable architectures suited to that data, beyond inductive bias. After suggesting algebraic topology as a measure for data complexity, we show that the power of a network to express the topological complexity of a dataset in its decision boundary is a strictly limiting factor in its ability to generalize. We then provide the first empirical characterization of the topological capacity of neural networks. Our empirical analysis shows that at every level of dataset complexity, neural networks exhibit topological phase transitions and stratification. This observation allowed us to connect existing theory to empirically driven conjectures on the choice of architectures for single-hidden-layer neural networks.
rejected-papers
This paper attempts to connect the expressivity of neural networks with a measure of topological complexity. The authors present some empirical results on simplified datasets. All reviewers agreed that this is an intriguing line of research, but that the current manuscript is still presenting preliminary results, and that further work is needed before it can be published.
train
[ "SJHp-7Klz", "B19Fsy5gM", "Sy4l2B9gG", "Sk2OMTrMz", "Sk4KcpHMf", "BkmNYaBGz", "rkA_SaHMz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "Paper Summary:\n\nThis paper looks at empirically measuring neural network architecture expressivity by examining performance on a variety of complex datasets, measuring dataset complexity with algebraic topology. The paper first introduces the notion of topological equivalence for datasets -- a desirable measure to use as it is invariant to superficial differences such as rotation, translation and curvature. The definition of homology from algebraic topology can then be used as a robust measure of the \"complexity\" of a dataset. This notion of difficulty focuses roughly on determining the number of holes of dimension n (for varying n) there are in the dataset, with more holes roughly leading to a more complex connectivity pattern to learn. They provide a demonstration of this on two synthetic toy datasets in Figure 1, training two (very small -- 12 and 26 neuron) single hidden layer networks on these two datasets, where the smaller of the two networks is unable to learn the data distribution of the second dataset. These synthetic datasets have a well defined data distribution, and for an empirical sample of N points, a (standard) method of determining connectivity by growing epsilon balls around each datapoint in section 2.3.\n\nThe authors give a theoretical result on the importance of homology: if a binary classifier has support homology not equal to the homology of the underlying dataset, then there is at least one point that is misclassified by the classifier. Experiments are then performed with single hidden layer networks on synthetic datasets, and a phase transition is observed: if h_phase is the number of hidden units where the phase transition happens, and h' < h < h_phase, h' has higher error and takes longer to converge than h. Finally, the authors touch on computing homology of real datasets, albeit with a low dimensional projection (e.g. down to 3 dimensions for CIFAR-10).\n\nMain Comments\n\nThe motivation to consider algebraic topology and dataset difficulty is interesting, but I think this method is ultimately ill suited and unable to be adapted to more complex and interesting settings. In particular, the majority of experiments and justification of this method comes from use on a low dimensional manifold with either known data distribution, or with a densely sampled manifold. (The authors look at using CIFAR-10, but project this down to 3 dimensions -- as current methods for persistent homology cannot scale -- which somewhat invalidates the goal of testing this out on real data.) This is an important and serious drawback because it seems unlikely that the method described in Figure 3 of determining the connectivity patterns of a dataset are likely to yield insightful results in a high dimensional space with very few datapoints (in comparison to 2^{dimension}), where distance between datapoints is unlikely to have any nice class related correspondence.\n\nFurthermore, while part of the motivation of this paper is to use dataset complexity measured with topology to help select architectures, experiments demonstrating that this might be useful are very rudimentary. All experiments only look at single hidden layers, and the toy task in Figure 1 and in section 3.2.1 and Figure 5 use extremely small networks (hidden size 12-26). It's hard to be convinced that these results necessarily generalize even to other larger hidden layer models. On real datasets, exploring architectures does not seem to be done at all (Section 4).\n\n\nMinor Comments\nSome kind of typo in Thm 1? 
(for all f repeated twice)\nSmall typos (missing spaces) in related work and conclusion\nHow is h_phase determined? Empirically? (Or is there a construction?)\n\nReview Summary:\n\nThis paper is not ready to be accepted.", "The authors propose to use the homology of the data as a measurement of the expressibility of a deep neural network. The paper is mostly experimental. The theoretical section (3.1) is only reciting existing theory (Bianchini et al.). Theorem 3.1 is not surprising either: it basically says spaces with different topologies differ at some parts. \n\nAs for the experiments, the idea is tested on synthetic and real data. On synthetic data, it is shown that the number of neurons of the network is correlated with the homology it can express. On real data, the tool of persistent homology is applied. It is observed that the data in the final layer do have non-trivial signal in terms of persistent homology.\n\nI do like the general idea of the paper. It has great potentials. However, it is much undercooked. In particular, it could be improved as follows:\n\n* 1) the main message of the paper is unclear to me. Results observed in the synthetic experiments seem to be a confirmation of the known results by Bianchini et al.: the Betti number a network can express is linear to the number of hidden units, h, when the input dimension n is a constant. \n\nTo be convinced, I would like to see much stronger experimental evidence: Reporting results on a single layer network is unsettling. It is known that the network expressibility is highly related to the depth (Eldan & Shamir 2016). So what about networks with more layers? Is the stratification observation statistically significant? These experiments are possible for synthetic data. \n\n* 2) The usage of persistent homology is not well justified. A major part of the paper is devoted to persistent homology. It is referred to as a robust computation of the homology and is used in the real data experiments. However, persistent homology itself was not originally invented to recover the homology of a fixed space. It was intended to discover homology groups at all different scales (in terms of the function value). Even with the celebrated stability theorem (Cohen-Steiner et al. 2007) and statistical guarantees (Chazal et al. 2015), the relationship between the Vietoris-Rips filtration persistent homology and the homology of the classifier region/boundary is not well established. To make a solid statement, I suggest authors look into the following papers\n\nHomology and robustness of level and interlevel sets\nP Bendich, H Edelsbrunner, D Morozov, A Patel, Homology, Homotopy and Applications 15 (1), 51-72, 2013\n\nHerbert Edelsbrunner, Michael Kerber: Alexander Duality for Functions: the Persistent Behavior of Land and Water and Shore. Proceedings of the 28th Annual Symposium on Computational Geometry, pp. 249-258 (SoCG 2012)\n\nThere are also existing work on how the homology of a manifold or stratified space can be recovered using its samples. They could be useful. But the settings are different: in this problem, we have samples from the positive/negative regions, rather than the classification boundary. \n\nFinally, the gap in concepts carries to experiments. When persistent homology of different real data are reported. It is unclear how they reflect the actually topology of the classification region/boundary. There are also significant amount of approximation due to the natural computational limitation of persistent homology. 
In particular, LLE and subsampling are used for the computation. These methods can significantly hurt persistent homology computation. A much more proper way is via the sparsification approach. \n\nSimBa: An Efficient Tool for Approximating Rips-Filtration Persistence via Simplicial Batch-Collapse\nT. K. Dey, D. Shi and Y. Wang. Euro. Symp. Algorithms (ESA) 2016, 35:1--35:16\n\n* 3) Finally, to support the main thesis, it is crucial to show that the topological measure is revealing information existing ones do not. Some baseline methods such as other geometric information (e.g., volume and curvature) are quite necessary.\n\n* 4) Important papers about persistent homology in learning could be cited:\n\nUsing persistent homology in deep convolutional neural network:\n\nDeep Learning with Topological Signatures\nC. Hofer, R. Kwitt, M. Niethammer and A. Uhl, NIPS 2017\n\nUsing persistent homology as kernels:\n\nSliced Wasserstein Kernel for Persistence Diagrams\nMathieu Carrière, Marco Cuturi, Steve Oudot, ICML 2017.\n\n* 5) Minor comments:\n\nSmall typos here and there: y axis label of Fig 5, conclusion section.\n\n", "General comments:\n\nThe paper is largely inspired by a recent work of Bianchini et al. (2014) on upper bounds of Betti number sums for decision super-level sets of neural networks in different architectures. It explores empirically the relations between Betti numbers of input data and hidden unit complexity in a single hidden layer neural network, in a purpose of finding closer connections on topological complexity or expressibility of neural networks. \n\nThey report the phenomenon of phase transition or turning points in training error as the number of hidden neurons changes in their experiment. The phenomenon of turning points has been observed in many experiments, where usually researchers investigate it through the critical points of training loss such as local optimality and/or saddle points. For the first time, the paper connects the phenomenon with topological complexity of input data and decision super-level sets, as well as number of hidden units, which is inspiring. \n\nHowever, a closer look at the experimental study finds some inconsistencies or incompleteness which deserves further investigations. The following are some examples. \n\nThe paper tries to identify a phase transition in number of hidden units, h_phase(D_2) = 10 from the third panel of Figure 4. However, when h=12 hidden units, the curve is above h=10 rather than below it in expectation. Why does the order of errors disagree with the order of architectures if the number of hidden neurons is larger then h_phase?\n\nThe author conjecture that if b0 = m, then m+2 hidden neurons are sufficient to get 0 training error.\nBut the second panel of fig4 seems to be a counterexample of the conjecture. In fact h_phase(D_0 of b_0=2)=4 and h_phase (D_1 of b_0 = 3) = 6, as pointed out by the paper, has a mismatch on such a numerical conjecture. \n\nIn Figure 5, the paper seems to relate the homological complexities of data to the hidden dimensionality in terms of zero training error. What are the relations between the homological complexities of data and homological complexities of decision super-level sets of neural networks in training? Is there any correspondence between them in terms of topological transitions. \n\nThe study is restricted to 2-dimensional synthetic datasets. Although they applied topological tools to low-dimensional projection of some real data, it's purely topological data analysis. 
They didn't show any connection with the training or learning of neural networks. So this part is just preliminary but incomplete to the main topic of the paper.\n\nThe authors need to provide more details about their method and experiments. For example, The author didn't show from which example fig6 is generated. For other figures appended at the end of the paper, there should also be detailed descriptions of the underlying experiments.\n\n\nSome Details:\n\nLines in fig4 are difficult to read, there are too many similar colors. Axis labels are also missing.\n\nIn fig5, the (4, 0)-item appears twice, but they are different. They should be the same, but they are not.\nAny mistake here?\n\nFig6(a) has the same problem as fig4. Besides, the use of different x-axis makes it difficult to compare with fig4.\nfig6(b) needs a color bar to indicate the values of correlations. \n\nSome typos, e.g. Page 9 Line 2, 'ofPoole' should be 'of Poole'; Line 8, 'practical connectio nbetween' should be 'practical connection between'; line 3 in the 4th paragraph of page 9, 'the their are' seems to be 'there are'. Spell check is recommended before final version. \n", "We thank the reviewer for their valuable feedback and useful comments, and in particular we intend on addressing several points therein.\n\n>>>>> This is an important and serious drawback because it seems unlikely that the method described in Figure 3 of determining the connectivity patterns of a dataset are likely to yield insightful results in a high dimensional space with very few datapoints (in comparison to 2^{dimension}), where distance between datapoints is unlikely to have any nice class related correspondence. <<<<<\n\nFrom a conceptual standpoint, there is no reason why persistent homology would be unable to determine the topology of a dataset in the high-dimensional setting. Specifically our method applies persistent homology directly to the individual classes of a dataset, and so the concern of distance sparsity invalidating class related correspondence does not apply. Furthermore, computation of persistent homology on high dimensional datasets via low-dimensional isometric embedding is equivalent to that on the original dataset; in other words, we present our methodology on CIFAR-10 as a template for how such computations can be done practically on real-world datasets without invalidating the integrity of the topological complexity computed. Relative to the latent space, the topological dimension of each individual class in CIFAR-10 is actually quite small in comparison to the ambient space\n\n\nThe reviewer points out several concerns which we intend on addressing in a revision of the paper. First, the latent dimension given in the isometric embedding of CIFAR-10 is far too low in the first revision of the paper. Second, we intend on giving a more substantial characterization of larger hidden layer models as well as extending the empirical analysis to decision boundary (not just region) complexity. Lastly, we intend on taking results of computations on the various real world datasets given and suggesting and testing various architectures given by our characterization.", "We would like thank the reviewer for their insightful comments and perspective. 
In particular, there were several concerns as to the completeness and consistency of the analysis given in the submission that we would like to address and correct.\n\nOver the course of developing the manuscript, several figures were rendered at different levels of completion of the main experiments. The different (sub)figures contain an accurate reflection of the data collected at their time of rendering, but as a result of the substantial duration of some of these experiments, our oversight has led to the mentioned inconsistencies. In order to maintain a general level of transparency and rigor in our work, we will promptly rerun the major experiments of the paper, publish all of our code on GitHub, rerender the given figures, and provide a full addendum to the paper which exactly indicates our experimental methodology.\n\n\n\n>>>>> The author conjecture that if b0 = m, then m+2 hidden neurons are sufficient to get 0 training error. But the second panel of fig4 seems to be a counterexample of the conjecture. In fact h_phase(D_0 of b_0=2)=4 and h_phase (D_1 of b_0 = 3) = 6, as pointed out by the paper, has a mismatch on such a numerical conjecture. <<<<<\n\nIn our experiment we intended to present empirically driven conjectures on a lower bound for the homological expressivity of networks. We thank the reviewer for pointing out a flaw in the statement of the conjecture: in particular, the statement should be that there exists a dataset $\\mathcal{D}$ with $H_0(\\mathcal{D}) = \\mathbb{Z}^m$ such that a single hidden layer neural network with $h = m + 2$ hidden units converges to zero error on $\\mathcal{D}$. In the case of the dataset given in the second panel of Figure 4, although it satisfies this homological property, it indeed needs 6 hidden units. However, the following is a construction of a dataset which satisfies the existence claim in the conjecture. Take $\\mathcal{D}$ such that 3 horizontally separated columns of data points extend vertically in $\\mathbb{R}^2$; then the superposition of two “cylindrical” bumps and one half-space suffices to cover this dataset. We amended our conjecture to include similar such constructions. Despite the existence of datasets satisfying the foregoing constraints, we intend on changing the statement to one of the existence of neural networks that express the given homology and not those which train to zero error.\n\n\n>>>>> In fig5, the (4, 0)-item appears twice, but they are different. <<<<<\n\nThis was an oversight in rendering the second, third, and fourth panels at a point in the experiment before its completion. As aforementioned, we will rerun the main experiments and update the paper with the proper renderings.\n\n\n>>>> The paper tries to identify a phase transition in number of hidden units, h_phase(D_2) = 10 from the third panel of Figure 4. However, when h=12 hidden units, the curve is above h=10 rather than below it in expectation. Why does the order of errors disagree with the order of architectures if the number of hidden neurons is larger then h_phase? <<<<<\n\nAlthough we did not reserve enough space in the manuscript to mention this, this isn’t just an inconsistency of Figure 4, but something we noticed ubiquitously as we increased the homological complexity of the dataset: before the phase transition point there appears to be a strict order of architectures; after this it seems to give way to noise. 
In order to further investigate this phenomena, we intend on increasing the number of networks trained and the variation in the datasets given in a final draft of the work.\n\n\n>>>>> In Figure 5, the paper seems to relate the homological complexities of data to the hidden dimensionality in terms of zero training error. What are the relations between the homological complexities of data and homological complexities of decision super-level sets of neural networks in training? Is there any correspondence between them in terms of topological transitions. <<<<<\n\nWe should add to the paper that Figure 5 gives the final testing error of neural networks with respect to homological complexity of datasets. The testing error gives a full representation of a network to express the homological complexity of the dataset in the homology of its own decision boundary; that is, at the end of training the networks express the homology of the given dataset.\n\n\n>>>>> Although they applied topological tools to low-dimensional projection of some real data, it's purely topological data analysis. <<<<<\n\nWe agree that the analysis of real data is incomplete as network architectures with networks at a predicted h_phase point were not tested, and we fully intend on completing this analysis in the final draft of the work. There is however some merit in applying TDA to standard benchmark datasets in order to demonstrate the existence of non-trivial topological features therein.\n", ">>>>> Persistent homology itself was not originally invented to recover the homology of a fixed space. It was intended to discover homology groups at all different scales (in terms of the function value). <<<<<\n\nIn a morse theoretic sense, the reviewer’s definition of persistent homology is absolutely correct. However in the seminal work of Zomorodian and Carlsson, persistence homology is motivated in three contexts, recovering the static homology of a space from its point cloud, recovering the static homology of a space from a point cloud sampled from a distribution concentrated on a static space, and the reviewer’s given motivation, recovering homology of submanifolds according to excursion sets of a Morse function. It’s fair to say our use of persistent homology falls squarely in the first two categories.\n\nFor example, “Persistence complexes arise naturally whenever one is attempting to study topological invariants of a space computationally. Often, our knowledge of this space is limited and imprecise. Consequently, we must utilize a multiscale approach to capture the connectivity of the space, giving us a persistence complex.” (Zomorodian and Carlsson, 2005)\n\nIn particular, immediately following this statement they contextualize persistent homology in the context of trying to estimate the static topological invariants of a fixed space X from point cloud samples: “Example 1.1 (point cloud data) Suppose we are given a finite set of points X from a subspace Y ∈ R^n. We call X point cloud data or PCD for short. It is reasonable to believe that if the sampling is dense enough, we should be able to compute the topological invariants of Y directly from the PCD. To do so, we may either compute the Cech ˇ complex, or approximate it via a Rips complex [15]. [...].” (Zomorodian and Carlsson, 2005)\n\n>>>>> The usage of persistent homology is not well justified. [...] Even with the celebrated stability theorem (Cohen-Steiner et al. 2007) and statistical guarantees (Chazal et al. 
2015), the relationship between the Vietoris-Rips filtration persistent homology and the homology of the classifier region/boundary is not well established. <<<<<\n\nThe one-versus-all setting studied in this work is limited of course in the case where the geometric complexity of the decision boundary is simpler than that of the individual classes, themselves. However, for the case of finding sufficiently powerful architectures we feel that this is a good starting point, as in the worst case, the decision boundary will inherit the complexity of the individual classes. Moreover, in the generative and unsupervised setting, capturing the support of the distributions of each class is a necessity, and therefore the use of persistent homology on individual classes directly applies.\n\nOn the other hand, there has been recent work on directly building simplicial complexes between multiple classes which empirically characterizes the decision boundary sufficiently to aid in optimal kernel selection (Varshney and Ramamurthy, 2015). We will provide a comparison between the foregoing method and ours for characterizing learned decision boundary topology in a final version of this work.\n\n\n\n>>>>> 3) Finally, to support the main thesis, it is crucial to show that the topological measure is revealing information existing ones do not. Some baseline methods such as other geometric information (e.g., volume and curvature) are quite necessary. <<<<<\n\nTo our knowledge, our work gives the first relationship between a computable measure of data complexity and the learnability of architectures with respect to that measure. If the reviewer knows of any other such similar baselines to which we can compare, we would greatly appreciate if references could be provided. From a theoretical perspective, it is clear that topology reveals geometric information that volume and curvature do not, so we feel that if a baseline is to be made, then the comparison would be strictly empirical. However, without other related literature relating geometric information to architecture optimality, we believe the theoretical results of Bianchini et al. are a good baseline.\n\n\n#################\nReferences:\n\nBasu, S., 1996, July. On bounding the Betti numbers and computing the Euler characteristic of semi-algebraic sets. In Proceedings of the twenty-eighth annual ACM symposium on Theory of computing (pp. 408-417). ACM.\n\nZomorodian A, Carlsson G. Computing persistent homology. Discrete & Computational Geometry. 2005 Feb 1;33(2):249-74.\n\nVarshney, Kush R., and Karthikeyan Natesan Ramamurthy. \"Persistent topology of decision boundaries.\" Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on. IEEE, 2015.", "We thank the reviewer for the valuable feedback and numerous suggestions for improvement. We will address and further inquire into each point individually.\n\n\n>>>>>The authors propose to use the homology of the data as a measurement of the expressibility of a deep neural network. The paper is mostly experimental. The theoretical section (3.1) is only reciting existing theory (Bianchini et al.). Theorem 3.1 is not surprising either: it basically says spaces with different topologies differ at some parts. <<<<<\n\nWhile we agree that those with some background in topology (including the authors) would view Theorem 3.1 as following trivially, the background and motivation of this paper is aimed towards the deep learning community at large. 
To illustrate that topology is at least a meaningful minimality condition for the expressivity of neural architectures, we think that the proposition is an important conceptual stepping stone. There’s a tradeoff between highlighting the conceptual importance and rigorous underpinnings to readers unversed in topology, and acknowledging simplicity and triviality to those who are. In this case, we felt the former to be more useful to the general audience.\n\n\n>>>>> * 1) the main message of the paper is unclear to me. Results observed in the synthetic experiments seem to be a confirmation of the known results by Bianchini et al.: the Betti number a network can express is linear to the number of hidden units, h, when the input dimension n is a constant. <<<<<\n\nIn this work, we wish to give an exact empirical characterization of each individual Betti number, demonstrate the topological phenomena that arise during training with stochastic gradient descent, and most importantly show that there is potential to use topology as a measure of data complexity to give a reasonable range of trial architectures for architecture search or human-aided architecture selection.\n\nWe would like to highlight that in Bianchini’s work, it is only shown that there is an exact linear relationship for the sum of Betti numbers for the arctan activation. Furthermore, the bounds given in Bianchini’s work are a result of landmark work on bounding the sum of Betti numbers of semi-algebraic sets (Basu, 1999), which then led to the Pfaffian bounds used. From a theoretical perspective, Basu (1999) actually gives a bounding method for individual Betti numbers in the long exact Mayer-Vietoris sequence, but subsumes those methods with a bound on the sum. \n\nIn the context of architecture selection, bounding each individual Betti number is a crucial next step in developing the pipeline from persistent homology to a minimal architecture. Furthermore, we feel an illustration of the empirical study of topological expressivity is essential in gauging the practical utility of these bounds.\n\n>>>>> To be convinced, I would like to see much stronger experimental evidence: Reporting results on a single layer network is unsettling. It is known that the network expressibility is highly related to the depth (Eldan & Shamir 2016). So what about networks with more layers? Is the stratification observation statistically significant? These experiments are possible for synthetic data. <<<<<\n\nWe thank the reviewer for this suggestion. We hope to complete experiments with more layers and different activation functions in the final draft of the manuscript. As to the stratification observation, we will add $p$ values for appropriate hypothesis tests." ]
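The responses in this record describe applying Vietoris–Rips persistent homology directly to point-cloud samples of each class and reading off (persistent) Betti numbers. The sketch below shows what such a per-class computation could look like using the ripser package; the package choice and the lifetime threshold used to separate prominent features from noise are assumptions of this illustration, not details taken from the paper.

```python
import numpy as np
from ripser import ripser  # scikit-tda's Vietoris-Rips implementation (assumed available)

def estimate_betti_numbers(points, maxdim=1, min_lifetime=0.1):
    """Rough per-class Betti-number estimate from a point-cloud sample.
    Features whose persistence (death - birth) exceeds min_lifetime are
    counted; that threshold is a heuristic, not part of the paper."""
    dgms = ripser(points, maxdim=maxdim)['dgms']
    betti = []
    for dgm in dgms:
        lifetimes = dgm[:, 1] - dgm[:, 0]          # infinite bars count as persistent
        betti.append(int(np.sum(lifetimes > min_lifetime)))
    return betti

# Example: two well-separated noisy circles should give roughly b0 = 2, b1 = 2.
theta = np.random.uniform(0, 2 * np.pi, 200)
circle = np.stack([np.cos(theta), np.sin(theta)], axis=1)
cloud = np.concatenate([circle, circle + [3.0, 0.0]]) + 0.05 * np.random.randn(400, 2)
print(estimate_betti_numbers(cloud, maxdim=1))
```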
[ 3, 4, 4, -1, -1, -1, -1 ]
[ 5, 5, 5, -1, -1, -1, -1 ]
[ "iclr_2018_H11lAfbCW", "iclr_2018_H11lAfbCW", "iclr_2018_H11lAfbCW", "SJHp-7Klz", "Sy4l2B9gG", "rkA_SaHMz", "B19Fsy5gM" ]
iclr_2018_Bk7wvW-C-
Exploring Asymmetric Encoder-Decoder Structure for Context-based Sentence Representation Learning
Context information plays an important role in human language understanding, and it is also useful for machines to learn vector representations of language. In this paper, we explore an asymmetric encoder-decoder structure for unsupervised context-based sentence representation learning. As a result, we build an encoder-decoder architecture with an RNN encoder and a CNN decoder, and we show that neither an autoregressive decoder nor an RNN decoder is required. We further combine a suite of effective designs to significantly improve model efficiency while also achieving better performance. Our model is trained on two different large unlabeled corpora, and in both cases transferability is evaluated on a set of downstream language understanding tasks. We empirically show that our model is simple and fast while producing rich sentence representations that excel in downstream tasks.
rejected-papers
Here, yet another sentence representation method is proposed. I agree with R1 and R3 that this does not contribute significantly enough to warrant a full-length conference paper.
train
[ "SyU_UK2lf", "Skrfq_Jlz", "Hy4cMGVlf", "r1OWJupQz", "HJXUns5QG", "HyeAFeUGM", "BJBaulIMG", "ryqHPgUMG", "B1hPJJd0-", "rJBQsOt0b", "rknG_y_Cb", "rJtgBTDCZ" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "public", "public" ]
[ "\n-- updates to review: --\n\nThanks for trying to respond to my comments. I find the new results very interesting and fill in some empirical gaps that I think were worth investigating. I'm now more confident that this paper is worth publishing and I increased my rating from 6 to 7. \n\nI admit that this is a pretty NLP-specific paper, but to the extent that ICLR has core NLP papers (I think it does have some), I think the paper is a reasonable fit for ICLR. It might feel more at home at a *ACL conference though. \n\n-- original review is below: --\n\nThis paper is about modifications to the skip-thought framework for learning sentence embeddings. The results show performance comparable to or better than skip-thought while decreasing training time. \n\nI think the overall approach makes sense: use an RNN encoder because we know it works well, but improve training efficiency by changing the decoder to a combination of feed-forward and convolutional layers. \n\nI think it may be the case that this works well because the decoder is not auto-regressive but merely predicts each word independently. This is possible because the decoder will not be used after training. So all the words can be predicted all at once with a fixed maximum sentence length. In typical encoder-decoder applications, the decoder is used at test time to get predictions, so it is natural to make it auto-regressive. But in this case, the decoder is thrown away after training, so it makes more sense to make the decoder non-auto-regressive. I think this point should be made in the paper. \n\nAlso, I think it's worth noting that an RNN decoder could be used in a non-auto-regressive architecture as well. That is, the sentence encoding could be mapped to a sequence of length 30 as is done with the CNN decoder currently; then a (multi-layer) BiLSTM could be run over that sequence, and then a softmax classifier can be attached to each hidden vector to predict the word at that position. It would be interesting to compare that BiLSTM decoder with the proposed CNN decoder and also to compare it to a skip-thought-style auto-regressive RNN decoder. This would let us understand whether the benefit is coming more from the non-auto-regressive nature of the decoder or from the CNN vs RNN differences. \n\nThat is, it would make sense to factor the decision of decoder design along multiple axes. One axis could be auto-regressive vs predict-all-words. Another axis could be using a CNN over the sequence of word positions or an RNN over the sequence of word positions. For auto-regressive models, another axis could be train using previous ground-truth word vs train using previous predicted word. Skip-thought corresponds to an auto-regressive RNN (using the previous ground-truth word IIRC). The proposed decoder is a predict-all-words CNN. It would be natural to also experiment with an auto-regressive CNN and a predict-all-words RNN (like what I described in the paragraph above). The paper is choosing a single point in the space and referring to it as a \"CNN decoder\" whereas there are many possible architectures that can be described this way and I think it would strengthen the paper to increase the precision in discussing the architecture and possible alternatives. \n\nOverall, I think the architectural choices and results are strong enough to merit publication. Adding any of the above empirical comparisons would further strengthen the paper. 
\n\nHowever, I did have quibbles with some of the exposition and some of the claims made throughout the paper. They are detailed below:\n\nSec. 2:\n\nIn the \"Decoder\" paragraph: please add more details about how the words are predicted. Are there final softmax layers that provide distributions over output words? I couldn't find this detail in the paper. What loss is minimized during training? Is it the sum of log losses over all words being predicted?\n\nSec. 3:\n\nSection 3 does not add much to the paper. The motivations there are mostly suggestive rather than evidence-based. Section 3 could be condensed by about 80% or so without losing much information. Overall, the paper has more than 10 pages of content, and the use of 2 extra pages beyond the desired submission length of 8 should be better justified. I would recommend adding a few more details to Section 2 and removing most of Section 3. I'll mention below some problematic passages in Section 3 that should be removed.\n\nSec. 3.2:\n\"...this same constraint (if using RNN as the decoder) could be an inappropriate constraint in the decoding process.\" What is the justification or evidence for this claim? I think the claim should be supported by an argument or some evidence or else should be removed. If the authors intend the subsequent paragraphs to justify the claim, then see my next comments. \n\nSec. 3.2:\n\"The existence of the ground-truth current word embedding potentially decreases the tendency for the decoder to exploit other information from the sentence representation.\"\nBut this is not necessarily an inherent limitation of RNN decoders since it could be addressed by using the embedding of the previously-predicted word rather than the ground-truth word. This is a standard technique in sequence-to-sequence learning; cf. scheduled sampling (Bengio et al., 2015). \n\nSec. 3.2: \n\"Although the word order information is implicitly encoded in the CNN decoder, it is not emphasized as it is explicitly in the RNN decoder. The CNN decoder cares about the quality of generated sequences globally instead of the quality of the next generated word. Relaxing the emphasis on the next word, may help the CNN decoder model to explore the contribution of context in a larger space.\"\nAgain, I don't see any evidence or justification for these arguments. Also see my discussion above about decoder variations; these are not properties of RNNs vs CNNs but rather properties of auto-regressive vs predict-all-words decoders. \n\nSec. 5.2-5.3:\nThere are a few high-level decisions being tuned on the test sets for some of the tasks, e.g., the length of target sequences in Section 5.2 and the number of layers and channel size in Section 5.3. \n\nSec. 5.4:\nWhen trying to explain why an RNN encoder works better than a CNN encoder, the paper includes the following: \"We stated above that, in our belief, explicit usage of the word order information will augment the transferability of the encoder, and constrain the search space of the parameters in the encoder. The results match our belief.\"\nI don't think these beliefs are concrete enough to be upheld or contradicted. Both encoders explicitly use word order information. Can you provide some formal or theoretical statement about how the two encoders treat word order differently? I fear that it's only impressions and suppositions that lead to this difference, rather than necessarily something formal. \n\nSec. 
5.4:\nIn Table 1, it is unclear why the \"future predictor\" model is the one selected to be reported from Gan et al (2017). Gan et al has many settings and the \"future predictor\" setting is the worst. An explanation is needed for this choice. \n\nSec. 6.1: \n\nIn the \"BYTE m-LSTM\" paragraph:\n\n\"Our large RNN-CNN model trained on Amazon Book Review (the largest subset of Amazon Review) performs on par with BYTE m-LSTM model, and ours works better than theirs on semantic relatedness and entailment tasks.\" I'm not sure this \"on par\" assessment is warranted by the results in Table 2. BYTE m-LSTM is better on MR by 1.6 points and better on CR by 4.7 points. The authors' method is better on SUBJ by 0.7 and better on MPQA by 0.5. So on sentiment tasks, BYTE m-LSTM is clearly better, and on the other tasks the RNN-CNN is typically better, especially on SICK-r. \n\n\nMore minor things are below:\n\nSec. 1:\nThe paper contains this: \"The idea of learning from the context information was first successfully applied to vector representation learning for words in Mikolov et al. (2013b)\"\n\nI don't think this is accurate. When restricting attention to neural network methods, it would be more correct to give credit to Collobert et al. (2011). But moving beyond neural methods, there were decades of previous work in using context information (counts of context words) to produce vector representations of words. \n\ntypo: \"which d reduces\" --> \"which reduces\"\n\nSec. 2:\nThe notation in the text doesn't match that in Figure 1: w_i^1 vs. w_1 and h_i^1 vs h_1. \n\nInstead of writing \"non-parametric composition function\", describe it as \"parameter-free\". \"Non-parametric\" means that the number of parameters grows with the data, not that there are no parameters. \n\nIn the \"Representation\" paragraph: how do you compute a max over vectors? Is it a separate max for each dimension? This is not clear from the notation used.\n\nSec. 3.1:\ninappropriate word choice: the use of \"great\" in \"a great and efficient encoding model\"\n\nSec. 3.2:\ninappropriate word choice: the use of \"unveiled\" in \"is still to be unveiled\"\n\nSec. 3.4:\nTying input and output embeddings can be justified with a single sentence and the relevant citations (which are present here). There is no need for speculation about what may be going on, e.g.: \"the model learns to explore the non-linear compositionality of the input words and the uncertain contribution of the target words in the same space\".\n\nSec. 4:\nI think STS14 should be defined and cited where the other tasks are described. \n\nSec. 5.3:\ntypo in Figure 2 caption: \"and and\"\n\n\nSec. 6.1: \n\nIn the \"Skip-thought\" paragraph:\n\ninappropriate word choice: \"kindly\"\n\nThe description that says \"we cut off a branch for decoding\" is not clear to me. What is a \"branch for decoding\" in this context? Please modify it to make it more clear. \n\n\nReferences:\n\nBengio S, Vinyals O, Jaitly N, Shazeer N. Scheduled sampling for sequence prediction with recurrent neural networks. NIPS 2015.\n\nCollobert R, Weston J, Bottou L, Karlen M, Kavukcuoglu K, Kuksa P. Natural language processing (almost) from scratch. Journal of Machine Learning Research 2011.\n", "Update:\n\nI'm going to change my review to a 6 to acknowledge the substantial improvements you've made—I no longer fear that there are major errors in the paper, but this paper is still solidly borderline, and I'm not completely convinced that any new claim is true. 
The evidence presented for the main claim—that you can get by without an autoregressive decoder when pretraining encoders—is somewhat persuasive, but isn't as unequivocal as I'd hope, and even if the claim is true, it is arguably too limited a claim for an ICLR main conference paper. As R1 says, a *ACL short paper would be more appropriate. The writing is also still unclear in places.\n\n----\n\nThis paper presents a new RNN encoder–CNN decoder hybrid design for use in pretraining reusable sentence encoders on Kiros's SkipThought objective. The task is interesting and important, and the results are generally good: The new model outperforms SkipThought, and all other prior models for training sentence encoders on unlabeled data. However, some of the design choices seem a bit odd, and I have a large number of minor concerns about the paper. I'd like to see the authors' replies and the other reviews before I can confidently endorse this paper as correct.\n\n\nNon-autoregressive decoding with a CNN strikes me as a somewhat ill-posed problem, even for in this case where you don't actually use the decoder in the final application of your model. At each position, you're training your model to predict a distribution over all words that could appear at the beginning/tenth position/twentieth position in sentences on some topic. I'd appreciate some more discussion of why this should or shouldn't hurt performance. I'd be less concerned about this if the results supporting the use of the CNN decoder were a bit more conclusive: while they are better on average across your smaller experiments, your largest experiment (2400D) shows them roughly tied.\n\nYour paper opens with the line \"Context information plays an important role in human language understanding.\" This sounds like it's making an empirical claim that your paper doesn't support, but it's so vague that it's hard to tell exactly what that claim is. Please clarify this or remove it.\n\nThis sentence is quite inaccurate: \"The idea of learning from the context information was first successfully applied to vector representation learning for words in Mikolov et al. (2013b) and learning from the occurrence of words also succeeded in Pennington et al. (2014).\" Turney and Pantel 2010 ( https://www.jair.org/media/2934/live-2934-4846-jair.pdf ) offer a survey of the substantial prior work that existed at that time.\n\nThe \"Neighborhood Hypothesis\" is given quite a lot of importance, given that it's a fairly small empirical effect without any corresponding theory. The fact that it's emphasized so heavily makes me suspect that I can guess the author of the paper. I'd tone down that part of your framing.\n\nI would appreciate some more analysis of which of the non-central tricks that you describe in section 3 help. For example, max pooling seems reasonable, but you report yourself that mean pooling generally works much better in prior work. Without an explicit experiment, it's not clear why you'd add a mean pooling component.\n\nIt seems misleading to claim that your CNN is modeled after AdaSent, as that model uses a number of layers that varies with the length of the sentence (and differs from yours in a few other less-important ways). Please correct or clarify.\n\nThe use of “†” in table to denote models that predict the next sentence in a sequence doesn't make sense. It should apply to all of your models if I understand correctly. Please clarify.\n\nYou could do a better job at table placement and formatting. 
Table 3 is in the wrong section, for example.\n\nYou write that: \"Our proposed RNN-CNN model gets similar result on SNLI as Skip-thought, but with much less training time.\" This seems to be based on a comparison between your model run on your hardware and their model run on their (presumably older) hardware, and possibly also with their older version of CuDNN. If that's right, you should tone down this claim or offer some more evidence.", "The authors build on the work of Tang et al. (2017), who made a minor change to the skip-thought model by decoding only the next sentence, rather than the previous one also. The additional minor change in this paper is to use a CNN, rather than RNN, decoder.\n\nI am sympathetic to the goals of the work, and believe this sort of work should be carried out, but I see the contribution as too minor to constitute a paper at the conference track of a leading international conference such as ICLR. Given the incremental nature of the work, I think this would be a good fit for something like a short paper at *ACL.\n\nI found the more theoretical motivation of the CNN decoder not terribly convincing, and somewhat post-hoc. I feel as though analogous arguments could just as easily be made for an RNN decoder. Ultimately I see these questions - such as CNN vs. RNN for the decoder - as empirical ones.\n\nFinally, the authors have admirably attempted a thorough comparison with existing work, in the related work section, but this section takes up a large chunk of the paper at the end, and again I would have preferred this section to be much shorter and more concise.\n\nSummary: worthwhile empirical goal, but the paper could have been easily written using half as much space.\n", "We revised our paper, and it has been updated now. The revision is based on the reviewers’ suggestions. \n\n1/ Additional experiments, suggested by Reviewer2 and Reviewer3, were included to strengthen our original claim, and they are in Section 3 (Architecture Design) now. \n\n2/ We summarized the effect of varying model architecture in Section 3 (Architecture Design), and moved the original Table for quantitative results to the supplementary.\n\n3/ All reviewers recommended to reduce the length of the original Motivation section and Related work section to fit the paper into 8 pages, so we revised these 2 sections to make them concise and precise.\n\n4/ We didn’t change much in the Abstract, the Introduction, and the Conclusion\n\nOverall, the revision didn’t change our main claim, and the additional experiments and compressed paper make our paper even clearer and more concise. \n", "Thanks for the additional experimental results and the clarifications!", "(I) We chose to focus on the encoder-decoder model for learning vector representations of sentences in an unsupervised fashion, and after training, only the encoder will be used to produce a vector for a given input sentence. Since the decoder won’t be applied after training, and faster training time of the models is generally preferred, it makes sense to use a non-autoregressive decoder. 
Thus, we proposed to use a predict-all-words CNN decoder.\n\nAs suggested by Reviewer 2 at the top of this page, we conducted experiments to support our choice of the decoder and had 2 findings.\n\n1/ In terms of learning good sentence representations, for an autoregressive model, RNN or CNN, as the decoder in an encoder-decoder model, it is not necessary to input the ground-truth words to the decoder during training.\n2/ The model with an autoregressive decoder works roughly the same as the model with a predict-all-words decoder in learning sentence representations\n\nThe 2 findings from our experiments show that using a predict-all-word CNN as the decoder works as well as an autoregressive RNN as the decoder. In addition, a predict-all-word CNN decoder runs fast during training.\n\n\n(II) Mean+Max Pooling vs Max Pooling\n\nWe didn’t report that the mean pooling works better than the max pooling in our paper, and we also agree that the max pooling is reasonable. \n\nIn our paper, our claim is that a combination of max pooling and mean pooling works better than max pooling, which was inspired by Chen et al., 2016[1]. In order to consolidate the claim, we conducted experiments to compare “max pooling” and “mean+max pooling”. The results are presented in the Table below:\n\n\nEncoder (TrainHrs) | SICK-r SICK-E STS14 | MSRP (ACC/F1) | SST TREC\n\n 600D Representation\nRNN-max (21hrs) | 0.8365 82.6 0.50/0.47 | 73.3 / 81.5 | 79.1 82.2\n\n 1200D Representation\nRNN-max (28 hrs) | 0.8485 83.2 0.47/0.44 | 72.9 / 80.8 | 82.2 86.6\nRNN-mean+max (21 hrs) | 0.8530 82.6 0.58/0.56 | 75.6 / 82.9 | 82.8 89.2\n\n\nAs we can see, the model trained with mean+max pooling generally works better than it with max pooling only. We also evaluated the model, which is trained with only max pooling over time, with mean+max pooling during testing (with no additional weight training), and it boosts the performance on the unsupervised evaluation task, and also gets slightly better results on all supervised evaluation tasks, which also supports our claim that mean+max pooling works better than max pooling.\n\n\n(III) Clarification of the CNN decoder\n\nOur CNN decoder is not modeled after AdaSent, and it is a stack of 3 convolutional layers. \n\nIn section 3, we tried to compare whether to use an RNN encoder or a CNN encoder, and the CNN ENCODER we designed here is based on the design in Conneau et al., (2017) [2], since it is a modification of AdaSent, and it performs better than other RNN models except for the RNN-Max model on SNLI dataset. By consulting other papers, we confirmed that the CNN encoder we designed for comparison is also a good model.\n\n“†” is used to indicate the model that predicts the next sentence, and all of our other models are learned to predict next 30 words, not necessarily a sentence. In Section 3, we compared the performance of our model that predicts the next sentence, and that of our model that predicts the next contiguous 30 words. The results showed that there is no significant difference between the 2 models.\n\n\n(V) Our implementation of Skip-thought on the same GPU machine\n\nWe reimplemented the skip-thought model in PyTorch, and ran it on the same GPU machine used for this paper. The training of our implemented Skip-thought model took approximately 3 times longer than is needed for our model. 
\n\nOur understanding of the reason why the Skip-thought model is slow is that 1) it has 2 decoders to decode the previous sentence and the next one respectively, 2) the 2 autoregressive RNN decoders run slowly during training, 3) the size of the RNN decoders is fairly large.\n\nWe addressed these 3 issues by proposing a predict-all-words CNN decoder, which is a non-autoregressive decoder. \n\n[1]Chen, Qian et al. “Enhancing and Combining Sequential and Tree LSTM for Natural Language Inference.” CoRR abs/1609.06038 (2016): n. pag.\n[2] Conneau, Alexis et al. “Supervised Learning of Universal Sentence Representations from Natural Language Inference Data.” EMNLP (2017).", "We agree that carrying out research in learning sentence representations with the encoder-decoder model is important, and proposing new models is even more exciting, but we still wanted to argue that, analyzing previously proposed models and building efficient learning algorithms based on previous findings are also important.\n\nOur paper differs from existing works in the following important ways:\n\n1/ We aim to propose an efficient encoder-decoder model for learning sentence representations in an unsupervised fashion, and the prior work didn’t pay much attention to the running time. In our paper, the running time is also a consideration for model selection.\n\n2/ The proposed model in our paper is not a simple modification from the Skip-thought model or the model proposed in Tang et al., (2017). We have a focus on simplifying the decoder.\n\n3/ Our paper, with new experiments suggested by Reviewers 2 and 3, has a unique axis for comparison. Our findings suggest that for learning sentence representations with an encoder-decoder model, it is not necessary to use an autoregressive model for decoding.\n\n4/ In addition, the 3x speedup can greatly increase the sizes of models that can be run, the number\nof experiments that can be done, and also makes the model accessible to those without the best computational resources. The recent popularity of deep learning and LSTM demonstrates the power that making an algorithm easier and faster to run can have.\n\nThe overall goal of our paper is to propose an efficient encoder-decoder model for learning sentence representation in an unsupervised fashion. Since only the encoder will be used after training, it makes sense to simplify the decoder to make it run faster, and help the model perform even better.\n\nThe key difference between the previously proposed encoder-decoder models and our model lies in the choice of decoder, and as you pointed out, most of the previous models adopted an autoregressive RNN model for decoding, while we proposed to use a predict-all-words CNN for decoding. The autoregressive models, including RNNs and CNNs, are good at generating sequential data, such as language and voice, but in our case, the quality of the generated sequences after training is not our main focus, since we care about the quality of the learned sentence representations. 
Thus, it is not necessary to use an autoregressive model for decoding.\n\nBased on this idea (and now backed by new experiments suggested by Reviewers 2 and 3), we proposed to use a predict-all-words CNN decoder instead of an autoregressive RNN decoder.\n\n(The experimental design is described in our reply to Reviewer 2, and also will be included in our updated paper.)\n\nBriefly, the results show that, for learning sentence representations with an encoder-decoder model, \n\n1) if we stick to using an autoregressive decoder, including RNNs and CNNs, it is not necessary to input the ground-truth words to the decoder during training, and the performance on downstream tasks stays roughly the same for RNN decoder, and gets slightly better for CNN decoder;\n\n2) the model with an autoregressive decoder performs similarly with that with a predict-all-words decoder.\n\nThese 2 findings actually support our choice of using a predict-all-words CNN as the decoder, and it brings the model higher training efficiency and strong transferability.\n\nIn our paper, we also develop tricks which boost the performance on the downstream tasks, such as mean+max pooling, and weight tying between word embedding and prediction layer.\n\nWe agree that our comparison should be more comprehensive and reasonable, and the writing should be more concise. We will update our paper very soon.\n\nOverall, we think that our paper has its own unique theoretical considerations and empirical contributions with the suggestions from all 3 reviewers, and it should be solid and comprehensive to merit a publication at ICLR conference.\n", "(I) Autoregressive models vs. Predict-all-words models\n\nBased on your suggestion, we conducted experiments to empirically justify our choice of the decoder. In our experiments, we have 2 findings in terms of learning sentence representations:\n\n1) In an encoder-decoder model with an autoregressive RNN or CNN as the decoder, it is not necessary to input the correct words to the decoder.\n2) The model with an autoregressive decoder works roughly the same as the model with a predict-all-words decoder.\n\nThe experimental design is described in detail below:\n\n1} We compared 3 autoregressive decoding settings: 1) using ground-truth words (Baseline), 2) using previously predicted words (Always Sampling), and 3) using uniformly sampled words from the dictionary (Uniform Sampling). The 3 decoding settings were inspired by Bengio et al. 2015[1]. The results are presented in the table below:\n\nGenerally, 3 decoding settings didn’t make much of a difference in terms of the performance on downstream tasks, with RNN OR CNN as the decoder. The results tell us that, in terms of learning good sentence representations, the autoregressive decoder doesn’t require the ground-truth words as the inputs. 
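To make the three decoding settings concrete, the following is a minimal PyTorch-style sketch of the input-feeding choices for an autoregressive decoder. A GRU cell stands in for whichever recurrent or convolutional decoder was actually used, and the class, argument, and mode names are illustrative assumptions rather than the authors' code.

import torch
import torch.nn as nn

class AutoregressiveDecoder(nn.Module):
    # Illustrative decoder: a GRU cell stands in for the actual RNN/CNN decoder,
    # and `mode` selects which word is fed back at each step.
    def __init__(self, vocab_size, emb_dim, hid_dim):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.cell = nn.GRUCell(emb_dim, hid_dim)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, sent_vec, targets, mode="baseline"):
        # sent_vec: (batch, hid_dim) sentence representation from the encoder
        # targets: (batch, T) word ids of the next sentence; targets[:, 0] is <bos>
        batch, T = targets.shape
        h, inp = sent_vec, targets[:, 0]
        logits = []
        for t in range(1, T):
            h = self.cell(self.embed(inp), h)
            step = self.out(h)                      # scores for targets[:, t]
            logits.append(step)
            if mode == "baseline":                  # feed the ground-truth word
                inp = targets[:, t]
            elif mode == "always_sampling":         # feed the model's own prediction
                inp = step.argmax(dim=-1)
            else:                                   # "uniform_sampling": random word id
                inp = torch.randint(0, self.out.out_features, (batch,), device=h.device)
        return torch.stack(logits, dim=1)           # (batch, T-1, vocab)

The three settings differ only in which token is fed back at the next step; the per-step loss over the returned logits is computed against the same targets in all cases.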
\n\n\nModel | SICK-r SICK-E STS14 | MSRP (ACC/F1) | SST TREC\n\n auto-regressive RNN as decoder\n\nBaseline | 0.8530 82.6 0.51/0.50 | 74.1 / 81.7 | 82.5 88.2\nAlways Sampling | 0.8576 83.2 0.55/0.53 | 74.7 / 81.3 | 80.6 87.0\nUniform Sampling| 0.8559 82.9 0.54/0.53 | 74.0 / 81.8 | 81.0 87.4\n\n auto-regressive CNN as decoder\n\nBaseline | 0.8510 82.8 0.49/0.48 | 74.7 / 82.8 | 81.4 82.6\nAlways Sampling | 0.8535 83.3 0.53/0.52 | 75.0 / 81.7 | 81.4 87.6\nUniform Sampling | 0.8568 83.4 0.56/0.54 | 74.7 / 81.4 | 83.0 88.4\n\n predict-all-words RNN as decoder\n\nRNN | 0.8508 82.8 0.58/0.55 | 74.2 / 82.8 | 81.6 88.8\n\n predict-all-words CNN as decoder\n\nCNN | 0.8530 82.6 0.58/0.56 | 75.6 / 82.9 | 82.8 89.2\n\n\n2} The predict-all-words CNN decoder is described in our paper, which is a stack of 3 convolutional layers, and all words are predicted once at the output of the decoder. The predict-all-words RNN decoder is built based on our CNN decoder. To keep the number of params roughly the same, we replaced the last 2 conv layers with a bidirectional GRU.\n\nThe results are also presented in the table above. The performance of the predict-all-words RNN/CNN decoder does not significantly differ from that of any one of the autoregressive RNN/CNN decoders. \n\nWe aim to propose a new model with high training efficiency and strong transferability, thus a predict-all-words CNN decoder is our top choice.\n\n\n(II) Clarifications \n\nA1) The softmax layer is used to provide a word distribution at every position, and a sum of log losses is calculated for all words in the next sentence.\n\nA2) To avoid overfitting, we picked 3 tasks (SICK-r, SICK-E and STS14) instead of all tasks to tune the high-level decisions of our model. However, the first model (row 1 in Table 1) we built works the best, and the following hyperparameter tuning (row 2,3,4,7 and 8) doesn’t boost perform.\n\nIn row 7, we added another conv layer in the decoder, and it gave us slightly better performance. However, the training efficiency is also a concern, so it is not worth sacrificing the training efficiency for a slight performance gain.\n\nA3) In our design, the model only encodes the current sentence and then decodes the next one, which is stated as the “future predictor” in Gan et al. (2017)[2]. In their design, the encoder is a CNN instead of an RNN.\n\nThey found that the “future predictor” worked badly overall, so they need to incorporate higher-level information to augment the “future predictor” model. In our model, we use an RNN for encoding, and it works much better than their “future predictor”.\n\nIn Table 2, we did include the best results in Gan et al. (2017)[2] to compare with our models (row 12 labeled “combine CNN-LSTM”, which is an ensemble of their proposed 2 models)\n\nA4) The Skip-thought model uses 2 decoders to decode the previous sentence and the next one. Compared with the skip-thought model, we only applied one decoder to decode the next sentence, and got better results than the skip-thought model.\n\n(III) Additional comments will be addressed in the updated paper.\n\n[1] Bengio, Samy et al. “Scheduled Sampling for Sequence Prediction with Recurrent Neural Networks.” NIPS (2015).\n[2] Gan, Zhe et al. “Learning Generic Sentence Representations Using Convolutional Neural Networks.” EMNLP (2017).", "Thank you for your reply. It is a great question. 
Currently, we don’t have the results for your question, but it’ll be good to see the effect of the quantity of unlabeled training data.\n \nFor clarification, we trained models on BookCorpus (74 million) and Amazon Book Review (142 million), respectively, and they were both trained with the same number of iterations and batch size. The results indicate that the model trained on Amazon Book Review outperforms that on BookCorpus. We think that the performance boost was mostly brought by the domain matching between the evaluation tasks and the training data, not by the amount of training data in different corpora.\n \nWe’ll start our experiments on the effect of the quantity of unlabeled training data and get back with results very soon.", "We trained our small RNN-CNN model with 4 different amounts of unlabeled data from BookCorpus, which are 20%, 10%, 5%, and 2% of total data, respectively. All the models were evaluated on the SICK-r, and SICK-Entailment (supervised) and STS14 (unsupervised). The results are presented in the table below:\n \nPercentage sick-r sick-E(%) sts14(Pearson/Spearman)\n2% 0.8347 81.4 0.59/0.57\n5% 0.8367 81.1 0.60/0.58\n10% 0.8415 81.7 0.60/0.58\n20% 0.8528 82.1 0.59/0.56\n100% 0.8530 82.6 0.58/0.56\n \nWe are not able to copy the performance curve of each model during training to this discussion forum, but we will update this curve into our paper in the future revised version. Here, we report some interesting observations.\n\n1/ Longer training time helps all the models learn better representations for sentences, but the performance of each model converges after a certain number of iterations, which matches the Figure 2 in our paper. Therefore, there is no need to train the model for an unlimited time.\n\nUnexpectedly, the models trained with only 2% and 5% both have a slight performance drop on supervised evaluation tasks after training for a long time. However, all 4 models keep improving on unsupervised STS14 evaluation task during training until they converge.\n\n2/ Larger size of training data requires a longer time to converge, and it generally performs better than those with a smaller size of data.\n \nIt suggests that we need to train our model on Amazon Book Review for more iterations, and our model potentially is able to get even better results. As stated in Radford et al. (2017), their BYTE m-LSTM model was trained on Amazon Review for a month, while we only trained our model for around 33 hours, and we still got comparable results on classification tasks and better results on relatedness and entailment tasks.\n\n3/ When it comes to a large model (large dimension of the representation), more data and longer training time will result in better sentence representations.", "Thanks! It's certainly reasonable to just report one number (no other similar paper reports learning curves that I know of), but a learning curve would help to at least suggest an answer to an interesting question: Could you get even better results with another order of magnitude more data/training time?", "Just out of curiosity, do you have any results on how the quantity of unlabeled training data you use impacts model performance?" ]
[ 7, 6, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_Bk7wvW-C-", "iclr_2018_Bk7wvW-C-", "iclr_2018_Bk7wvW-C-", "iclr_2018_Bk7wvW-C-", "ryqHPgUMG", "Skrfq_Jlz", "Hy4cMGVlf", "SyU_UK2lf", "rJtgBTDCZ", "rknG_y_Cb", "B1hPJJd0-", "iclr_2018_Bk7wvW-C-" ]
iclr_2018_ryHM_fbA-
Learning Document Embeddings With CNNs
This paper proposes a new model for document embedding. Existing approaches either require complex inference or use recurrent neural networks that are difficult to parallelize. We take a different route and use recent advances in language modeling to develop a convolutional neural network embedding model. This allows us to train deeper architectures that are fully parallelizable. Stacking layers together increases the receptive field, allowing each successive layer to model increasingly longer-range semantic dependencies within the document. Empirically, we demonstrate superior results on two publicly available benchmarks. Full code will be released with the final version of this paper.
rejected-papers
There are two separate ideas embedded in this submission: (1) language modelling (with the negative sampling objective of Mikolov et al.) is a good objective for extracting document representations, and (2) CNNs are a faster alternative to RNNs. Both ideas have been studied in similar contexts before (e.g., paragraph vectors, CNN classifiers, and so on), most of which were pointed out by the reviewers already. Unfortunately, reading this manuscript does not reveal clearly how these two ideas connect to (and are separate from) each other, or how they relate to earlier approaches, which was again pointed out by the reviewers. In summary, I believe this manuscript requires more work to be accepted.
train
[ "B1qFEzr4M", "HJmMNVDlz", "BkBvQaFez", "B1KZkIqxG", "BJ1ZUHtGf", "HyO-3YVGM", "Hk5UoFVzM", "S1uR9tEGG", "r1tz5yoeM", "SJOi664ez", "B1siqc_yz", "ry21MYLJG" ]
[ "public", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "public", "public", "author", "public" ]
[ "The reported accuracies for doc2vec on IMDB are wrong, presumably a consequence of a suboptimal re-implementation. In the doc2vec paper, they report accuracy of 92.58%, significantly higher than your reported doc2vec accuracy, 88.73%, and the accuracy for the proposed method, 90.15%. Given this extremely poor implementation of a baseline on IMDB, I also doubt the accuracy of the AFFR results, where you only beat doc2vec by less than a percent. \n\nYou should compare against this paper from openAI: https://arxiv.org/pdf/1704.01444.pdf\n\nOn IMDB, using a single neuron from their embedding they get 92.3%, significantly better than your 90.15%. Using all the neurons, they get 92.88%. \n\nGiven that the reported results are actually very poor relative to state of the art, and that the authors did not conduct a proper evaluation of their proposed method, I strongly recommend rejection.", "This paper proposes a new model for the general task of inducing document representations (embeddings). The approach uses a CNN architecture, distinguishing it from the majority of prior efforts on this problem, which have tended to use RNNs. This affords obvious computational advantages, as training may be parallelized. \n\nOverall, the model presented is relatively simple (a good thing, in my view) and it indeed seems fast. I can thus see potential practical uses of this CNN based approach to document embedding in future work on language tasks. The training strategy, which entails selecting documents and then indexes within them stochastically, is also neat. Furthermore, the work is presented relatively clearly. That said, my main concerns regarding this paper are that: (1) there's not much new here, and, (2) the experimental setup may be flawed, in that it would seem model hyperparams were tuned for the proposed approach but not for the baselines; I elaborate on these concerns below.\n\nSpecific comments:\n---\n- It's hard to tease out exactly what's new here: the various elements used are all well known. But perhaps there is merit in putting the specific pieces together. Essentially, the novelty is using a CNN rather than an RNN to induce document embeddings. \n\n- In Section 4.1, the authors write that they report results for their after running \"parameter sweeps ...\" -- I presume that these were performed on a validation set, but the authors should say so. In any case, a very potential weakness here: were analagous parameter sweeps for this dataset performed for the baseline models? It would seem not, as the authors write \"the IMDB training data using the default hyper-parameters\" for skip-thought. Surely it is unfair comparison if one model has been tuned to a given dataset while others use only the default hyper-parameters? \n\n- Many important questions were left unaddressed in the experiments. For example, does one really need to use the gating mechanism borrowed from the Dauphin et al. paper? What happens if not? How big of an effect does the stochastic sampling of document indices have on the learned embeddings? Does the specific underlying CNN architecture affect results, and how much? None of these questions are explored. \n\n- I was left a bit confused regarding how the v_{1:i-1} embedding is actually estimated; I think the details here are insufficient in the current presentation. The authors write that this is a \"function of all words up to w_{i-1}\". This would seem to imply that at test time, prediction is not in fact parallelizable, no? 
Yet this seems to be one of the main arguments the authors make in favor of the model (in contrast to RNN based methods). In fact, I think the authors are proposing using the (aggregated) filter activation vectors (h^l(x)) in eq. 5, but for some reason this is not made explicit. \n\nMinor comments:\n\n- In Eq. 4, should the product be element-wise to realize the desired gating (as per the Dauhpin paper)? This should be made explicit in the notation.\n\n- On the bottom of page 3, the authors claim \"Expanding the prediction to multiple words makes the problem more difficult since the only way to achieve that is by 'understanding' the preceding sequence.\" This claim should either by made more precise or removed. It is not clear exactly what is meant here, nor what evidence supports it.\n\n- Commas are missing in a few. For example on page 2, probably want a comma after \"in parallel\" (before \"significantly\"); also after \"parallelize\" above \"Approach\".\n\n- Page 4: \"In contrast, our model addresses only requires\" --> drop the \"addresses\". ", "This paper uses CNNs to build document embeddings. The main advantage over other methods is that CNNs are very fast.\n\nFirst and foremost I think this: \"The code with the full model architecture will be released … and we thus omit going into further details here.\" is not acceptable. Releasing code is commendable, but it is not a substitute for actually explaining what you have done. This is especially true when the main contribution of the work is a network architecture. If you're going to propose a specific architecture I expect you to actually tell me what it is.\n\nI'm a bit confused by section 3.1 on language modelling. I think the claim that it is showing \"a direct connection to language modelling\" and that \"we explore this relationship in detail\" are both very much overstated. I think it would be more accurate to say this paper takes some tricks that people have used for language modelling and applies them to learning document embeddings.\n\nThis paper proposed both a model and a training objective, and I would have liked to see some attempt to disentangle their effect. If there is indeed a direct connection between embedding models and language models then I would have also expected to see some feedback effect from document embedding to language modeling. Does the embedding objective proposed here also lead to better language models?\n\nOverall I do not see a substantial contribution from this paper. The main claims seem to be that CNNs are fast, and can be used for NLP, neither of which are new.\n", "This paper proposes using CNNs with a skip-gram like objective as a fast way to output document embeddings and much faster compared to skip-thought and RNN type models.\n\nWhile the problem is an important one, the paper only compares speed with the RNN-type model and doesn't make any inference speed comparison with paragraph vectors (the main competing baseline in the paper). Paragraph vectors are also parallelizable so it's not obvious that this method would be superior to it. The paper in the introduction also states that doc2vec is trained using localized contexts (5 to 10 words) and never sees the whole document. If this was the case then paragraph vectors wouldn't work when representing a whole document, which it already does as can be seen in table 2.\n\nThe paper also fails to compare with the significant amount of existing literature on state of the art document embeddings. 
Many of these are likely to be faster than the method described in the paper. For example:\n\n\nArora, S., Liang, Y., & Ma, T. A simple but tough-to-beat baseline for sentence embeddings. ICLR 2017.\nChen, M. Efficient vector representation for documents through corruption. ICLR 2017.\n", "I am not convinced the application of CNNs to document modeling alone is an interesting novelty. CNNs have been previously applied to NLP in various ways, and the speed advantages have been noted, for example Neural Machine Translation in Linear Time https://arxiv.org/abs/1610.10099 and Attention is All You Need https://arxiv.org/abs/1706.03762 both note speed as an advantage.\n\nThese works do not address documents; however, \n\n1. Similar things have been done before, for example in http://www.datalab.uci.edu/papers/kdd2015_dimitris.pdf from 2015 which cites a 2014 paper for the architecture.\n\n2. The extension to documents here is just treating documents as really long sentences, which is not very substantial.\n\nI would still like to see some attempt to disentangle the contribution of the model and the objective. If it is indeed the case that multi-step predictions make language models perform worse then why should I expect them to make embedding models better? I think this claimed connection should either be explored and exploited, or if it cannot be exploited then it should be dropped.\n\nThe fact that this comment page already has more than one request for clarification on the model architecture suggests that sufficient details are not present in the paper.\n", "We would like to thank the reviewer for taking the time to review our work and for the insightful suggestions and comments. Below we address some of the main concerns that were brought up.\n\nRegarding the novelty, we believe that the novel aspect of our work is the end-to-end application of CNNs to document embedding. To the best of our knowledge CNNs have not been applied to unsupervised semantic learning before and most research has concentrated on RNNs. Our work demonstrates that with appropriate architecture and objective function CNNs can achieve comparable or better performance with 10x to 20x faster inference. \n\nAll parameter sweeps were done on the validation set and the best model was then tested on the test set. We made an extensive effort to tune the baselines and fully acknowledge that fair comparison is very important. By “default hyper parameters” we meant settings such as number of layers, activation functions and optimizer as these are integral parts of each proposed model. All other parameters were extensively tuned for each baseline using the same parameter sweeps as in our model. Furthermore, doc2vec results are taken from Mensil et al and correspond to a highly tuned version of this baseline.\n\nWe agree that further analysis of the proposed architecture would be informative and will included it in the revised draft. In short we observed the following: 1) gating activation function provided between 1% - 3% improvement over relu activations 2) stochastic sampling of prediction point for each document resulted in better generalization especially for datasets like IMDB where document lengths vary significantly 3) for CNN architecture we found that using more than 3 or 4 convolutional layers did not significantly improve performance and mostly resulted in slower training and inference runtimes.\n\nThe embedding for the subsequence v_{1:i-1} is obtained by passing word sequence w_1,...,w_{i-1} through the CNN. 
To deal with the variable length problem we apply max (or max k) in the last convolutional layer which always ensures that the activation that are passed to the fully connected layers have the same length. The activations of the last fully connected layer are then taken as the embedding for v_{1:i-1}. At test time we pass the full word sequence w_1,...,w_|D| through the CNN to get the embedding for the entire document. Not that unlike RNN which would require |D| sequential operations, CNN can process the entire sequence in parallel thus significantly accelerating inference. \n", "We would like to thank the reviewer for taking the time to review our work and for the insightful suggestions and comments. Below we address some of the main concerns that were brought up.\n\nRegarding the novelty, we believe that we have proposed the first CNN model for document embedding. While CNNs have been recently used for language modelling we are not aware of any CNN model for document embedding. Empirically we have demonstrated that our approach can match or outperform RNN models that are traditionally used for this task with 10x to 20x improvement in inference speed. As such we believe that our approach is novel and further explores a promising direction of using CNNs in place of RNNs for NLP tasks.\n\nWe understand that the connection to language modeling is unclear and will revise the draft accordingly. The main point that we are making is that the loss in Equation 4 reduces to language modelling loss if instead of h words forward we predict just one. So we are not just using some tricks, but rather show that the CNN language model of Dauphin et al can be generalized to document embedding by modifying the objective function and network architecture. While the embedding objective that we propose can be used to train a language model, we found that predicting more than one word forward does not improve language model accuracy and generally makes it worse. This is expected since language models always predict one word forward and our objective thus optimizes for a different task. We did however find that increasing the prediction window improves the quality of document embeddings since it forces the embedding model to model longer range semantic dependencies.\n\nFinally, we believe that we have provided sufficient details on model architecture including number of layers, layer size, activation function and optimization parameters (see Section 4). The details that were omitted are not critical for model understanding or reproducibility and given space constraints we opted to include further empirical results instead.", "We would like to thank the reviewer for taking the time to review our work and for the insightful suggestions and comments. Below we address some of the main concerns that were brought up.\n\nFirst, we do not compare with the speed of doc2vec since doc2vec requires optimization to be conducted during inference for each new document. This involves computing multiple gradient updates and applying them to the paragraph vector using an optimizer of choice. Regardless of the implementation, this procedure is an order of magnitude slower than making a single forward pass through an RNN/CNN. The doc2vec implementation that we have is at least 10x slower during inference than RNN. These findings are not new and have been discussed by authors of SkipThought and other related works. 
As such we do not believe that speed comparison with doc2vec is relevant here.\n\nSecond, we’d like to thank the reviewer for pointing out the two related works and will add them in the next revision of our draft. However, both papers propose models that represent documents as (weighted) averages of word vectors. We do compare with word2vec average (“Avg. word2vec” baseline) although it is the equal weight version, and in addition have conducted further experiments to compare with these two models. Chen at al reports IMDB accuracy of 88.3% (Table 1 in that paper), and we got an accuracy of 87.4% using the code released by Arora at al. Neither of these beat our approach. Furthermore, while average word vectors would be computationally faster than CNN, the temporal order of the words is completely lost. One can create many examples of documents with very similar word counts but drastically different meaning due to the order in which these words appear. For “global” inference tasks such as sentiment classification, word order is not particularly important since even bag-of-words models produce strong performance. However, for more complex tasks such as q&a it becomes critical, and we believe that our approach provides a principled way to do unsupervised document learning that fully preserves temporal aspects while being significantly faster than RNNs.", "\"The classifier is a feed forward neural network with a single hidden layer and a tanh activation function.\" What kind of hidden layer?", "So I'm just wondering when and where will the code be released?", "Hi Marc,\n\nThank you for taking the interest in our work. Below are some further details of our CNN model and the classifier, let us know if you have further questions. We are currently working on cleaning up and refactoring the code and aim to release it in the next few weeks. \n\nThe classifier is a feed forward neural network with a single hidden layer and a tanh activation function. We train the classifier for 500 epochs, with a batch size of 100 and a momentum optimizer, with a learning rate of 0.0008 and momentum value of 0.9. We compute the test classification accuracy after every epoch and take the highest attained value for each model.\n\nFor the CNN model we use dropout of 0.8 (prob to keep), 300-900 kernels in each convolutional layer, gating activation function and residual connections every other layer [see Dauphin et al ICML 2017 for analogous architecture]. Words are represented using a pre-trained word2vec model with 300 dimensions and we update word vectors together with CNN during training. We use mini-batches of size 100 and predict 10 words forward for each example in the mini-batch using 50 negative samples to balance the classification objective. All CNN models use Adam optimizer with a learning rate of 0.0003.", "Our team is currently considering reproducing your paper. However the details of this paper, which are vital for our reproduction, appear to be vague. For example, which \"shallow classifier\" do you use? Just wondering when you will reveal the details or the code.\n" ]
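Pulling together the architectural details given in the replies above (gated activations in the style of Dauphin et al., residual connections, and max pooling over time to obtain a fixed-length embedding), a rough sketch of such a document encoder could look as follows. This is a simplified illustration under assumed shapes and names, not the authors' released implementation, and it omits the negative-sampling training objective.

import torch
import torch.nn as nn

class GatedConvBlock(nn.Module):
    # Gated convolution in the style of Dauphin et al. (2017): the convolution
    # produces twice the channels, half of which gate the other half; a residual
    # connection is added around the block. An odd kernel size keeps the length.
    def __init__(self, channels, kernel_size=3):
        super().__init__()
        self.conv = nn.Conv1d(channels, 2 * channels, kernel_size,
                              padding=kernel_size // 2)

    def forward(self, x):                        # x: (batch, channels, seq_len)
        a, b = self.conv(x).chunk(2, dim=1)
        return x + a * torch.sigmoid(b)

def embed_document(word_vectors, blocks):
    # word_vectors: (batch, seq_len, emb_dim) pre-trained word embeddings
    h = word_vectors.transpose(1, 2)             # -> (batch, emb_dim, seq_len)
    for blk in blocks:                           # a few stacked gated conv blocks
        h = blk(h)
    return h.max(dim=2).values                   # max over time: fixed-length embedding

Because the pooling collapses the time dimension, documents of any length map to an embedding of the same size, which is what allows the whole sequence to be processed in a single parallel forward pass at test time.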
[ -1, 6, 4, 2, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, 4, 3, 5, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_ryHM_fbA-", "iclr_2018_ryHM_fbA-", "iclr_2018_ryHM_fbA-", "iclr_2018_ryHM_fbA-", "Hk5UoFVzM", "HJmMNVDlz", "BkBvQaFez", "B1KZkIqxG", "B1siqc_yz", "iclr_2018_ryHM_fbA-", "ry21MYLJG", "iclr_2018_ryHM_fbA-" ]
iclr_2018_ByUEelW0-
Modifying memories in a Recurrent Neural Network Unit
Long Short-Term Memory (LSTM) units have the ability to memorise and use long-term dependencies between inputs to generate predictions on time series data. We introduce the concept of modifying the cell state (memory) of LSTMs using rotation matrices parametrised by a new set of trainable weights. This addition shows significant increases of performance on some of the tasks from the bAbI dataset.
rejected-papers
The idea is interesting, but as pointed out by the reviewers (and also agreed by the authors), the current manuscript lacks clear motivation, reasons underlying the specific design choices, and convincing empirical evaluation.
train
[ "SkYqiWteM", "HyoHt8YlG", "rJeml9qlf", "BJ0XIGzMz", "r1Jn2Mq0Z" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "public", "public" ]
[ "The paper proposes to add a rotation operation in long short-term memory (LSTM) cells. It performs experiments on bAbI tasks and showed that the results are better than the simple baselines with original LSTM cells. There are a few problems with the paper.\n\nFirstly, the title and abstract discuss \"modifying memories\", but the content is only about a rotation operation. Perhaps the title should be \"Rotation Operation in Long Short-Term Memory\"?\n\nSecondly, the motivation of adding the rotation operation is not properly justified. What does it do that a usual LSTM cell could not learn? Does it reduce the excess representational power compared to the LSTM cell that could result in better models? Or does it increase its representational capacity so that some pattern is modeled in the new cell structure that was not possible before? This is not clear at all after reading the paper. Besides, the idea of using a rotation operation in recurrent networks has been explored before [3].\n\nFinally, the task (bAbI) and baseline models (LSTM from a Keras tutorial) are too weak. There have been recent works that nearly solved the bAbI tasks to perfection (e.g., [1][2][4][5], and many others). The paper presented a solution that is weak compared to these recent results.\n\nIn a summary, the main idea of adding rotation to LSTM cells is not properly justified in the paper, and the results presented are quite weak for publication in ICLR 2018.\n\n[1] Sainbayar Sukhbaatar, Jason Weston, Rob Fergus. End-to-end memory networks, NIPS 2015\n[2] Caiming Xiong, Stephen Merity, Richard Socher. Dynamic Memory Networks for Visual and Textual Question Answering, ICML 2016\n[3] Mikael Henaff, Arthur Szlam, Yann LeCun, Recurrent Orthogonal Networks and Long-Memory Tasks, ICML 2016 \n[4] Caglar Gulcehre, Sarath Chandar, Kyunghyun Cho, Yoshua Bengio, Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes, ICLR 2017\n[5] Mikael Henaff, Jason Weston, Arthur Szlam, Antoine Bordes, Yann LeCun, Tracking the World State with Recurrent Entity Networks, ICLR 2017\n", "The paper proposes an additional transform in the recurrent neural network units. The transform allows for explicit rotations and swaps of the hidden cell dimensions. The idea is illustrated for LSTM units, where the transform is applied after the cell values are computed via the typical LSTM updates.\n\nMy first concern is the motivation. I think the paper needs a more compelling example where swaps and rotations are needed and cannot otherwise be handled via gates. In the proposed example, it's not clear to me why the gate is expected to be saturated at every time step such that it would require the memory swaps. Alternatively, experimentally showing that the network makes use of swaps in an interpretable way (e.g. at certain sentence positions) could strengthen the motivation.\n\nSecondly, the experimental analysis is not very extensive. The method is only evaluated on the bAbI QA dataset, which is a synthetic dataset. I think a language modeling benchmark and/or a larger scale question answering dataset should be considered.\n\nRegarding the experimental setup, how are the hyper-parameters for the baseline tuned? Have you considered training jointly (across the tasks) as well?\n\nAlso, is the setting the same as in Weston et al (2015)? 
While for many tasks the numbers reported by Weston et al (2015) and the ones reported here for the LSTM baseline are aligned in the order of magnitude, suggesting that some tasks are easier or more difficult for LSTMs, there are large differences in other cases, for task #5 (here 33.6, Weston 70), for task #16 (here 48, Weston 23), and so on.\n\nFinally, do you have an intuition (w.r.t. to swaps and rotations) regarding the accuracy improvements on tasks #5 and #18?\n\nSome minor issues:\n- The references are somewhat inconsistent in style: some have urls, others do not; some have missing authors, ending with \"et al\".\n- Section 1, second paragraph: senstence\n- Section 3.1, first paragraph: thorugh\n- Section 5: architetures", "Summary: This paper introduces a model that combines the rotation matrices with the LSTMs. They apply the rotations before the final tanh activation of the LSTM and before applying the output gate. The rotation matrix is a block-diagonal one where each block is a 2x2 rotations and those rotations are parametrized by another neural network that predicts the angle of the rotations. The paper only provides results on the bAbI task. \n\nQuestions:\nHave you compared against to the other parametrizations of the LSTMs and rotation matrices? (ablation study)\nHave you tried on other tasks?\nWhy did you just apply the rotations only on d_{t}.\n\nPros:\nUses a simple parametrization of the rotation matrices.\n\nCons:\nNot clear justification and motivations\nThe experiments are really lacking:\nNo ablation study\nThe results are only limited to single toy task.\n\n\nGeneral Comments:\n\nThis paper proposes to use the rotation matrices with LSTMs. However there is no clear justification why is this particular parametrization of rotation matrix is being used over others and why is it only applied before the output gate. The experiments are seriously lacking, an ablation study should have been made and the results are not good enough. The experiments are only limited to bAbI task which doesn’t tell you much. This paper is not ready for publication, and really feels like it is rushed.\n\nMinor Comment:\nThis paper needs more proper proof-reading. There are some typos in it, e.g.:\n1st page, senstence --> sentence\n4th page, the the ... --> the\n\n", "The paper proposes the usage of a 2-D rotation based gating mechanism in LSTM (RotLSTM) to increase accuracy and speed up the convergence. The validation that was presented in the paper shows that the RotLSTM outperforms a baseline LSTM in most of the bAbI tasks and in some cases, it requires a smaller cell size to achieve the same accuracy and it converges earlier. As part of the reproducibility challenge, we downloaded the code that was provided with the paper and tried to reproduce the results. We used that code to see if the numbers that were published matched the paper results. We also tried to see if the conclusion held after performing model selection and picking better models.\n\nPros:\nThe code was helpful, implementing exactly what was presented in the paper. It did not take us more than a couple of hours to reuse it for other purposes (model selection or different architectures and different datasets).\nThe ideas in the paper were clearly explained and the paper was easy to understand.\n\nCons:\nWhen we rerun the published code on the same tasks with the same hyper-parameters we had a different outcome. 
In our setup RotLSTM did not converge faster than the baseline LSTM except on task 18, and accuracy wise it performed at most as well as the baseline LSTM.\n\nWe performed model selection to choose a good LSTM to compare with. We limited the tasks to task 5, task 7, and task 18 as they were the best-performing tasks from the published results. In 2 out of these three tasks (5 and 7), training the selected LSTM and a RotLSTM that uses the same hyperparameters shows that the RotLSTM performed better than the LSTM. In the third task (task 18) it was the opposite. The hyper-parameters that were tuned during this task are the epochs, the batch size, the embedded hidden size, the query hidden size and the sentence hidden size. We performed Bayesian optimization to select the model, each 5-uplet of values is used to train 5 different models and the output of the objective function is the average of accuracies of these 5 models. The regions of the search are:\nEpochs [5:5:200] ([start:step:end])\nBatch size [16:16:256]\nEmbedded hidden size, query hidden size, and sentence hidden size [1:1:100] each\n\nOther comments:\nThe experiments took significant time to be performed. Model selection for each task takes between 3h30 and 4h00 (on 4 core blade using Keras and GPyOpt’s multicore options). The reproduction of the results regarding the impact of the cell size on the test accuracy took 24 hours of experimentation (4 cores and Tesla K80 GPU).\nWe tried to experiment with a time series dataset but we found no difference in accuracy between LSTM and RotLSTM. RotLSTM also required significantly more time to train.\n\nReproducibility report: https://www.dropbox.com/s/ok4z66ccg5m6bff/comp-551-final.pdf?dl=0\n", "It is worth noting there is a more recent bAbI LSTM baseline that outperforms the proposed model for almost all of the tasks:\n\nhttps://arxiv.org/abs/1610.09027\n\nTable 2 (in the suppl.). It may be worth comparing to these numbers. In that case the LSTM is jointly trained over all tasks, and is not tuned on a per-task basis. That was with a cell size of 100, trained with RMSProp with a learning rate of 1e-5." ]
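For readers trying to picture the rotation mechanism discussed above, here is a small PyTorch-style sketch of a block-diagonal rotation applied to pairs of cell-state dimensions, with angles predicted from the current input. The module name, the tanh squashing of the angles, and the wiring are illustrative guesses, not taken from the released code.

import math
import torch
import torch.nn as nn

class PairwiseRotation(nn.Module):
    # Block-diagonal rotation: hidden units are grouped into 2-D pairs and each
    # pair is rotated by an angle predicted from the current input.
    def __init__(self, input_size, hidden_size):
        super().__init__()
        assert hidden_size % 2 == 0
        self.angle = nn.Linear(input_size, hidden_size // 2)

    def forward(self, x, c):
        # x: (batch, input_size) current input, c: (batch, hidden_size) cell state
        theta = math.pi * torch.tanh(self.angle(x))           # angles in (-pi, pi)
        c1, c2 = c[:, 0::2], c[:, 1::2]                       # split state into pairs
        r1 = torch.cos(theta) * c1 - torch.sin(theta) * c2    # 2x2 rotation per pair
        r2 = torch.sin(theta) * c1 + torch.cos(theta) * c2
        return torch.stack((r1, r2), dim=2).reshape_as(c)     # re-interleave the pairs

In the RotLSTM described above, such a transform would be applied to the cell state before the output gate and the final tanh; the sketch shows it in isolation.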
[ 4, 4, 3, -1, -1 ]
[ 3, 3, 4, -1, -1 ]
[ "iclr_2018_ByUEelW0-", "iclr_2018_ByUEelW0-", "iclr_2018_ByUEelW0-", "iclr_2018_ByUEelW0-", "iclr_2018_ByUEelW0-" ]
iclr_2018_Syt0r4bRZ
Tree2Tree Learning with Memory Unit
Traditional recurrent neural network (RNN) or convolutional neural network (CNN) based sequence-to-sequence models cannot handle tree-structured data well. To alleviate this problem, in this paper we propose a tree-to-tree model with specially designed encoder and decoder units, which recursively encodes tree inputs into highly folded tree embeddings and decodes the embeddings into tree outputs. Our model can represent the complex information of a tree while also restoring a tree from its embedding. We evaluate our model on a random tree recovery task and a neural machine translation task. Experiments show that our model outperforms the baseline model.
rejected-papers
The problem is interesting, and the reviewers acknowledge it is worth an effort to tackle. Unfortunately, all the reviewers found the work to be too preliminary, without convincing evidence supporting the proposed approach against other alternatives (or on its own).
train
[ "HJA-R_Fxf", "rkP1vRFxz", "rkAf3gcgM", "Hy6misoxf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "public" ]
[ "Summary: the paper proposes a tree2tree architecture for NLP tasks. Both the encoder and decoder of this architecture make use of memory cells: the encoder looks like a tree-lstm to encode a tree bottom-up, the decoder generates a tree top-down by predicting the number of children first. The objective function is a linear mixture of the cost of generating the tree structure and the target sentence. The proposed architecture outperforms recursive autoencoder on a self-to-self predicting trees, and outperforms an lstm seq2seq on En-Cn translation.\n\nComment:\n\n- The idea of tree2tree has been around recently but it is difficult to make it work. I thus appreciate the authors’ effort. However, I wish the authors would have done it more properly.\n- The computation of the encoder and decoder is not novel. I was wondering how the encoder differs from tree-lstm. The decoder predicts the number of children first, but the authors don’t explain why they do that, nor compare this to existing tree generators. \n- I don’t understand the objective function (eq 4 and 5). Both Ls are not cross-entropy because label and childnum are not probabilities. I also don’t see why using Adam is more convenient than using SGD.\n- I think eq 9 is incorrect, because the decoder is not Markovian. To see this we can look at recurrent neural networks for language modeling: generating the current word is conditioning on the whole history (not only the previous word).\n- I expect the authors would explain more about how difficult the tasks are (eg. some statistics about the datasets), how to choose values for lambda, what the contribution of the new objective is.\n\nAbout writing:\n- the paper has so many problems with wording, e.g. articles, plurality.\n- many terms are incorrect, e.g. “dependent parsing tree” (should be “dependency tree”), “consistency parsing” (should be “constituency parsing”)\n- In 3.1, Socher et al. do not use lstm\n- I suggest the authors to do some more literature review on tree generation\n", "This paper proposes a tree-to-tree model aiming to encode an input tree into embedding and then decode that back to a tree. The contributions of the work are very limited. Basic attention models, which have been shown to help model structures, are not included (or compared). Method-wise, the encoder is not novel and decoder is rather straightforward. The contributions of the work are in general very limited. Moreover, this manuscript contains many grammatical errors. In general, it is not ready for publication. \n\nPros:\n- Investigating the ability of distributed representation in encoding input structured is in general interesting. Although there have been much previous work, this paper is along this line.\n\nCons:\n- The contributions of the work are very limited. For example, attention, which have been widely used and been shown to help capture structures in many tasks, are not included and compared in this paper.\n- Evaluation is not very convincing. The baseline performance in MT is too low. It is unclear if the proposed model is still helpful when other components are considered (e.g., attention). \n- For the objective function defined in the paper, it may be hard to balance the \"structure loss\" and \"content loss\" in different problems, and moreover, the loss function may not be even useful in real tasks (e.g, in MT), which often have their own objectives (as discussed in this paper). Earlier work on tree kernels (in terms of defining tree distances) may be related to this work. 
\n- The manuscript is full of grammatical errors, and the following are some of them:\n\"encoder only only need to\"\n\"For for tree reconstruction task\"\n\"The Socher et al. (2011b) propose a basic form\"\n\"experiments and theroy analysis are done\"\n", "This paper presents a model to encode and decode trees in distributed representations. \nThis is not the first attempt of doing these encoders and decoders. However, there is not a comparative evalution with these methods.\nIn fact, it has been demonstrated that it is possible to encode and decode trees in distributed structures without learning parameters, see \"Decoding Distributed Tree Structures\" and \"Distributed tree kernels\".\nThe paper should present a comparison with such kinds of models.\n", "We are tasked to evaluate a research paper as a class project and we need to evaluate your results and their plausibility. Could we have access to your source code for training the Neural Nets and the training data to analyze the results.\n\nIt would be of great help and we could forward a lot of positive feedback hopefully." ]
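As a toy illustration of the two components the reviews discuss (a bottom-up recursive encoder, and a top-down decoder step that predicts the number of children before expanding them), a minimal sketch might look as follows. It is written in plain PyTorch with made-up class names and is not the model from the paper; batching, the memory unit, and attention are all omitted.

import torch
import torch.nn as nn

class Node:
    # Minimal tree node used only for this illustration.
    def __init__(self, word_id, children=()):
        self.word_id, self.children = word_id, list(children)

class TreeEncoder(nn.Module):
    # Bottom-up recursive encoder: each node's vector is computed from its word
    # embedding and the sum of its children's vectors.
    def __init__(self, vocab_size, dim):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.compose = nn.Linear(2 * dim, dim)

    def forward(self, node):
        child_sum = torch.zeros(self.embed.embedding_dim)
        for child in node.children:
            child_sum = child_sum + self.forward(child)
        word = self.embed(torch.tensor(node.word_id))
        return torch.tanh(self.compose(torch.cat([word, child_sum])))

class TreeDecoderStep(nn.Module):
    # One top-down decoding step: from a node state, predict the node label and
    # the number of children (capped at max_children for simplicity).
    def __init__(self, dim, vocab_size, max_children=5):
        super().__init__()
        self.label_head = nn.Linear(dim, vocab_size)
        self.arity_head = nn.Linear(dim, max_children + 1)

    def forward(self, state):
        return self.label_head(state), self.arity_head(state)

A complete decoder would recurse on the predicted arity to grow the output tree, and training would combine a loss over predicted child counts (the structure loss) with a loss over predicted labels (the content loss), as discussed in the reviews.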
[ 2, 5, 4, -1 ]
[ 4, 4, 4, -1 ]
[ "iclr_2018_Syt0r4bRZ", "iclr_2018_Syt0r4bRZ", "iclr_2018_Syt0r4bRZ", "iclr_2018_Syt0r4bRZ" ]
iclr_2018_Hk2MHt-3-
Coupled Ensembles of Neural Networks
We investigate in this paper the architecture of deep convolutional networks. Building on existing state-of-the-art models, we propose a reconfiguration of the model parameters into several parallel branches at the global network level, with each branch being a standalone CNN. We show that this arrangement is an efficient way to significantly reduce the number of parameters while at the same time improving the performance. The use of branches brings an additional form of regularization. In addition to splitting the parameters into parallel branches, we propose a tighter coupling of these branches by averaging their log-probabilities. The tighter coupling favours the learning of better representations, even at the level of the individual branches, as compared to when each branch is trained independently. We refer to this branched architecture as "coupled ensembles". The approach is very generic and can be applied with almost any neural network architecture. With coupled ensembles of DenseNet-BC and a parameter budget of 25M, we obtain error rates of 2.92%, 15.68% and 1.50% respectively on the CIFAR-10, CIFAR-100 and SVHN tasks. For the same parameter budget, DenseNet-BC has error rates of 3.46%, 17.18%, and 1.8% respectively. With ensembles of coupled ensembles of DenseNet-BC networks, with 50M total parameters, we obtain error rates of 2.72%, 15.13% and 1.42% respectively on these tasks.
rejected-papers
The paper studies end-to-end training of a multi-branch convolutional network. This appears to lead to strong accuracies on the CIFAR and SVHN datasets, but it remains unclear whether or not this results transfers to ImageNet. The proposed approach is hardly novel, and lacks a systematic comparison with "regular" ensembling methods and with related mixture-of-experts approaches (for instance: S. Gross et al. Hard Mixtures of Experts for Large Scale Weakly Supervised Vision, 2017; Shazeer et al. Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer, 2017).
train
[ "SJBEkBpmz", "SJXrqMPgf", "Hk8Nwx9xf", "rkbHBeSbM", "B1jYCEamG", "H1OW0NpmM", "SJovT46Qz" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "We have updated the paper based on the reviewer suggestions and also added responses to their questions.\n\nMain updates:\n\n- added figure to demonstarte the model architecture and fusion scheme (Figure 1)\n- added Section G to compare between single-branch and multi-branch models for a fixed training time budget.\n- added Section H to compare between single-branch and multi-branch models in a low training data scenario.\n- added Section I for experiments on ImageNet\n\nUpdate in table 9 on January 11.", "This work proposed a reconfiguration of the existing state-of-the-art CNN model architectures including ResNet and DensNet. By introducing new branching architecture, coupled ensembles, they demonstrate that the model can achieve better performance in classification tasks compared with the single branch counterpart with same parameter budget. Additionally, they also show that the proposed ensemble method results in better performance than other ensemble methods (For example, ensemble over independently trained models) not only in combined mode but also in individual branches.\n\nPaper Strengths:\n* The proposed coupled ensembles method truly show impressive results in classification benchmark (DenseNet-BC L = 118 k = 35 e = 3).\n* Detailed analysis on different ensemble fusion methods on both training time and testing time.\n* Simple but effective design to achieve a better result in testing time with same total parameter budget.\n\t\nPaper Weakness:\n* Some detail about different fusing method should be mentioned in the main paper instead of in the supplementary material.\n* In practice, how much more GPU memory is required to train the model with parallel branches (with same parameter budgets) because memory consumption is one of the main problems of networks with multiple branches.\n* At least one experiment should be carried out on a larger dataset such as ImageNet to further demonstrate the validity of the proposed method.\n* More analysis can be conducted on the training process of the model. Will it converge faster? What will be the total required training time to reach the same performance compared with single branch model with the same parameter budget?\n", "Strengths:\n* Very simple approach, amounting to coupled training of \"e\" identical copies of a chosen net architecture, whose predictions are fused during training. This forces the different model instances to become more complementary.\n* Perhaps counterintuitively, experiments also show that coupled ensembling leads to individual nets that perform better than those produced by separate training.\n* The practical advantages of the proposed approach are twofold:\n1. Given a fixed parameter budget, coupled ensembling leads to better accuracy than a single net or an ensemble of disjointly-trained nets.\n2. For the same accuracy, coupled ensembling yields significant parameter savings.\n\nWeaknesses:\n* Although results are very strong, the proposed models do not outperform the state-of-the-art, except for the models reported in Table 4, which however were obtained by *traditional* ensembling of coupled ensembles. \n* Coupled ensembling requires joint training of all nets in the ensemble and thus is limited by the size of the model that can be fit in memory. Conversely, traditional ensembling involves separate training of the different instances and this enables the learning of an arbitrary number of individual nets. 
\n* I am surprised by the results in Table 2, which suggest that the optimal number of nets in the ensemble is remarkably low (only 3!). It'd be valuable to understand whether this kind of result holds for other network architectures or whether it is specific to this choice of net.\n* Strictly speaking it is correct to refer to the individual nets in the ensembles as \"branches\" and \"basic blocks.\" Nevertheless, I find the use of these terms confusing in the context of the proposed approach, since they are commonly used to denote concepts different from those represented here. I would recommend refraining from using these terms here.\n\nOverall, the paper provides limited technical novelty. Yet, it reveals some interesting empirical findings about the benefits of coordinated training of models in an ensemble.\n", "This paper presents a deep network architecture which processes data using multiple parallel branches and combines the posterior from these branches to compute the final scores; the network is trained in end-to-end, thus training the parallel branches jointly. Existing literature with branching architecture either employ a 2 stage training approach, training branches independently and then training the fusion network, or the branching is restricted to local regions (set of contiguous layers). In effect, this paper extends the existing literature suggesting end-to-end branching. While the technical novelty, as described in the paper, is relatively limited, the thorough experimentation together with detailed comparisons between intuitive ways to combine the output of the parallel branches is certainly valuable to the research community.\n\n+ Paper is well written and easy to follow.\n+ Proposed branching architecture clearly outperforms the baseline network (same number of parameters with a single branch) and thus offer yet another interesting choice while creating the network architecture for a problem\n+ Detailed experiments to study and analyze the effect of various parameters including the number of branches as well as various architectures to combine the output of the parallel branches.\n+ [Ease of implementation] Suggested architecture can be easily implemented using existing deep learning frameworks.\n\n- Although joint end-to-end training of branches certainly brings value compared to independent training, but the increased resource requirements may limits the applicability to large benchmarks such as ImageNet. While authors suggests a way to circumvent such limitations by training branches on separate GPUs but this would still impose limits on the number of branches as well as its ease of implementation.\n- Adding an overview figure of the architecture in the main paper (instead of supplementary) would be helpful.\n- Branched architecture serve as a regularization by distributing the gradients across different branches; however this also suggests that early layers on the network across branches would be independent. It would helpful if authors would consider an alternate archiecture where early layers may be shared across branches, suggesting a delayed branching, with fusion at the final layer.\n- One of the benefits of architectures such as DenseNet is their usefulness as a feature extractor (output of lower layers) which generalizes even to domain other that the dataset; the branched architecture could potentially diminish this benefit.\n\nMinor edits: Page 1. 
'significantly match and improve' => 'either match or improve'\n\nAdditional notes:\n- It would interesting to compare this approach with a conditional training pipeline that sequentially adds branches, keeping the previous branches fixed. This may offer as a trade-off between benefits of joint training of branches vs being able to train deep models with several branches.\n", "Thank you for the review and valuable feedback. Please find our responses to your questions below:\n\n1. Although joint end-to-end training of branches certainly brings value compared to independent training, but the \nincreased resource requirements may limits the applicability to large benchmarks such as ImageNet. While authors \nsuggests a way to circumvent such limitations by training branches on separate GPUs but this would still impose \nlimits on the number of branches as well as its ease of implementation.\n\nCoupled ensemble learning is precisely a way to increase the performance (minimizing the top-1 error rate) for a \ngiven parameter budget (and/or for a given memory budget, see section B) and/or for a given \ntraining time budget (see section G)). Regarding the training on multiple GPUs, branch parallelism \nis a quite natural and efficient way to split the storage and to parallelize the computation but this is not the only \npossible one. Also, our experiments suggest that the network performance does not critically depends on the exact \nnumber of branches.\n\n\n2. Adding an overview figure of the architecture in the main paper (instead of supplementary) would be helpful.\n\nA figure has been inserted in section 3.\n\n\n3. Branched architecture serve as a regularization by distributing the gradients across different branches; however \nthis also suggests that early layers on the network across branches would be independent. It would helpful if authors \nwould consider an alternate architecture where early layers may be shared across branches, suggesting a delayed \nbranching, with fusion at the final layer.\n\nThanks for the suggestion. We planned to investigate this but we did not have enough time before the deadline and \nthe paper is already quite long.\n\n\n4. One of the benefits of architectures such as DenseNet is their usefulness as a feature extractor (output of lower \nlayers) which generalizes even to domain other that the dataset; the branched architecture could potentially diminish \nthis benefit.\n\nWe see no a priori reason why features extracted by branched architecture should be less efficient than those \nextracted from non-branched ones. We even see no a priori reason either why the benefit they bring in classification \ntasks should not be transferred also with the extracted features. We will conduct such transfer \nexperiments in the future.\n\n\n5. It would interesting to compare this approach with a conditional training pipeline that sequentially adds \nbranches, keeping the previous branches fixed. This may offer as a trade-off between benefits of joint training of \nbranches vs being able to train deep models with several branches\n\nThanks for the suggestion. We had planned to investigate this but we did not have enough time before the deadline. ", "Thank you for the review and valuable feedback. Please find our responses to your questions below:\n\n1. 
Some detail about different fusing method should be mentioned in the main paper instead of in the supplementary \nmaterial.\n\nDetails are given in the supplementary material but the fusion methods are also discussed in section 3. A figure has \nalso been inserted to demonstrate the architecture (Figure 1).\n\n\n2. In practice, how much more GPU memory is required to train the model with parallel branches (with same parameter \nbudgets) because memory consumption is one of the main problems of networks with multiple branches.\n\nA discussion of the memory requirements and how we address it has been added to section B of supplementary material.\n\n\n3. At least one experiment should be carried out on a larger dataset such as ImageNet to further demonstrate the \nvalidity of the proposed method.\n\nWe have started experiments on ImageNet, the results are reported in Section I. Results show a benefit from using \ncoupled ensembles. Currently the baseline is not state-of-the-art. We are conducting additional experiments and will \nupdate when they are available.\n\n\n4. More analysis can be conducted on the training process of the model. Will it converge faster? What will be the \ntotal required training time to reach the same performance compared with single branch model with the same parameter \nbudget?\n\nThe multi-branch approach leads to better performance even with a constant training time budget. We have added section G in the supplementary material with new experimental results and a discussion.", "Thank you for the review and valuable feedback. Please find our responses to your questions below:\n\n1. Although results are very strong, the proposed models do not outperform the state-of-the-art, except for the\nmodels reported in Table 4, which however were obtained by *traditional* ensembling of coupled ensembles. \n\nWe found two works which achieve better performances:\n\nCutout regularization: This is a data augmentation scheme which is applied to existing models and improves their \nperformance. It is likely that cutout applied to coupled ensembles will also lead to better performance. In contrast, the proposed coupled ensembles scheme applies to the model architecture itself.\n\nShakeDrop: Modification of previously proposed Shake-Shake method. We propose an architectural deisgn choice. Similar to 'cutout', it is likely that ShakeDrop can be adapted to the coupled ensemble framework, leading to improved performance.\n\nApart from these two works and as far as we know, our results are on par with or better than the state of the art for all parameter budget.\n\n\n2. Coupled ensembling requires joint training of all nets in the ensemble and thus is limited by the size of the \nmodel that can be fit in memory. Conversely, traditional ensembling involves separate training of the different \ninstances and this enables the learning of an arbitrary number of individual nets. \n\nCoupled ensemble learning is precisely a way to increase the performance (minimizing the top-1 error rate) for a \ngiven parameter budget (and/or for a given memory budget, see Section B), and/or for a given training time budget (\nsee section G). As we report in section 4.7, it is possible to use the classical ensemble learning approach on top of \nthe coupled ensemble learning one to obtain further benefit.\n\n\n3. I am surprised by the results in Table 2, which suggest that the optimal number of nets in the ensemble is \nremarkably low (only 3!). 
It'd be valuable to understand whether this kind of result holds for other network \narchitectures or whether it is specific to this choice of net.\n\nThe optimum number probably depends on the the network architecture, on the target task, and on the network size. The\nnetwork size is likely to have a strong influence. The target network size in table 2 is of only 0.8M. On the other\nhand, from table 3, we can see that when the target network size is 32 times bigger, the difference in overall\nperformance is not statistically significant (see section F for number of branches varying from 3 to 8.\n\n\n4. Strictly speaking it is correct to refer to the individual nets in the ensembles as \"branches\" and \"basic blocks.\" \nNevertheless, I find the use of these terms confusing in the context of the proposed approach, since they are \ncommonly used to denote concepts different from those represented here. I would recommend refraining from using \nthese terms here.\n\nYes, this is a problem for which we have not yet found a good solution. \"Instance\", \"element\" or \"column\" could be \nused too. We changed \"basic\" to \"element\" as this is consistent with the ensembling terminology but we kept \"branch\" \nas it actually correspond to the high-level network architecture. We understand that this might be confusing since \nthe internal structure of the element blocks may already be branched (e.g. ResNeXt or Shake-Shake) but this risk of \nconfusion is limited in practice at the level of granularity that we are considering here.\n" ]
[ -1, 6, 6, 6, -1, -1, -1 ]
[ -1, 4, 4, 4, -1, -1, -1 ]
[ "iclr_2018_Hk2MHt-3-", "iclr_2018_Hk2MHt-3-", "iclr_2018_Hk2MHt-3-", "iclr_2018_Hk2MHt-3-", "rkbHBeSbM", "SJXrqMPgf", "Hk8Nwx9xf" ]
iclr_2018_SJLy_SxC-
Log-DenseNet: How to Sparsify a DenseNet
Skip connections are increasingly utilized by deep neural networks to improve accuracy and cost-efficiency. In particular, the recent DenseNet is efficient in computation and parameters, and achieves state-of-the-art predictions by directly connecting each feature layer to all previous ones. However, DenseNet's extreme connectivity pattern may hinder its scalability to high depths, and in applications like fully convolutional networks, full DenseNet connections are prohibitively expensive. This work first experimentally shows that one key advantage of skip connections is to have short distances among feature layers during backpropagation. Specifically, using a fixed number of skip connections, the connection patterns with shorter backpropagation distance among layers have more accurate predictions. Following this insight, we propose a connection template, Log-DenseNet, which, in comparison to DenseNet, only slightly increases the backpropagation distances among layers from 1 to (1 + log_2 L), but uses only L log_2 L total connections instead of O(L^2). Hence, Log-DenseNets are easier to scale than DenseNets, and no longer require careful GPU memory management. We demonstrate the effectiveness of our design principle by showing better performance than DenseNets on tabula rasa semantic segmentation, and competitive results on visual recognition.
rejected-papers
The paper presents an empirical study into sparse connectivity patterns for DenseNets. Whilst sparse connectivity is potentially interesting, the paper does not make a strong argument for such sparse connectivity patterns: in particular, the results on ImageNet suggest that sparse connectivity performs substantially worse than full connectivity (at the same FLOPS-level, Log-DenseNet obtains ~2.5% lower accuracy than baseline DenseNet models, and the best Log-DenseNet is ~4% worse than the best DenseNet). On CamVid, both network architectures appear to perform on par. The paper motivates the model architecture by the high memory consumption of DenseNets but, frankly, that is a very weak motivation: DenseNets are actually very memory-efficient if implemented correctly (https://arxiv.org/pdf/1707.06990.pdf). The fact that such implementations are not well-supported by TensorFlow/PyTorch is a shortcoming of those deep-learning frameworks, not in DenseNets. (In fact, the memory management features that deep-learning frameworks have implemented to make residual networks memory-efficient (for instance, caching GPU memory allocation in PyTorch) are far more complex than the "thousand lines of C++" currently needed to implement a DenseNet correctly.) Such issues will likely be resolved relatively soon by better implementations, and are hardly a good motivation for a different network architecture.
val
[ "BJaucU_gM", "H12ujAYlG", "Hkxey1cxM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper investigates how to impose layer-wise connections in DenseNets most efficiently. The authors propose a connection-pattern, which connects layer i to layer i-2^k, k=0,1,2... The authors also propose maximum backpropgation distance (MBD) for measuring the fluency of gradient flow in the network, and justify the Log-DenseNet's advantage in this framework. Empirically, the author demonstrates the effectiveness of Log-DenseNet by comparing it with two other intuitive connection patterns on CIFAR datasets. Log-DenseNet also improves on FC-DenseNet, where the connection budget is the bottleneck because the feature maps are of high resolutions.\n\n\nStrengths:\n1. Generally, DenseNet is memory-hungry if the connection is dense, and it is worth studying how to sparsify a DenseNet. By showing the improvements on FC-DenseNet, Log-DenseNet demonstrates good potential on tasks which require upsampling of feature maps. \n2. The ablation experiments are well-designed and the visualizations of connectivity pattern are clear.\n\nWeakness:\n1. Adding a comparison with Log-DenseNet and vanilla DenseNet in the Table 2 experiment would make the paper stronger. Also, the NearestHalfAndLog pattern is not used in any latter visual recognition experiments, so I think it's better to just compare LogDenseNet with the two baselines instead. Despite there are CIFAR experiments on Log-DenseNet in latter sections, including results here would be easier to follow.\n2. I would like to see the a comparison with the DenseNet-BC in the segmentation and CIFAR classification tasks, which uses 1x1 conv layers to reduce the number of channels. It should be interesting to study whether it is possible to further sparsify DenseNet-BC, as it has much higher efficiency.\n3. The improvement of efficiency on classifications task is not that significant.\n", "This paper introduces a new connectivity pattern for DenseNets, which encourages short distances among layers during backpropagation and gracefully scales to wider and deeper architectures. Experiments are performed to analyze the importance of the skip connections’ place in the context of image classification. Then, results are reported for both image classification and semantic segmentation tasks.\n\nThe clarity of the presentation could be improved. The main contribution of the paper is a network design that places skip connections to minimize the distances between layers, increasing the distance from 1 to 1 + log L when compared to traditional DenseNets. This design principle allows to mitigate the memory required to train DenseNets, which is critical for applications such as semantic segmentation where the input resolution has to be recovered.\n\nExperiments seem well executed; the authors consider several sparse connectivity patterns for DenseNets and provide empirical evidence highlighting the advantages of having a short maximum backpropagation distance (MBD). Moreover, they provide an analysis on the trade-off between the performance of a network and its computational cost.\n\nAlthough literature review is quite extensive, [a] might be relevant to discuss in the Network Compression section.\n[a] https://arxiv.org/pdf/1412.6550.pdf\n\nIt is not clear why Log-DenseNets would be easier to implement than DenseNets, as mentioned in the abstract. Could the authors clarify that?\n\nIn Tables 1-2-3, it would be good to add the results for Log-DenseNet V2. 
Adding the MBD of each model in the tables would also be beneficial.\n\nIn Table 3, what does “nan” accuracy mean? (DeepLab-LFOV)\n\nFinally, the authors might want to review the citep/cite use in the manuscript. ", "The paper proposes a nice idea of sparsification of skip connections in DenseNets. The authors decide to use a principle for sparsification that would minimize the distance among layers during backpropagation. \n\nThe presentation of the paper could be improved. The paper presents an elegant and simple idea in a dense and complex way, making the paper difficult to follow. E.g., Fig 1d is discussed in the Appendix and not in the main body of the paper; thus, it could be moved to the Appendix section.\n\nTables 1 and 3 present the results only for LogDenseNet V1; would it be possible to add results for V2, which has a different MBD? Also, the budget for the skip connections is defined as log(i) in Table 1 while Table 2 has a budget of log(i/2); would it be possible to add the total number of skip connections to the tables? It would be interesting to compare the total number of skip connections in Jegou et al. to LogDenseNet V1 in Table 3.\n\nOther issues:\n- Table 3 has an accuracy of nan. What does it mean? Not available or not a number? \n- L is used as the depth; however, in Table 1 it appears as short for Log-DenseNetV1. Would it be possible to use another letter here?\n- “…, we make x_i also take the input from x_{i/4}, x_{i/8}, x_{i/16}…”. Shouldn’t x_{i/2} be used too?\n- I’m not sure I understand the reasons behind the blurred image in Fig 2 at ½. It is mentioned that “it and its feature are at low resolution”. Could the authors comment on that?\n- Abstract: “… Log-DenseNets are easier than DenseNet to implement and to scale.” It is not clear why LogDenseNets would be easier to implement. " ]
[ 6, 6, 5 ]
[ 4, 4, 4 ]
[ "iclr_2018_SJLy_SxC-", "iclr_2018_SJLy_SxC-", "iclr_2018_SJLy_SxC-" ]
iclr_2018_HknbyQbC-
Generating Adversarial Examples with Adversarial Networks
Deep neural networks (DNNs) have been found to be vulnerable to adversarial examples resulting from adding small-magnitude perturbations to inputs. Such adversarial examples can mislead DNNs to produce adversary-selected results. Different attack strategies have been proposed to generate adversarial examples, but how to produce them with high perceptual quality and more efficiently requires more research effort. In this paper, we propose AdvGAN to generate adversarial examples with generative adversarial networks (GANs), which can learn and approximate the distribution of original instances. For AdvGAN, once the generator is trained, it can generate adversarial perturbations efficiently for any instance, so as to potentially accelerate adversarial training as a defense. We apply AdvGAN in both semi-whitebox and black-box attack settings. In semi-whitebox attacks, there is no need to access the original target model after the generator is trained, in contrast to traditional white-box attacks. In black-box attacks, we dynamically train a distilled model for the black-box model and optimize the generator accordingly. Adversarial examples generated by AdvGAN on different target models have a high attack success rate under state-of-the-art defenses compared to other attacks. Our attack has placed first with 92.76% accuracy on a public MNIST black-box attack challenge.
rejected-papers
The paper presents AdvGAN: a GAN that is trained to generate adversarial examples against a convolutional network. The motivation for this method is unclear: the proposed attack does not outperform simpler attack methods such as the Carlini-Wagner attack. In white-box settings, a clear downside for the attacker is that it needs to re-train its GAN every time the defender changes its convolutional network. More importantly, the work appears preliminary. In particular, the lack of extensive quantitative experiments on ImageNet makes it difficult to compare the proposed approach to alternative attack methods such as (I-)FGSM, DeepFool, and Carlini-Wagner. The fact that AdvGAN performs well on MNIST is nice, but MNIST should be considered for what it is: a toy dataset. If AdvGANs are, as the authors state in their rebuttal, fast and good at generating high-resolution images, then why not perform comprehensive experiments with AdvGANs on ImageNet (rather than focusing on a small number of images on a single target, as the authors did in their revision)?
train
[ "BkkpVvWEM", "rJFANDH4f", "SJgWlg6yM", "BJjc-tsef", "S1gL2gTlf", "Skv0tazVz", "Hk6T-Bomz", "B1-sJrjQz", "B1OZgBoQG", "Sy3y-GGbf", "rkkYlehxz", "HJZ_Lhjgf", "BkGQbf9lz", "BylMzPmxM", "B1IfJszxM", "BknKcXbxz", "HJgQmzbgM", "HJpmU2JeG", "HkZgMFp1z", "HJ2cJmgJf" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "public", "public", "author", "public", "public", "author", "public", "author", "public", "author", "public" ]
[ "Changes made in our revised version are listed as below:\n- Added perturbation plot in Figure 3(c)(d) and Fig 4(b) for CIFAR-10 and ImageNet, respectively.\n- Added a comparison between AdvGAN, FGSM, and optimization methods, comparing the relative changes from original images to adversarial examples (Table 7 in the appendix). \n- Added a detailed description in section 3.2 about how to control distortion amount generated by AdvGAN.\n- Added suggested references and updated section 2 to include more comprehensive analysis for related work.\n- Updated the wording throughout the paper to make it more clear.\n- Added human perceptual study and quantitative results for AdvGAN on high resolution images in Section 4.5\n- Added additional generated adversarial examples by AdvGAN on high resolution images in Figure 8 in Appendix.\n\nWe would like to thank the reviewers again for the useful feedbacks and suggestions.", "Thanks for the valuable feedback. Regarding the reviewer’s request about high resolution images, we have added (1) a quantitative experiment on attack success rates, (2) a user study on perceptual realism of the examples, and (3) additional qualitative examples, which demonstrate that AdvGAN can effectively generate high resolution adversarial examples. The details are as follows.\n\nWe generate 100 high resolution (299x299) adversarial examples under an L_infinity bound of 0.01 (pixel values are in range [0,1]). This competition provided a dataset compatible with ImageNet. We observe that the attack success rate of AdvGAN is 100%. Section 4.5 details the experiment settings. \n\nIn order to evaluate the perceptual realism of high resolution adversarial examples generated by AdvGAN, we have added a human study in Section 4.5. In our study, participants chose AdvGAN’s adversarial examples as more realistic over the original images in 49.4% of the trials (matching the realism of the original images results in around 50%). This experiment shows that these high resolution AdvGAN adversarial examples are about as realistic as benign images. \n\nIn addition to the quantitative experiment and the user study, we include some high resolution adversarial examples in Figure 8. \n", "I thank the authors for the thoughtful response and rebuttal. The authors have substantially updated their manuscript and improved the presentation.\n\nRe: Speed. I brought up this point because this was a bulleted item in the Introduction in the earlier version of the manuscript. In the revised manuscript, this bullet point is now removed. I will take this point to be moot.\n\nRe: High resolution. The authors point to recent GAN literature that provides some first results with high resolution GANs but I do not see quantitative evidence in the high resolution setting for this paper. (Figure 4 provides qualitative examples from ImageNet but no quantitative assessment.)\n\nBecause the authors improved the manuscript, I upwardly revised my score to 'Ok but not good enough - rejection'. I am not able to accept this paper because of the latter point.\n==========================\n\nThe authors present an interesting new method for generating adversarial examples. Namely, the author train a generative adversarial network (GAN) to adversarial examples for a target network. 
The authors demonstrate that the network works well in the semi-white box and black box settings.\n\nThe authors wrote a clear paper with great references and clear descriptions.\n\nMy primary concern is that this work has limited practical benefit in a realistic setting. Addressing each and every concern is quite important:\n\n1) Speed. The authors suggest that training a GAN provides a speed benefit with respect to other attack techniques. The FGSM method (Goodfellow et al, 2015) is basically 1 inference operation and 1 backward operation. The GAN is 1 forward operation. Granted this results in a small difference in timing 0.06s versus 0.01s, however it would seem that avoiding a backward pass is a somewhat small speed gain.\n \nFurthermore, I would want to question the practical usage of having an 'even faster' method for generating adversarial examples. What is the reason that we need to run adversarial attacks 'even faster'? I am not aware of any use-cases, but if there are some, the authors should describe the rationales at length in their paper.\n\n2) High spatial resolution images. Previous methods, e.g. FGSM, may work on arbitrarily sized images. At best, GANs generate reasonable images that are lower resolutions (e.g. < 128x128). Building GAN's that operate above-and-beyond moderate spatial resolution is an open research topic. The best GAN models for generating high resolution images are difficult to train and it is not clear if they would work in this setting. Furthermore, images with even higher resolutions, e.g. 512x512, which is quite common in ImageNet, are difficult to synthesizes using current techniques.\n\n3) Controlling the amount of distortion. A feature of previous optimization based methods is that a user may specify the amount of perturbation (epsilon). This is a key feature if not requirement in an adversarial perturbation because a user might want to examine the performance of a given model as a function of epsilon. Performing such an analysis with this model is challenging (i.e. retraining a GAN) and it is not clear if a given image generated by a GAN will always achieve a given epsilon perturbation/\n\nOn a more minor note, the authors suggest that generating a *diversity* of adversarial images is of practical import. I do not see the utility of being able to generate a diversity of adversarial images. The authors need to provide more justification for this motivation.", "This paper describes AdvGAN, a conditional GAN plus adversarial loss. AdvGAN is able to generate adversarial samples by running a forward pass on generator. The authors evaluate AdvGAN on semi-white box and black box setting.\n\nAdvGAN is a simple and neat solution to for generating adversary samples. The author also reports state-of-art results.\n\nComment:\n\n1. For MNIST samples, we can easily find the generated sample is a mixture of two digitals. Eg, for digital 7 there is a light gray 3 overlap. I am wondering this method is trying to mixture several samples into one to generate adversary samples. For real color samples, it is harder to figure out the mixture.\n2. 
Based on mixture assumption, I suggest the author add one more comparison to other method, which is relative change from original image, to see whether AdvGAN is the most efficient model to generate the adversary sample (makes minimal change to original image).\n\n\n\n", "The paper proposes a way of generating adversarial examples that fool classification systems.\nThey formulate it for a blackbox and a semi-blackbox setting (semi being, needed for training their own network, but not to generate new samples).\n\nThe model is a residual gan formulation, where the generator generates an image mask M, and (Input + M) is the adversarial example.\nThe paper is generally easy to understand and clear in their results.\nI am not awfully familiar with the literature on adversarial examples to know if other GAN variants exist. From this paper's literature survey, they dont exist. \nSo this paper is innovative in two parts:\n- it applies GANs to adversarial example generation\n- the method is a simple feed-forward network, so it is very fast to compute\n\nThe experiments are pretty robust, and they show that their method is better than the proposed baselines.\nI am not sure if these are complete baselines or if the baselines need to cover other methods (again, not fully familiar with all literature here).\n", "We thank the commenter for the question!\nWe are performing targeted attack where we input the target in the loss function as shown in function eq 2, t denotes our target, and we train a generator for a specific target. During test, the trained network will generate the targeted attack for any test image. \n", "We thank the reviewer for the thoughtful comments and suggestions.\n\nSpeed. Speed is just one advantage of our method and is not the main motivation of our method. We agree with the reviewer that FGSM is already fast enough for most applications. However, our proposed model is much more effective in the fooling rate against both white-box and blackbox settings than the FGSM method and is still faster than the FGSM method. In wide resnet setting on CIFAR, FGSM takes 0.39s to generate 100 examples (1 forward and 1 backward pass through the classifier), while AdvGAN takes 0.16s to generate 100 examples (1 forward pass through the generator). In our experiments, the classifier was a wide resnet with 46.16M parameters, while the generator had 0.24M parameters. The speed difference is even larger with deeper classifiers. Moreover, in AdvGAN, the forward operation does not use classifier’s network, but uses generator’s network. Overall, we claim to have developed a faster and more effective alternative method to generating adversarial examples, but improving the speed is just a byproduct for us and generating more photorealistic and effective adversarial examples in both semi white-box and blackbox settings is the main goal.\n\nHigh spatial resolution images. Early GANs have had this problem. However, we claim that AdvGAN still works at high spatial resolution (and it is not unique in doing so). Here are three techniques we applied in AdvGAN.\n(i) Previous work in high resolution: Our method is built on image-to-image translation and conditional GANs (e.g. [pix2pix], [CycleGAN]) rather than unconditional GANs (e.g. [vanilla GANs], [DCGAN]). Many conditional GAN methods have been shown to be able to produce photorealistic results at relatively high resolution (256x256 and 512x512 from [pix2pix] and [CycleGAN]). The recent pix2pixHD paper on arXiv from NVIDIA [Wang et al. 
2017] can even produce 2k photo-realistic images. Even recent unconditional GANs like progressive GANs [Karras et al. 2017] are able to produce 1k images.\n(ii) Retaining details from original image: Our goal is to produce the perturbation rather than the final image: output = input + G(input). Details and textures are copied from the input image.\n(iii) Resolution-independent architecture: Our model is fully convolutional and can be applied to input images with arbitrary sizes, similar to [pix2pix] and [CycleGAN].\n\nControlling the amount of distortion. We added more detailed description in the updated paper about how we control the amount of perturbation. Basically, we use parameter c within the hinge loss as shown in eq. (3) to allow users to specify the perturbation amount (epsilon). Note that AdvGAN can explicitly control the amount of perturbation since in the MNIST challenge, it is strictly required that the perturbation is bounded within 0.3 in terms of L-infinity. So the competition results also show that we are able to bound the perturbation accurately so as to win the challenge.\n\nWhy are we interested in the diversity of adversarial examples? We have seen that ensemble adversarial training works better than adversarial training against FGS + rand [Florian et al. 2017]. This indicates that more diverse adversarial examples are needed to perform adversarial training as a defense. In addition, exploring other diverse adversarial examples can help us better understand the space of adversarial examples. For these reasons, we are interested in how to produce diverse adversarial examples, but indeed, we have not made it the main goal of AdvGAN.\n\nReference\n[Pix2pix] Isola, Phillip, et al. \"Image-to-image translation with conditional adversarial networks.\" arXiv preprint arXiv:1611.07004 (2016).\n[CycleGAN] Zhu, Jun-Yan, et al. \"Unpaired image-to-image translation using cycle-consistent adversarial networks.\" arXiv preprint arXiv:1703.10593 (2017).\n[Vanilla GAN] Goodfellow, Ian, et al. \"Generative adversarial nets.\" Advances in neural information processing systems. 2014.\n[DCGAN] Radford, Alec, Luke Metz, and Soumith Chintala. \"Unsupervised representation learning with deep convolutional generative adversarial networks.\" arXiv preprint arXiv:1511.06434 (2015).\n[Wang et al. 2017] Wang, Ting-Chun, et al. \"High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs.\" arXiv preprint arXiv:1711.11585 (2017).\n[Karras et al. 2017] Karras, Tero, et al. \"Progressive growing of gans for improved quality, stability, and variation.\" arXiv preprint arXiv:1710.10196 (2017).\n[Florian et al. 2017] Tramèr, Florian, et al. \"Ensemble Adversarial Training: Attacks and Defenses.\" arXiv preprint arXiv:1705.07204 (2017).\n", "We thank the reviewer for the thoughtful comments and suggestions. As mentioned by the reviewer, for baselines comparison, the proposed AdvGAN is currently the best attack method in Madry et al.’s MNIST Adversarial Examples Challenge (https://github.com/MadryLab/mnist_challenge), which includes many state-of-the-art attack methods.\nIn our updated version, we have also added another comparison on MNIST and CIFAR-10 with FGSM and optimization methods, showing the perturbation amount in Table 7 in the appendix. ", "We thank the reviewer for the thoughtful comments and suggestions. We plot the perturbation in Figure 3 (c) (d) and 4 (b) in the updated version. 
From these plots, we can see that AdvGAN’s perturbations (amplified by 10×) do not resemble images from CIFAR-10/ImageNet. \nFor fair comparison against other attacks, we limit the perturbation to 0.3 L_infinity distance for MNIST and 8 for CIFAR-10. \n\nWe have compared the attack success rate of adversarial examples by different methods under defenses in Tables 3 and 4 and show that AdvGAN can often achieve high attack success rate under the same perturbation budget compared to other methods. \nWe have also added another comparison on MNIST and CIFAR-10 with FGSM and optimization methods, showing the relative change from original image in Table 7 in the appendix. \nFrom the table, we can see that AdvGAN adds comparable perturbation with CW and less perturbation compared with FGSM. As AdvGAN aims to generate photo realistic images with bounded perturbation instead of minimizing the perturbation as CW does, the perturbation added by AdvGAN is slightly higher compared to CW.\n", "In the architectures described in Fig 1 and Appendix B there seems to be no provision for conditioning the adversarial examples generated on the target label. I don't quite understand how you could generate targeted adversarial examples for a test image without providing the target label as an input to the generator. Thanks in advance for your answer.", "You did evaluate your attack against different defense methods. But, the question is what would be the *appropriate* defense against your attack? And have you tested your attack against that?", "We do think about defense when proposing an attack. In this paper, we tested our attack on defended models in the evaluation section. In our opinion, the results show that our attack is challenging to defend against because it successfully attacks different kinds of defenses.\n\nTo the above commenter, your enthusiasm is appreciated, but we don’t see a ‘straightforward’ way to defend against this attack. It would be helpful if you can provide a proposed defense algorithm for it. We also plan to open source our attack code. Again, our attack was ranked number 1 on the MNIST challenge by Madry’s group, which is a state-of-the-art defense. In our opinion, this suggests that it is not *straightforward* to defend against.\n", "When proposing an attack, we need to think about the right defense. I think we agree that a fixed adversarial example generator can be defended by training a discriminator. Your point that that discriminator cannot defend against other attacks is irrelevant, because here we are talking about defending against your attack, not others'. \n\nTo summarize, while defending against your attacks seems straightforward, this is not the case for other attacks. \n\n", "Trying to follow this discussion. What was it that makes adversarial examples generated by GAN easier defended compared to other attacks? ", " I think the meaning of the question has continued to change subtly, with this latest comment bringing up the use of “the same GAN” for doing the defense. What defense method are you thinking of? (Keep in mind that the discriminator in the GAN cannot distinguish the real and generated images by definition of successfully training a GAN.)\nThat aside, the claim that adversarial examples generated by GAN can be easier defended is not the take-away here. Many fixed attacks can easily be mitigated; this would not be unique to AdvGAN. 
See Carlini & Wagner’s paper [https://arxiv.org/abs/1705.07263] for many such mitigations and simple workarounds that show that they are ultimately ineffective as defenses. We actually show in the paper (table 3,4,5) many cases where adversarial examples generated by AdvGAN are more successful against defenses than other strong attacks. Moreover, we apply AdvGAN on the MNIST challenge (https://github.com/MadryLab/mnist_challenge) and achieve 88.93% accuracy on the published robust model in the semi-whitebox setting, and 92.76% in the black-box setting, which wins the top position in the challenge. This shows that adversarial examples generated by GAN are actually harder to defend compared with other attacks.", "So, I guess this means that adversarial examples generated by GAN can be easier defended compared to other attacks. Essentially, the same GAN that does the attack can do the defense.", "The answer a very limited yes, where it would appear to work, but may not be a good idea, since this problem is not perfectly symmetric. It is sufficient to be used as an attack for GAN to generate a few different adversarial examples. However, an efficient defense with GAN has to consider the much broader and more complex space of all adversarial examples.\n\nFor a fixed AdvGAN instance, you should be able to train a discriminator to differentiate the outputs of that specific AdvGAN from benign data. Lee et al. have proposed a related method of using GAN for adversarial training [https://arxiv.org/abs/1705.03387].\nHowever, the resulting discriminator is not very useful as a general defense, because it does not detect other attacks, possibly even another instance of AdvGAN. \n\nIn a similar setting, Carlini & Wagner have shown that the C&W attack can bypass a classifier that’s been trained to detect C&W attacks [https://arxiv.org/abs/1705.07263].\n", "My question was: can a GAN defend against adversarial examples generated by your method (using GAN)?", "It doesn’t follow so easily. We’ve shown that a GAN can generate attacks and that it can generate a variety of adversarial examples, but there’s no evidence that the range of outputs covers the entire space of adversarial examples. Furthermore, GANs only learn to approximate a true distribution based on limited training data--just like a classifier in this respect--so they may be susceptible to adversarial examples in the same way.", "If a GAN can learn to attack, can't another GAN learn the adversarial perturbations and defend against it?\n" ]
[ -1, -1, 4, 6, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, 4, 4, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_HknbyQbC-", "SJgWlg6yM", "iclr_2018_HknbyQbC-", "iclr_2018_HknbyQbC-", "iclr_2018_HknbyQbC-", "Sy3y-GGbf", "SJgWlg6yM", "S1gL2gTlf", "BJjc-tsef", "iclr_2018_HknbyQbC-", "HJZ_Lhjgf", "BkGQbf9lz", "B1IfJszxM", "BknKcXbxz", "BknKcXbxz", "HJgQmzbgM", "HJpmU2JeG", "HkZgMFp1z", "HJ2cJmgJf", "iclr_2018_HknbyQbC-" ]