paper_id: stringlengths 19–21
paper_title: stringlengths 8–170
paper_abstract: stringlengths 8–5.01k
paper_acceptance: stringclasses (18 values)
meta_review: stringlengths 29–10k
label: stringclasses (3 values)
review_ids: sequence
review_writers: sequence
review_contents: sequence
review_ratings: sequence
review_confidences: sequence
review_reply_tos: sequence
iclr_2018_rkA1f3NpZ
Ensemble Methods as a Defense to Adversarial Perturbations Against Deep Neural Networks
Deep learning has become the state of the art approach in many machine learning problems such as classification. It has recently been shown that deep learning is highly vulnerable to adversarial perturbations. Taking the camera systems of self-driving cars as an example, small adversarial perturbations can cause the system to make errors in important tasks, such as classifying traffic signs or detecting pedestrians. Hence, in order to use deep learning without safety concerns a proper defense strategy is required. We propose to use ensemble methods as a defense strategy against adversarial perturbations. We find that an attack leading one model to misclassify does not imply the same for other networks performing the same task. This makes ensemble methods an attractive defense strategy against adversarial attacks. We empirically show for the MNIST and the CIFAR-10 data sets that ensemble methods not only improve the accuracy of neural networks on test data but also increase their robustness against adversarial perturbations.
rejected-papers
The paper empirically evaluates the effectiveness of ensembles of deep networks against adversarial examples. The paper adds little to the existing literature in this area: a detailed study on "ensemble adversarial training" already exists, and the experimental evaluation in this paper is limited to MNIST and CIFAR (results on those datasets do not necessarily transfer very well to much higher-dimensional datasets such as ImageNet). Moreover, the reviewers identify several shortcomings in the experimental setup of the paper.
train
[ "By2d_sBef", "r1k7g8Oxz", "B1fsb6bWz", "rJB1WReNG", "HkHuuvBMf", "SJ1fdDHfM", "SJ7nwDrfG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "This paper describes the use of ensemble methods to improve the robustness of neural networks to adversarial examples. Adversarial examples are images that have been slightly modified (e.g. by adding some small perturbation) so that the neural network will predict a wrong class label.\n\nEnsemble methods have been used by the machine learning community since long time ago to provide more robust and accurate predictions.\n\nIn this paper the authors explore their use to increase the robustness of neural networks to adversarial examples.\n\nDifferent ensembles of 10 neural networks are considered. These include techniques such as bagging or injecting noise in the \ntraining data. \n\nThe results obtained show that ensemble methods can sometimes significantly improve the robustness against adversarial examples. However,\nthe performance of the ensemble is also highly deteriorated by these examples, although not as much as the one of a single neural network.\n\nThe paper is clearly written.\n\nI think that this is an interesting paper for the deep learning community showing the benefits of ensemble methods against adversarial\nexamples. My main concern with this paper is the lack of comparison with alternate techniques to increase the robustness against adversarial examples. The authors should have compared with the methods described in:\n\n(Goodfellow et al., 2014; Papernot et al., 2016c), \n(Papernot et al., 2016d) \n(Gu & Rigazio, 2014)\n\nFurthermore, the ensemble approach has the main disadvantage of increasing the prediction time by a lot. For example, with 10 elements in the ensemble, predictions are 10 times more expensive.\n------------------------------\nI have read the updated version of the paper. I think the authors have done a good job comparing with related techniques. Therefore, I have slightly increased my score.\n", "Summary: This paper proposes to use ensembling as an adversarial defense mechanism. The defense is evaluated on MNIST and CIFAR10 ans shows reasonable performance against FGSM and BIM.\n\nClarity: The paper is clearly written and easy to follow. \n\nOriginality: Building an ensemble of models is a well-studied strategy that was shown long ago to improve generalization. As far as I know, this paper is however the first to empirically study the robustness of ensembles against adversarial examples. \n\nQuality: While this paper contributes to show that ensembling works reasonably well against adversarial examples, I find the contribution limited in general.\n- The method is not compared against other adversarial defenses. \n- The results illustrate that adding Gaussian noise on the training data clearly outperforms the other considered ensembling strategies. However, the authors do not go beyond this observation and do not appear to try to understand why it is the case. \n- Similarly, the Bagging strategy is shown to perform reasonably well (although it appears as a weaker strategy than Gaussian noise) but no further analysis is carried out. For instance, it is known that the reduction of variance is maximal in an ensemble when its constituents are maximally decorrelated. It would be worth studying more systematically if this correlation (or 'diversity') has an effect on the robustness against adversarial examples. \n- I didn't understand the motivation behind considering two distinct gradient estimators. Why deriving the exact gradient of an ensemble is more complicated?\n\nPros: \n- Simple and effective strategy.\n- Clearly written paper. 
\nCons:\n- Not compared against other defenses.\n- Limited analysis of the results. \n- Ensembling neural networks is very costly in terms of training. This should be considered.\n\nOverall, this paper presents an interesting and promising direction of research. However, I find the current analysis (empirical and/or theoretical) to be too limited to constitute a solid enough piece of work. For this reason, I do not recommend this paper for acceptance. ", "In this manuscript, the authors empirically investigated the robustness of several different deep neural network ensembles to two types of attacks, namely FGSM and BIM, on two popular datasets, MNIST and CIFAR10. The authors concluded that the ensembles are more accurate on both clean and adversarial samples than a single deep neural network. Therefore, the ensembles are more robust in terms of the ability to correctly classify adversarial examples.\n\nAs the authors stated, an attack that is designed to fool one network does not necessarily fool the other networks in the same way. This is likely why ensembles appear more robust than single deep learners. However, the robustness of ensembles to white-box attacks that are generated from the ensemble is still low for FGS. Generally speaking, although FGS attacks generated from one network are less able to fool the whole ensemble, generating FGS adversaries from a given ensemble is still able to effectively fool it. Therefore, if the attacker has access to the ensemble or even knows the classification system based on that ensemble, then the ensemble-based system is still vulnerable to attacks generated specifically from it. Simple ensemble methods are not likely to confer significant robustness gains against adversaries.\n\nIn contrast to the FGS results, BIM-Grad1 is surprisingly able to fool the ensemble more than BIM-Grad2. Therefore, it seems that if the attacker makes BIM adversaries from only a single classifier, then she can simply yet effectively mislead the whole ensemble. In comparison to BIM-Grad2, the BIM-Grad1 results show that BIM attacks from one network (BIM-Grad1) can more successfully fool the other networks in the ensembles in a similar way. BIM-Grad2 is much less able to fool the ensemble-based system, even though this attack is generated from the ensemble (a white-box attack). In order to confirm the robustness of the ensembles to BIM attacks, the authors could run more experiments generating BIM-Grad2 attacks with a higher number of iterations.\n\nIndeed, the low number of iterations might explain the lower rate of success in generating adversaries by BIM-Grad2. In fact, BIM adversaries from the ensembles might require a larger number of iterations to effectively fool the majority of the members in the ensembles. Therefore, increasing the number of iterations could increase the success rate of generating BIM-Average Grad2 adversaries. Note that in this case, it is recommended to compare the amount of distortion (perturbation) for different numbers of iterations in order to indicate the effectiveness of the ensembles against white-box BIM attacks.\n\nDespite averaging the output probabilities to compute the ensemble's final prediction, the authors generated the adversaries from the ensemble by computing the sum of the gradients of the classifiers' losses. A proper approach would have been to average these gradients. 
The fact that the sum is not divided by the number of members (i.e., the sum of gradients instead of the average of gradients) increases the step size of the adversarial method proportionally to the ensemble size, raising questions about the validity of the comparison with the single-model adversarial generation.\n\nOverall, I found the paper to have several methodological flaws in the experimental part, and to be rather light in terms of novel ideas. As noted in the introduction, the idea of using ensembles for enhancing robustness has already been proposed. Writing a paper only to restate it is too light for acceptance. Moreover, the experimental setup uses a lot of space for comparing results on standard datasets (i.e., MNIST and CIFAR10), even with a long presentation of these datasets. Several issues are raised in the current experiments and require adjustments. Experiments should also be more elaborate to make the case stronger, following at least some of the indications provided. \n", "We would like to acknowledge that we uploaded a revised version of our paper on December 18. Those changes were motivated by the comments of the reviewers and can be summarized as follows:\n\n1. We added a subsection and a table to compare our method against other popular methods (Adversarial Training and Defensive Distillation).\n2. We added a paragraph describing the advantages as well as the disadvantages of using ensemble methods as a defense method.\n3. We reformulated a few sentences and fixed a few typos.\n\nWe described all these changes in our responses to the reviewers on December 18. ", "We highly appreciate your feedback.\n\nWith respect to your concerns about comparability with other methods: We added a section where we compare ensembles with “adversarial learning” (Goodfellow et al., 2014; Papernot et al., 2016c) and with “defensive distillation” (Papernot et al., 2016d). We hope that this resolves your concerns about comparability. Note that we did not compare with (Gu & Rigazio, 2014), due to the non-trivial parameter choices required by this method (particularly the choice of the network architecture).\n\nWe agree with you that the increased computational time when using ensembles should be mentioned in the paper. Hence, we added a new paragraph to the manuscript about the advantages and disadvantages of using ensembles, including topics like prediction time and memory requirements, but also higher accuracy on unperturbed test data (we found this is one of the main advantages of ensembles over other defense methods).\n", "We greatly appreciate your insightful feedback. We would like to respond to your comments concerning quality:\n\n1.\tWe added a comparison with other defense methods, specifically with “adversarial training” (Goodfellow et al., 2014; Papernot et al., 2016c) and with “defensive distillation” (Papernot et al., 2016d).\n\n2.\tIt is true that adding Gaussian noise produced the best defense strategy; however, this came at the cost of a reduction in accuracy on (unperturbed) test data of about 7% in the Cifar-10 case. That is why we considered Bagging the better method: it might be a little worse on adversarially perturbed data but better than the Gaussian noise case on unperturbed test data. It is our belief that in real applications unperturbed data is the standard case and adversarially attacked data is a special event. Hence, losing accuracy on test data can be quite problematic. 
\n\n3.\tThanks for your idea of evaluating the effect of the diversity of the classifiers on the defensive performance of the ensembles. We think that this is worth looking into, but we believe it would go beyond the scope of our manuscript.\n\n4.\tThe objective of using Grad. 1 was to study the transferability of an attack on one classifier to all classifiers in the ensemble. As we mentioned in the paper, Grad. 2 represents the correct gradient to attack an ensemble.\n\n5.\tWe agree with you that computing Grad. 1 is no more complicated than computing Grad. 2. Hence, we changed the corresponding sentences accordingly.\n\nYou are correct about the increased computational costs when using ensembles. We therefore added a new paragraph where we highlight the advantages of using ensembles as well as the disadvantages (like an increase in computational costs and memory requirements). The advantages section especially includes the increase in accuracy on unperturbed test data while still performing well against adversaries.\n", "Thank you very much for your constructive feedback and your valuable comments. \n\nWe would first like to respond to your last comment about the originality of our work: To the best of our knowledge, this is the first paper to empirically evaluate the robustness of ensembles against adversarial attacks. The first paper we cited in this context was about how to build ensembles of specialist defenses (classifiers that classify on a subset of classes only and are then joined to be able to predict all classes; the focus is rather on how to build these specialist classifiers), and the second paper showed an attack on how to break such specialist defenses. However, we did not find any paper that considered general ensemble methods as a defense mechanism and analyzed what kind of ensembles are more robust.\n\nWe agree with you that by attacking with FGSM in combination with Adv. 2 and BIM in combination with Adv. 1 one obtains the strongest attack against ensembles, which we also wrote in the experimental part of our paper. \n\nRegarding your comment that the accuracy on attacked ensembles is relatively low, we would like to highlight that all images were scaled to the unit interval [0,1]. Hence, for example, the BIM attack on MNIST could make a maximum distortion of 20% at each pixel and in the CIFAR-10 case up to 2%. We added a comparison of our method with other defense methods (defensive distillation and adversarial training) on the same kind of attacks to show the effectiveness of ensembles.\n\nWith respect to running BIM-Grad. 2 attacks with more iterations: You are correct that by increasing the number of iterations one can get somewhat better attacks (however, this comes at a significant increase in computational cost). Nevertheless, we had to fix the parameters for our evaluations. Note that in the BIM attack the values are clipped to be in an epsilon neighborhood of the true image. This might be why running the attacks for more iterations has no major effect.\n\nYou mentioned that in Grad. 2 the average of the gradients might be better than the sum of the gradients. Here, we would like to point out that in both the FGSM and the BIM attack one always computes the sign function of the gradients, and sign(\sum(gradients)) = sign(\average(gradients)). However, we agree with you that the average is the correct gradient for our weighting system (even though it results in the very same FGSM and BIM attacks). 
Hence, we changed our manuscript accordingly.\n\nRegarding your final comment about the experimental part: we added a comparison of our method with “defensive distillation” (Papernot et al., 2016d) and “adversarial training” (Goodfellow et al., 2014; Papernot et al., 2016c) to make our case stronger.\n" ]
[ 7, 5, 4, -1, -1, -1, -1 ]
[ 3, 3, 4, -1, -1, -1, -1 ]
[ "iclr_2018_rkA1f3NpZ", "iclr_2018_rkA1f3NpZ", "iclr_2018_rkA1f3NpZ", "iclr_2018_rkA1f3NpZ", "By2d_sBef", "r1k7g8Oxz", "B1fsb6bWz" ]
iclr_2018_HkeJVllRW
Sparse-Complementary Convolution for Efficient Model Utilization on CNNs
We introduce an efficient way to increase the accuracy of convolution neural networks (CNNs) based on high model utilization without increasing any computational complexity. The proposed sparse-complementary convolution replaces regular convolution with sparse and complementary shapes of kernels, covering the same receptive field. By the nature of deep learning, high model utilization of a CNN can be achieved with more simpler kernels rather than fewer complex kernels. This simple but insightful model reuses of recent network architectures, ResNet and DenseNet, can provide better accuracy for most classification tasks (CIFAR-10/100 and ImageNet) compared to their baseline models. By simply replacing the convolution of a CNN with our sparse-complementary convolution, at the same FLOPs and parameters, we can improve top-1 accuracy on ImageNet by 0.33% and 0.18% for ResNet-101 and ResNet-152, respectively. A similar accuracy improvement could be gained by increasing the number of layers in those networks by ~1.5x.
rejected-papers
The paper studies factorizations of convolutional kernels. The proposed kernels lead to theoretical and practical efficiency improvements, but these improvements are very, very limited (for instance, Figure 5). It remains unclear how they compare to popular alternative approaches such as group convolutions (used in ResNeXt) or depth-separable convolutions (used in MobileNet). The reviewers identify a variety of smaller issues with the manuscript.
train
[ "BkdNAltlG", "rkoBp1qxM", "BJWR685ez", "B13t7GhGf", "SylLNMhff", "BkQZNf3zz", "rk6X7z2zf", "S1XPWG2ff" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author" ]
[ "\n\nThis paper presented interesting ideas to reduce the redundancy in convolution kernels. They are very close to existing algorithms.\n\n(1)\tThe SW-SC kernel (Figure 2 (a)) is an extension of the existing shaped kernel (Figure 1 (c)).\n(2)\tThe CW-SC kernel (Figure 2 (c)) is very similar to interleaved group convolutions. The CW-SC kernel can be regarded as a redundant version of interleaved group convolutions [1]. \n\nI would like to see more discussions on the relation to these methods and more strong arguments for convincing reviewers to accept this paper. \n\n[1] Interleaved Group Convolutions. Ting Zhang, Guo-Jun Qi, Bin Xiao, and Jingdong Wang. ICCV 2017. http://openaccess.thecvf.com/content_ICCV_2017/papers/Zhang_Interleaved_Group_Convolutions_ICCV_2017_paper.pdf", "This paper introduces a new design of kernels in convolutional neural networks. The idea is to have sparse but complementary kernels with predefined patterns, which altogether cover the same receptive field as dense kernels. Because of the sparsity of such kernels, deeper or wider networks can be designed at the same computational cost as networks with dense kernels.\n\nStrengths:\n- The complementary kernels come at no loss compare to standard ones\n- The resulting wider networks can achieve better accuracies than the original ones\n\nWeaknesses:\n- The proposed patterns are clear for 3x3 kernels, but no solution is proposed for other dimensions\n- The improvement over the baseline is not very impressive\n- There is no comparison against other strategies, such as 1xk and kx1 kernels (e.g., Ioannou et al. 2016)\n\nDetailed comments:\n- The separation into + and x patterns is quite clear for 3x3 kernels. However, two such patterns would not be sufficient for 5x5 or 7x7 kernels. This idea would have more impact if it generalized to arbitrary kernel dimensions.\n\n- The improvement over the original models are of the order of less than 1 percent. I understand that such improvements are not easy to achieve, but one could wonder if they are not due to the randomness of initialization/mini-batches. It would be more meaningful to report average accuracies and standard deviations over several runs of each experiment.\n\n- Section 4.4 briefly discusses the comparison with using 3x1 and 1x3 kernels, mentioning that an empirical comparison is beyond the scope of this paper. To me, this comparison is a must. In fact, the discussion in this section is not very clear to me, as it mentions additional experiments that I could not find (maybe I misunderstood the authors). What I would like to see is the results of a model based on the method of Ioannou et al, 2016 with the same number of FLOPS.\n\n- In Section 2, the authors review ideas of so-called random kernel sparsity. Note that the work of Wen et al., 2016, and that of Alvarez & Salzmann, NIPS 2016, do not really impose random sparsity, but rather aim to cancel out entire kernels, thus reducing the size of the model and not requiring implementation overhead. They also do not require pre-training and re-training, but just a single training procedure. Note also that these methods often tend not to decrease accuracy, but rather even increase it (by a similar magnitude to that in this paper), for a more compact model.\n\n- In the context of random sparsity, it would be worth citing the work of Collins & Kohli, 2014, Memory Bounded Deep Convolutional Networks.\n\n- I am not entirely convinced by the discussion of the grouped sparsity method in Section 3.1. 
In fact, the order of the channels is arbitrary, since the kernels are learnt. Therefore, it seems to me that they could achieve the same result. Maybe the authors can clarify this?\n\n- Is there a particular reason why the central points appears in both complementary kernels (+ and x)?\n\n- Why did the authors change the training procedure of ResNets slightly compared to the original paper, i.e., 50k training images instead of 45k training + 5k validation? Did the baseline (original model) reported here also use 50k? What would the results be with 45k?\n\n- Fig. 5 is not entirely clear to me. What was the width of each layer? The original one or the modified one?\n\n- It would be interesting to report the accuracy of a standard ResNet with 1.325*width as a comparison, as well as the runtime of such a model.\n\n- In Table 4, I find it surprising that there is an actual speedup for the model with larger width. I would have expected the same runtime. How do the authors explain this? \n", "Summary:\nThis paper proposed a sparse-complementary convolution as an alternative to the convolution operation in deep networks. In this method, two new types of kernels are developed, namely the spatial-wise and channel-wise sparse-complementary kernels. The authors argue that the proposed kernels are able to cover the same receptive field as the regular convolution with almost half the parameters. By adding more filters or layers in the model while keeping the same FLOPs and parameters, the models with the proposed method outperform the regular convolution models. The paper is easy to follow and the idea is interesting. However, the novelty of the paper is limited and the experiments are not sufficient.\n\nStrengths:\n1. The authors proposed the sparse-complementary convolution to cover the same receptive field as the regular convolution. \n\n2. The authors implement the proposed sparse-complementary convolution on NVIDIA GPU and achieved competitive speed under the same computational load to regular convolution.\n\n3. The authors demonstrated that, given the same resource budget, the wider networks with the proposed method are more efficient than the deeper networks due to the nature of GPU parallel mechanism.\n\nWeak points:\n\n1. The novelty of this paper is limited. The main idea is to design complementary kernels that cover the same receptive field as the regular convolution. However, the performance improvement is marginal and may come from the benefit of wide networks rather than the proposed complementary kernels. Moreover, the experiments are not sufficient to support the arguments. For example, how is the performance of a model containing SW-SC or CW-SC without deepening or widening the networks? Without such experiment, it is unclear whether the improved performance comes from the sparse-complementary kernels or the increased number of kernels.\n\n2. The relationship between the proposed spatial-wise kernels and the channel-wise kernels is not very clear. Which kernel is better and how to choose between them in a deep network? There is no experimental proof in the paper.\n\n3. The proposed two kernels introduce sparsity in the spatial and channel dimension, respectively. The two methods are used separately. Is it possible to combine them together?\n\n4. The proposed method only considers the “+-shape” and “x-shape” sparse pattern. Given the same receptive field with multiple complementary kernels, is the kernel shape important for the training? 
There is no experimental result to verify this.\n\n5. As mentioned in the paper, there are many methods which introduce sparsity in the convolution layer, such as “random kernels”, “low-rank approximated kernels” and “mixed-shape kernels”. However, there is no experimental comparison with these methods.\n\n6. In the paper, the author mentioned another sparse-complementary baseline (sc-seq), which applies sparse kernels sequentially. It yields smaller receptive field than the proposed method when the model depth is very small. Indeed, when the model goes deeper, the receptive field becomes very close to that of the proposed method. In the experiments, it is strange that this method can also achieve comparable or better results. So, what is the advantage of the proposed “sc” method compared to the “sc-seq” method?\n\n\n8. Figure 5 is hard to understand. This figure only shows that training shallower networks is more effective than training the deeper networks on GPU. However, it does not mean training the wider networks is more efficient than training the deeper ones.\n", "Thanks for all reviewers' feedback. The revision is available and modifications are highlighted in blue color", "6. The major difference between CW-SC and group convolution is the order of output feature maps. CW-SC embeds a permutation into convolution. Thus, if two CW-SC layers are deployed consecutively, it is different from deploying two consecutive group convolutions. E.g., for CW-SC, every feature map at layer L+2 gets the information of all feature maps at layer L; however, for group convolution, the feature map at layer L+2 only gets the information from the first half or the second half feature maps at layer L since the group convolution always use the particular partition of feature maps. (The paper is also revised accordingly.)\n\n7. The pattern design is inspired from two perspectives: (I) the observation from trained network with dense kernels, like VGG16 and VGG11, and (II) some preliminary empirical studies that the kernel without center point degrades performance. \n\n8. We train all baselines and our methods by ourselves, and all experiments on CIFAR-10/100 use 50k images. Thus, it should be a fair comparison and we do not expect significant difference if we use 45k image to train and test the models.\n\n9. We use Table 3 to replace original Figure 5 to clarify the results. We would like to compare the computational effectiveness between wider and deeper ResNets and both with SW-SC layers. For the wider networks, the w are set to 1.3125, for deeper networks, w are 1.0 but with more layers, and we tested on three configurations of base networks (ResNet-32, ResNet-110 and ResNet-164). The results show that no matter which configuration is, the wider network is always effective than deeper network at the same FLOPs. Please see Table 3 for details. \n\n10. We can expect that ResNet (w=1.3125) outperforms our sparse models with the same number of kernels but ResNet (w=1.3125) costs more parameters and computations. It is a similar comparison between ResNet (w=1.0) and ResNet-sc (w=1.0), and we add this comparison in Table 8. The results show that ResNet-sc (w=1.0) reduces the FLOPs and parameters but slightly degrades the performance; furthermore, Table 9 shows the speed comparison. For most cases, our SW-SC achieves theoretical speedup (1.8x) or even better (please refer to bullet 11 for the details of benchmark.)\n\n11. We would like to clarify how do we benchmark first. 
For the baseline, we directly use CUDNN 6.0 to run the CNN model without any modification; however, for our SW-SC approach, we customized our implementation with batched convolution to fully utilize GPU resources. Thus, CUDNN may not be fully optimized, but we do not have the ability to tune its performance since it is a proprietary product from NVIDIA. Our goal is to provide a proof of concept that this type of deterministic sparsity is really helpful on GPU as compared to kernels with random sparsity.\n", "Thanks for the reviewer's comments.\n\nThis paper does not aim to achieve a significant improvement over the state-of-the-art CNN models but rather rethinks how we design convolutional kernels. Do we always need dense kernels, or can we use sparser kernels to achieve better accuracy? The proposed method is orthogonal to any type of macro-architecture network design. (We select two state-of-the-art networks (ResNet and DenseNet) as our case study.)\n\nOur method is not restricted to 3x3 kernels. Based on the expressions in the manuscript, we define our sparse kernels by even and odd indices, and 3x3 is then a special case since its kernel shapes become `+` and `x`; therefore it can be extended to 5x5 or larger kernels. Nonetheless, as suggested in the VGG16 paper, 5x5 kernels can be achieved by two 3x3 kernels, and most state-of-the-art networks only use 3x3 kernels. We also deploy it on AlexNet (which has 5x5 kernels) to validate that our work can be extended to other kernel sizes.\n\nWe compare our results with Ioannou et al. 2016 (ICLR 2016) and interleaved group convolution (ICCV 2017) in Tables 6 and 7 in the revision.\n\nHere are our responses to the detailed comments:\n\n1. We deploy our idea on AlexNet, replacing all 3x3 and 5x5 kernels with our SW-SC kernels. Our AlexNet-sc-1.3125 achieves a 1.5% accuracy improvement at the same computational cost, which justifies that our approach can be deployed on larger kernels; however, 3x3 is preferred.\n\n2. Due to the limited computing resources we had, we reran the experiments of ResNet-101-sc-1.3125 two times; the average top-1 accuracy is 21.71%, which is still better than the baseline by 0.25%. For comparison, the performance difference between ResNet-101 and ResNet-152 is only 0.44%, which requires ~1.5x the FLOPs and parameters to achieve. Our approach still achieves a gain with the same macro-network structure without increasing the FLOPs and parameter overhead.\n\n3. We compare with Ioannou et al. 2016 on ResNet-50 (we trained the models ourselves). We evaluate two configurations of their work: (I) replace 3x3 convolutions by 1x3, 3x1 and 1x1 convolutions, and (II) replace 3x3 convolutions by only 1x3 and 3x1 convolutions. We also increase the width to align the FLOPs and parameters. Under the same FLOPs, their work achieved 24.46% and 24.17% top-1 error rates for configurations (I) and (II), respectively. The results are worse than the baseline and ours. The bottleneck topology in a residual block embeds the features into a low-dimensional space, and simply using 1x3 and 3x1 convolutions is not sufficient to extract discriminative features in that low-dimensional space (note that 1x3 and 3x1 are rank-1 kernels and not complementary); however, our kernels are rank-2 and rank-3 kernels and they are complementary, which provides a better approximation; thus, our approach learns better feature representations for better performance. (See Table 6 in the revision.)\n\n4. 
Yes, for those referenced works (Wen et al. and Alvarez & Salzmann), their sparsity is structural but still random, which means the sparsity is not fixed across different models. They show speedups on CPU and GPU in their papers since CPU and GPU are flexible enough to prepare the data for different structural sparse convolutions, but for a customized neural network accelerator, those random but structural sparse convolutions will still bring overhead in configuring the processing elements. Thus, we want our kernels to be deterministic across different models, so that we are able to provide regularity for both GPU implementations and customized neural network accelerators, which might be realized on an ASIC or FPGA; furthermore, more and more IoT devices are able to run inference on the edge, and without deterministic sparsity, those edge devices might gain limited advantages from random but structural sparsity.\n\n5. This paper is cited in the revision and discussed in the 'Related Works' section.", "Regarding the weak points addressed by the reviewer:\n\nWe would like to restate our goals in this paper first. This paper does not aim to achieve a significant improvement over the state-of-the-art CNN models but rather to rethink how we design convolutional kernels. Do we always need dense kernels, or can we use sparser kernels to achieve better accuracy? The proposed method is orthogonal to any type of macro-architecture network design. (We select two state-of-the-art networks (ResNet and DenseNet) as our case study.) Furthermore, we add more experiments to study the proposed method and compare with recent work on mixed-shape kernels (ICLR 2016) and interleaved group convolution (ICCV 2017).\n\nHere are our responses:\n\n1. This paper aims to explore a better way to utilize the parameters in a CNN model; conventionally, dense kernels are used but there is too much redundancy (the redundant parameters still improve performance, but only marginally). By simply using SW-SC kernels in the original networks, the performance would be degraded but the complexity is also reduced, i.e., a trade-off between complexity and accuracy. We add the comparison between ResNet and ResNet-sc (w=1.0) to discuss the effect of SW-SC kernels (see Table 8 in the supplementary section). As expected, ResNet-sc (w=1.0) reduces FLOPs and parameters with a slight performance degradation. \n\n2. SW-SC and CW-SC are orthogonal to each other: SW-SC sparsifies kernels in the spatial domain, while CW-SC sparsifies kernels in the channel domain. As our best practice, SW-SC is used for kxk kernels (k > 1) and CW-SC for 1x1 kernels, as in all our experiments. The reason is that for computer vision tasks, SW-SC is a simplified way to extract spatial context and CW-SC is a simplified way to fuse different feature maps. We also provide some results combining them, and the results show that our original setting is simple and achieves similar results (see Table 8 in the supplementary section.)\n\n3. It is possible to combine them since SW-SC and CW-SC are orthogonal. There is no conflict between them. However, by combining them, the kernel might be too sparse to capture features. As the best practice, we deploy SW-SC for kxk kernels (k > 1) and CW-SC for 1x1 kernels. There are some preliminary experiments on combining them in Table 8 in the supplementary section.\n\n4. 
Ideally, if the sparse shapes are complementary, we can stack more layers to achieve the same receptive field; however, the shape of the base kernel still matters. We experimented with another pair of sparse and complementary kernels, and the performance degraded, which empirically justifies that the shape of the sparse kernel should follow the nature of computer vision to extract meaningful features. (See Table 8 and Figure 6 in the supplementary section.)\n\n5. We compare with Ioannou et al. 2016 on ResNet-50 (we trained the models ourselves). We evaluate two configurations of their work: (I) replace 3x3 convolutions by 1x3, 3x1 and 1x1 convolutions, and (II) replace 3x3 convolutions by only 1x3 and 3x1 convolutions. We also increase the width to align the FLOPs and parameters. Under the same FLOPs, their work achieved 24.46% and 24.17% top-1 error rates for configurations (I) and (II), respectively. The results are worse than the baseline and ours. The bottleneck topology in a residual block embeds the features into a low-dimensional space, and simply using 1x3 and 3x1 convolutions is not sufficient to extract discriminative features in that low-dimensional space (note that 1x3 and 3x1 are rank-1 kernels and not complementary); however, our kernels are rank-2 and rank-3 kernels and they are complementary, which provides a better approximation; thus, our approach learns better feature representations for better performance. (See Table 6 in the revision.)\n\n6. As the reviewer pointed out, our approach performs better for shallow networks. `sc-seq` might achieve competitive performance when the network is deep. A shallower network uses fewer resources in both training and deployment; e.g., the system requires fewer buffers for storing the features in the intermediate layers, and the amount of weight loading is lower as well. We also tried different widening ratios (2.625 and 3.9375) for ResNet-32-sc and ResNet-32-sc-seq on CIFAR-10/100, and ResNet-32-sc always outperforms ResNet-32-sc-seq.\n\n\n8. We use Table 3 to replace the original Figure 5 to clarify the results. We would like to compare the computational effectiveness of wider and deeper ResNets, both with SW-SC layers. For the wider networks, \"w\" is set to 1.3125; for the deeper networks, \"w\" is 1.0 but with more layers; and we tested three configurations of base networks (ResNet-32, ResNet-110 and ResNet-164). The results show that no matter which configuration is used, the wider network is always more effective than the deeper network at the same FLOPs. Please see Table 3 for details. ", "Thanks for your comments. \n\nInterleaved group convolution (IGC) is composed of two group convolutions followed by permutations, and the group numbers of the first group convolution and the second group convolution are correlated. (The group number of the second group convolution is a factor of the number of channels in a group of the first group convolution.)\n\nTherefore, one clear difference of our paper from IGC is the SW-SC kernels (mixed-shape kernels), which are orthogonal to each other; on the other hand, our CW-SC is a group convolution that embeds a permutation of the output feature maps; hence, CW-SC ensures that each feature map at layer L+2 can get information from all feature maps at layer L. \n\nLogically, CW-SC might be reduced to an IGC whose group numbers are 2 for both group convolutions; however, there is a small difference in implementation. 
IGC splits the group convolution and the permutation into two stages, whereas CW-SC embeds the permutation into the convolution, which provides more advantages for hardware implementation (ASIC and FPGA). The permutation process usually involves data movement, and hence a temporary buffer might be required to swap data to different addresses. Thus, by embedding the permutation into the convolution, our CW-SC has the potential to be more efficient than IGC in implementation even though they achieve identical algorithmic performance.\n\nOn the other hand, we compare our SW-SC with IGC under similar FLOPs and parameters for the ResNet-18 network. Our ResNet-18-sc (w=1.3125) outperforms IGCs (IGC-L4M32+Identity and IGC-L16M16+Identity) on the ImageNet dataset by ~1 to 2% without introducing extra FLOPs and parameters (see Table 7 in the revision for details), except for the extreme case of IGC (IGC-L100M2), which is similar to XceptionNet. Furthermore, we also integrate our SW-SC with IGC (IGC-L16M16+Identity); this combination boosts the performance by another 1.8% over the original IGC, which justifies the orthogonality between the two works." ]
[ 6, 5, 5, -1, -1, -1, -1, -1 ]
[ 5, 4, 4, -1, -1, -1, -1, -1 ]
[ "iclr_2018_HkeJVllRW", "iclr_2018_HkeJVllRW", "iclr_2018_HkeJVllRW", "iclr_2018_HkeJVllRW", "rkoBp1qxM", "rkoBp1qxM", "BJWR685ez", "BkdNAltlG" ]
iclr_2018_ByqFhGZCW
MACHINE VS MACHINE: MINIMAX-OPTIMAL DEFENSE AGAINST ADVERSARIAL EXAMPLES
Recently, researchers have discovered that the state-of-the-art object classifiers can be fooled easily by small perturbations in the input unnoticeable to human eyes. It is known that an attacker can generate strong adversarial examples if she knows the classifier parameters. Conversely, a defender can robustify the classifier by retraining if she has the adversarial examples. The cat-and-mouse game nature of attacks and defenses raises the question of the presence of equilibria in the dynamics. In this paper, we present a neural-network based attack class to approximate a larger but intractable class of attacks, and formulate the attacker-defender interaction as a zero-sum leader-follower game. We present sensitivity-penalized optimization algorithms to find minimax solutions, which are the best worst-case defenses against whitebox attacks. Advantages of the learning-based attacks and defenses compared to gradient-based attacks and defenses are demonstrated with MNIST and CIFAR-10.
rejected-papers
The paper studies adversarial attacks and defenses against convolutional networks based on a minimax formulation of the problem. Whilst this is an interesting direction of research, the present paper seems preliminary. In particular, compared to several other independent ICLR submissions, the empirical evaluation is quite weak: it does not consider the strongest known gradient-based attack (Carlini-Wagner) as a baseline and does not report results on ImageNet. The reviewers identify several issues related to Lemma 1 and to the clarity of presentation.
train
[ "B1kbvzdxz", "B1oPNGYez", "B1VlMxjlG", "HkCRNN6XM", "HJRfymTXM", "SkfZVe6mz", "H1tsY-hQz", "SyJ9m4CWz", "rymYGVR-f", "HkLIb4Cbz", "rygOMUuCb" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "public", "author", "public", "author", "author", "author", "author", "public" ]
[ "The game-theoretic approach to attacks with / defense against adversarial examples is an important direction of the security of deep learning and I appreciate the authors to initiate this kind of study. \n\nLemma 1 summarizes properties of the solutions that are expected to have after reaching equilibria. Important properties of saddle points in the min-max/max-min analysis assume that the function is convex/concave w.r.t. to the target variable. In case of deep learning, the convexity is not guaranteed and the resulting solutions do not have necessarily follow Lemma 1. Nonetheless, this type of analysis can be useful under appropriate solutions if non-trivial claims are derived; however, Lemma 1 simply explains basic properties of the min-max solutions and max-min solutions works and does not contain non-tibial claims.\n\nAs long as the analysis is experimental, the state of the art should be considered. As long as the reviewer knows, the CW attack gives the most powerful attack and this should be considered for comparison. The results with MNIST and CIFAR-10 are different. In some cases, MNIST is too easy to consider the complex structure of deep architectures. I prefer to have discussions on experimental results with both datasets.\n\nThe main takeaway from the entire paper is not clear very much. It contains a game-theoretic framework of adversarial examples/training, novel attack method, and many experimental results.\n\nMinor:\nDefinition of g in the beginning of Sec 3.1 seems to be a typo. What is u? This is revealed in the latter sections but should be specified here.\n\nIn Section 3.1, \n>This is in stark contrast with the near-perfect misclassification of the undefended classifier in Table 1.\nThe results shown in the table seems to indicate the “perfect” misclassification.\n\nSentence after eq. 15 seems to contain a grammatical error\n\nThe paragraph after eq. 17 is duplicated with a paragraph introduced before\n", "This paper presents a sensitivity-penalized loss (the loss of the classifier has an additional term in squart of the gradient of the classifier w.r.t. perturbations of the inputs), and a minimax (or maximin) driven algorithm to find attacks and defenses. It has a lemma which claims that the \"minimax and the maximin solutions provide the best worst-case defense and attack models, respectively\", without proof, although that statement is supported experimentally.\n\n+ Prior work seem adequately cited and compared to, but I am not really knowledgeable in the adversarial attacks subdomain.\n- The experiments are on small/limited datasets (MNIST and CIFAR-10). Because of this, confidence intervals (over different initializations, for instance) would be a nice addition to Table 5.\n- There is no exact (\"alternating optimization\" could be considered one) evaluation of the impact of the sensitivy loss vs. the minimax/maximin algorithm.\n- The paper is hard to follow at times (and probably that dealing with the point above would help in this regard), e.g. 
Lemma 1 and experimental analysis.\n- It is unclear (from Figures 3 and 7) that \"alternative optimization\" and \"minimax\" converged fully, and/or that the sets of hyperparameters were optimal.\n+ This paper presents a game formulation of learning-based attacks and defense in the context of adversarial examples for neural networks, and empirical findings support its claims.\n\n\nNitpicks:\nthe gradient descent -> gradient descent or the gradient descent algorithm\nseeming -> seemingly\narbitrary flexible -> arbitrarily flexible\ncan name \"gradient descent that maximizes\": gradient ascent.\nThe mini- max or the maximin solution is defined -> are defined\nis the follow -> is the follower\n", "The authors describe a mechanism for defending against adversarial learning attacks on classifiers. They first consider the dynamics generated by the following procedure. They begin by training a classifier, generating attack samples using FGSM, then hardening the classifier by retraining with adversarial samples, generating new attack samples for the retrained classifier, and repeating. \n\nThey next observe that since FGSM is given by a simple perturbation of the sample point by the gradient of the loss, that the fixed point of the above dynamics can be optimized for directly using gradient descent. They call this approach Sens FGSM, and evaluate it empirically against the various iterates of the above approach. \n\nThey then generalize this approach to an arbitrary attacker strategy given by some parameter vector (e.g. a neural net for generating adversarial samples). In this case, the attacker and defender are playing a minimax game, and the authors propose finding the minimax (or maximin) parameters using an algorithm which alternates between maximization and minimization gradient steps. They conclude with empirical observations about the performance of this algorithm.\n\nThe paper is well-written and easy to follow. However, I found the empirical results to be a little underwhelming. Sens-FGSM outperforms the adversarial training defenses tuned for the “wrong” iteration, but it does not appear to perform particularly well with error rates well above 20%. How does it stack up against other defense approaches (e.g. https://arxiv.org/pdf/1705.09064.pdf)? Furthermore, what is the significance of FGSM-curr (FGSM-81) for Sens-FGSM? It is my understanding that Sens-FGSM is not trained to a particular iteration of the “cat-and-mouse” game. Why, then, does Sens-FGSM provide a consistently better defense against FGSM-81? With regards to the second part of the paper, using gradient methods to solve a minimax problem is not especially novel (i.e. Goodfellow et al.), thus I would liked to see more thorough experiments here as well. For example, it’s unlikely that the defender would ever know the attack network utilized by an attacker. How robust is the defense against samples generated by a different attack network? The authors seem to address this in section 5 by stating that the minimax solution is not meaningful for other network classes. However, this is a bit unsatisfying. Any defense can be *evaluated* against samples generated by any attacker strategy. Is it the case that the defenses fall flat against samples generated by different architectures? \n\n\nMinor Comments:\nSection 3.1, First Line. ”f(ul(g(x),y))” appears to be a mistake.", "That does clarify my misunderstanding. 
For a moment I thought the perturbations are scalar multiples of raw gradients, but yes it is FGSM indeed.\nThanks for your response!", "Thanks again for the comment.\nWe don't clearly understand the question since the Lp norm of the perturbation is always normalized to 1 in this paper. \nAn adversarial sample z is z = x + eta*q, where x is the original image, eta is the perturbation strength, and q is the perturbation pattern with \\|q\\|_p=1. For FGSM, the pattern q = sign(grad loss) has a unit L-inf norm. After sensitivity training, the new q=sign(grad new loss) with a unit L-inf norm, cannot affect the new classifier as much as the old q (with a unit L-inf norm) could affect the undefended classifier using the same eta value. It would have to use a larger eta value.\nWe hope this clarifies the issue.\n\n\n\n\n \n", "Thanks a lot for your kind explanations.\n\nI find it hard to agree with the last sentence though -- \"A classifier with a lower sensitivity is certainly more robust to gradient-based attacks since it causes an attacker to use a stronger (eta) perturbation to fool the classifier. \"\n\nTo me, eta does not tell how strong the perturbation is -- the Lp norm of the perturbation does. If a network has been trained with the sensitivity penalty, then the image gradient itself will be downscaled. Then, even if a larger value of eta is applied, the Lp norm of the perturbation could still be smaller. My question is, does the sensitivity penalty improve the robustness of a network against attacks with the same Lp norm, rather than the same eta value (which is done in this work; please correct me if I'm wrong). If the robustness in this case remains the same, then I wouldn't say the sensitivity penalty works as a defence strategy.", "Thank you for your interest and comments. The ICCV'17 paper is certainly interesting and is in line with the main idea of the current paper that the adversarial example problems should be viewed as an attacker-defender game. \nThe main difference between the ICCV'17 paper and the current paper is that we propose continuous minimax problems and new optimization algorithms as opposed to the classic discrete minimax problems on probability simplices used in the ICCV'17 paper. We will update our related work in the final version. \n\nRegarding local optima and the guarantee: We cannot at present expect to have the flexibility of non-convex models and the global optimality of convex-concave models at the same time. But in practice, we often observe that a local optimum of a neural network performs nearly as well as any other local optimum. It would be very desirable to have rigorous and tight bounds on the performance of deep neural networks. \n\nRegarding the formulation in eq7: We haven't tested it, but we believe that we can cause misclassification eventually by increasing eta. However, it also increases detectability of the presence of perturbation which defeats the purpose.\nA classifier with a lower sensitivity is certainly more robust to gradient-based attacks since it causes an attacker to use a stronger (eta) perturbation to fool the classifier. \n\n\n", "<Common>\n\nWe thank all the reviewers for important suggestions.\nWe could see where the submitted version was unclear or has caused confusions. \nFollowing the comments, we EXTENSIVELY revised the paper, re-ran the experiments and reported additional results to answer the questions. 
\nIn particular, we show how the proposed minimax algorithm gives us better results than alternating descent/ascent used in GAN training, and how the class of neural-net based attacks is more general than the class of gradient-based attacks.\n\nSince we believe most of the questions are now addressed in the submitted revision, we politely ask the reviewers for updating their evaluations.\n\n\n<Reviewer 3>\n\n\"Lemma 1 simply explains basic properties of the min-max and max-min solutions and does not contain non-trivial claims.\"\n\nMathematically they are straightforward, although they have not been applied in this domain before.\nWe agree this is not the essence of the experiments, and the new Table 5 now has more conclusive experimental results.\n\n\n\"... the CW attack gives the most powerful attack and this should be considered for comparison.\"\n\nAFAIK, there is no particularly effective method against optimization-based attacks [Huang'15, CW'15] when eta is large. In the revision, we discuss how the neural-network based attack class is an approximation of a much larger class of attacks such as CW, and how the approximation allows us to practically find minimax defenses. Direct adversarial training against optimization-based attacks [Huang'15] does not work, as shown in the new Table 3 (LWA FGSM). \n\n\"The results with MNIST and CIFAR-10 are different. In some cases, MNIST is too easy to consider the complex structure of deep architectures. I prefer to have discussions on experimental results with both datasets.\"\n\nWe understand, but the paper is already 12 pages without the CIFAR-10 results. As one can see, our conclusions on the MNIST results are also applicable to the corresponding CIFAR-10 results, except that the error rates of different defenses/attacks are not as spread out as on MNIST. \n\n\n\"The main takeaway ... is not clear\"\n\nWe extensively revised the paper as well as reported important missing results. The key messages are 1) optimal defense-attack has to be studied as a dynamic problem, 2) we provide analytical and numerical tools to study them, and 3) the minimax defense is empirically better than previous adversarially-trained classifiers or the results of optimization without sensitivity terms. \n\n\n\"The paragraph after eq. 17 duplicates a paragraph introduced before\"\n\nThe paragraph (about maximin) is not the same as the previous paragraph (about minimax). They are exactly the opposite.\n", "<Common>\n\nWe thank all the reviewers for important suggestions.\nWe could see where the submitted version was unclear or has caused confusions. \nFollowing the comments, we EXTENSIVELY revised the paper, re-ran the experiments and reported additional results to answer the questions. \nIn particular, we show how the proposed minimax algorithm gives us better results than alternating descent/ascent used in GAN training, and how the class of neural-net based attacks is more general than the class of gradient-based attacks.\n\nSince we believe most of the questions are now addressed in the submitted revision, we politely ask the reviewers for updating their evaluations.\n\n\n\n<Reviewer 2>\n\n\"The experiments are on small/limited datasets (MNIST and CIFAR-10). 
Because of this, confidence intervals (over different \ninitializations, for instance) would be a nice addition to Table 5.\"\n\nWe are in the process of repeating all the experiments and will report them as soon as they are available.\n\n\n\n\"There is no exact evaluation of the impact of the sensitivity loss vs. the minimax/maximin algorithm.\"\n\nAs for adversarial training against gradient-type attacks (Sec 3), the new Table 3 compares the classifiers trained with (Sens-FGSM) and without (LWA-FGSM) the sensitivity term, where the latter procedure is similar to Huang et al.'15. Sens-FGSM performs slightly better than LWA-FGSM.\nAs for training against learning-based attacks (Sec 4), the new Table 5 compares the classifiers trained with (Minimax) and without (Alt) sensitivity term. The minimax solutions are shown to be more robust than the alt solutions. Fig 3 also shows that the solutions under the two methods converge to very different values.\n\n\n\"...hard to follow ... Lemma 1 and experimental analysis.\"\n\nLemma 1 follows simply from the definition and it was not the essence of the experiments. We replaced it with Table 5 which has more conclusive experimental results.\n\n\n\"It is unclear (from Figures 3 and 7) that \"alternative optimization\" and \"minimax\" converged fully, and/or that the sets of hyperparameters were optimal.\"\n\nWe tested the algorithms with different hyperparameters which did not improve the convergence speed. Instead, we now report the results with a 3-4 times larger number of iterations than before. \n\n", "<Common>\n\nWe thank all the reviewers for important suggestions.\nWe could see where the submitted version was unclear or has caused confusions. \nFollowing the comments, we EXTENSIVELY revised the paper, re-ran the experiments and reported additional results to answer the questions. \nIn particular, we show how the proposed minimax algorithm gives us better results than alternating descent/ascent used in GAN training, and how the class of neural-net based attacks is more general than the class of gradient-based attacks.\n\nSince we believe most of the questions are now addressed in the submitted revision, we politely ask the reviewers for updating their evaluations.\n\n\n<Reviewer 1>\n\n\"Sens-FGSM ... does not appear to perform particularly well with error rates well above 20%.\"\n\nYes, that is true. With large eta's, hardening a classifier against all FGSM attacks by adversarial training is difficult, regardless of whether sensitivity norm is used or not. \n\n\n\"https://arxiv.org/pdf/1705.09064.pdf\"\n\nIt is now included in the revision along with two other papers:\n\"...A few researchers have also proposed using a detector to detect and reject adversarial examples \\citep{meng2017magnet,lu2017safetynet,metzen2017detecting}. While we do not use detectors in this work, the minimax idea can be applied to train the detectors similarly.\"\n\n\n\"Why ... does Sens-FGSM provide a consistently better defense aginst FGSM-81?\"\n\nIt's a misunderstanding. FGSM-curr for Sens-FGSM is the attack on the current parameter and not the same as FGSM-81.\nAnyway, Sens-FGSM is consistently better because it is trained so that the loss gradient is small. \n\n\n\"... using gradient methods to solve a minimax problem is not especially novel (i.e. Goodfellow et al.)\"\n\nIt is not true. Alternating descent/ascent used in GAN cannot find minimax solutions but only local saddle points.\nSaddle points can be minimax, maximin or neither. 
They are the same only when f(u,v) is convex in u and concave in v.\nEmpirically, the minimax and the alternating methods converge to very different values (Fig 3), and the minimax solutions are shown to be more robust than the alt solutions in the new Table 5.\n\n\n\"it’s unlikely that the defender would ever know the attack network utilized by an attacker.\"\n\nYes. The maximin case is the other extreme, which is more hypothetical than realistic. However, it gives a lower bound.\n\n \n\"How robust is the defense against samples generated by a different attack network?\" \n\"The authors ... state that the minimax solution is not meaningful for other network classes\"\n\"Is ... the defenses fall flat against samples generated by different architectures?\"\n\nSorry for the confusion. We unnecessarily overstated the limitations of the minimax defense. It can certainly be evaluated against any other attack. We show in Table 5 that minimax-trained classifiers are still moderately robust to out-of-class FGSM attacks, whereas FGSM-trained classifiers fail utterly against neural-net based attacks. Evaluation with a different neural network architecture is underway.\n", "Thanks a lot for your great work! I think game theory is really one of the few valid ways to study attacks & defenses regarding adversarial examples, as opposed to the \"cat-and-mouse game\" we see in this field these days. Honestly, it is really becoming harder to trust papers saying \"we have a great defense mechanism\" or \"we have a great attack method\". \n\nAlong a similar line of reasoning, we have published a paper at ICCV'17, \"Adversarial Image Perturbation for Privacy Protection -- A Game Theory Perspective\". We have also proposed a game theoretic framework to find the equilibrium in the dynamics between user and recogniser, trying to thwart/re-enable recognition. Perhaps this paper should also be mentioned in the related work!\n\nI'd like to point out some issues on which I'd like to hear your response. The first one is the term \"best worst-case defense and attack\". I feel this is contradictory to the fact that \"we can only find local solutions in practice for complex loss functions such as deep networks-based defenders and attackers\" (sec4.2). And this is also to me the biggest hurdle for using game theory with non-convex rewards under this security/privacy setup -- the equilibria, or the saddle points, do not guarantee anything, making the game theoretic analysis inconclusive.\n\nMaybe a minor issue: while I like the cleanness of the formulation in eq7 (\"sensitivity penality\"), it eventually just tries to scale down the image gradients around the training data points (or hopefully around the entire data distribution). So, when FGSM is applied again to sensitivity-penalised networks, wouldn't FGSM with a larger step size (eta) re-enable a high original error rate? Do you have any preliminary results?\n\nWhile game theory has limitations (in that it's hard to guarantee upper/lower bounds in a non-convex setup), I still think game theory is great in spelling out assumptions explicitly (as we have argued in our ICCV'17 paper). I appreciate that the authors have really discussed the limitations in sec5.1. Overall, I really enjoyed the paper!" ]
[ 5, 6, 5, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 3, 3, 4, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_ByqFhGZCW", "iclr_2018_ByqFhGZCW", "iclr_2018_ByqFhGZCW", "HJRfymTXM", "SkfZVe6mz", "H1tsY-hQz", "rygOMUuCb", "B1kbvzdxz", "B1oPNGYez", "B1VlMxjlG", "iclr_2018_ByqFhGZCW" ]
iclr_2018_ryvxcPeAb
Enhancing the Transferability of Adversarial Examples with Noise Reduced Gradient
Deep neural networks provide state-of-the-art performance for many applications of interest. Unfortunately, they are known to be vulnerable to adversarial examples, formed by applying small but malicious perturbations to the original inputs. Moreover, the perturbations can transfer across models: adversarial examples generated for a specific model will often mislead other unseen models. Consequently, the adversary can leverage this to attack deployed black-box systems. In this work, we demonstrate that the adversarial perturbation can be decomposed into two components, a model-specific and a data-dependent one, and it is the latter that mainly contributes to the transferability. Motivated by this understanding, we propose to craft adversarial examples by utilizing the noise reduced gradient (NRG), which approximates the data-dependent component. Experiments on various classification models trained on ImageNet demonstrate that the new approach enhances the transferability dramatically. We also find that low-capacity models have more powerful attack capability than their high-capacity counterparts, under the condition that they have comparable test performance. These insights give rise to a principled way to construct adversarial examples with high success rates and could potentially provide guidance for designing effective defense approaches against black-box attacks.
rejected-papers
The paper studies the transferability of adversarial examples between model architectures, and proposes a method to improve this transferability. Whilst it covers an interesting and relevant line of research, the paper does not provide strong evidence for its main underlying hypothesis: namely, that adversarial perturbations can be split into a model-specific and a data-specific part. The paper's presentation also warrants improvements. The authors have not provided a rebuttal.
train
[ "rkzeadBxf", "SJIOPWdgf", "rkKt2t2xz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper postulates that an adversarial perturbation consists of a model-specific and data-specific component, and that amplification of the latter is best suited for adversarial attacks.\n\nThis paper has many grammatical errors. The article is almost always missing from nouns. Some of the sentences need changing. For example:\n\n\"training model paramater\" --> \"training model parameters\" (assuming the neural networks have more than 1 parameter)\n\"same or similar dataset with\" --> \"same or a similar dataset to\"\n\"human eyes\" --> \"the human eye\"!\n\"in analogous to\" --> \"analogous to\"\n\"start-of-the-art\" --> \"state-of-the-art\"\n\nSome roughly chronological comments follow:\n\nIn equation (1) although it is obvious that y is the output of f, you should define it. As you are considering the single highest-scoring class, there should probably be an argmax somewhere.\n\n\"The best metric should be human eyes, which is unfortunately difficult to quantify\". I don't recommend that you quantify things in terms of eyes.\n\nIn section 3.1 I am not convinced there is yet sufficient justification to claim that grad(f||)^A is aligned with the inter-class deviation. It would be helpful to put equation (8) here. The \"human\" line on figure 1a doesn't make much sense. By u & v in the figure 1 caption you presumably the x and y axes on the plot. These should be labelled.\n\nIn section 4 you write \"it is meaningless to construct adversarial perturbations for the images that target models cannot classify correctly\". I'm not sure this is true. Imagenet has a *lot* of dog breeds. For an adversarial attack, it may be advantageous to change the classification from \"wrong breed of dog\" to \"not a dog at all\".\n\nSomething that concerns me is that, although your methods produce good results, it looks like the hyperparameters are chosen so as to overfit to the data (please do correct me if this is not the case). A better procedure would be to split the imagenet validation set in two and optimise the hyperparameters on one split, and test on the second. You also \"try lots of \\alphas\", which again seems like overfitting.\n\nTarget attack experiments are missing from 5.1, in 5.2 you write that it is a harder problem so it is omitted. I would argue it is still worth presenting these results even if they are less flattering.\n\nSection 6.2 feels out of place and disjointed from the narrative of the paper.\n\nA lot of choices in Section 6 feel arbitrary. In 6.3, why is resnet34 the chosen source model? In 6.4 why do you select those two target models?\n\nI think this paper contains an interesting idea, but suffers from poor writing and unprincipled experimentation. I therefore recommend it be rejected.\n\nPros:\n- Promising results\n- Good summary of adversarial methods\n\nCons:\n- Poorly written\n- Appears to overfit to the test data\n", "This paper focuses on enhancing the transferability of adversarial examples from one model to another model. The main contribution of this paper is to factorize the adversarial perturbation direction into model-specific and data-dependent. Motivated by finding the data-dependent direction, the paper proposes the noise reduced gradient method. \n\nThe paper is not mature. The authors need to justify their arguments in a more rigorous way, like why data-dependent direction can be obtained by averaging; is it true factorization of the perturbation direction? i.e. is the orthogonal direction is indeed model-specific? 
Most of the explanations are not rigorous and kind of superficial.\n\n\n", "The problem of exploring the cross-model (and cross-dataset) generalization of adversarial examples is a relatively neglected topic. However, the paper's list of related work on that topic is a bit lacking, as in section 3.1 it omits referencing the \"Explaining and Harnessing...\" paper by Goodfellow et al., which presented the first convincing attempt at explaining cross-model generalization of the examples.\n\nHowever, this paper seems to extend the explanation by a more principled study of the cross-model generalization. Again, Section 3.1 presents a hypothesis on splitting the space of adversarial perturbations into two sub-manifolds. However, this hypothesis seems like a tautology, as the splitting is engineered in a way to formally describe the informal statement. Anyway, the paper introduces useful terminology to aid analysis and engineer examples with improved generalization across models.\n\nIn the same vein, Section 3.2 presents another hypothesis, but it is claimed as fact. It claims that the model-dependent component of adversarial examples is dominated by images with high-frequency noise. This is a relatively unfounded statement, not backed up by any qualitative or quantitative evidence.\n\nMotivated by the observation that most newly generated adversarial examples are perturbations by high-frequency noise and that this noise is often model-specific (which is not measured or studied sufficiently in the paper), the paper suggests adding a noise term to the FGS and IGSM methods and gives extensive experimental evidence on a variety of models on ImageNet demonstrating that the transferability of the newly generated examples is improved.\n\nI am on the fence with this paper. It certainly studies an important, somewhat neglected aspect of adversarial examples, but mostly speculatively, and the experimental results study the resulting algorithm rather than trying to verify the hypotheses on which those algorithms are based.\n\nOn the plus side, the paper presents very strong practical evidence that the transferability of the examples can be enhanced significantly by such a simple methodology.\n\nI think the paper would be much more compelling (and should be accepted) if it contained a more disciplined study of the hypotheses on which the methodology is based." ]
[ 4, 5, 5 ]
[ 4, 3, 4 ]
[ "iclr_2018_ryvxcPeAb", "iclr_2018_ryvxcPeAb", "iclr_2018_ryvxcPeAb" ]
iclr_2018_HJdXGy1RW
CrescendoNet: A Simple Deep Convolutional Neural Network with Ensemble Behavior
We introduce a new deep convolutional neural network, CrescendoNet, by stacking simple building blocks without residual connections. Each Crescendo block contains independent convolution paths with increasing depths. The numbers of convolution layers and parameters increase only linearly in Crescendo blocks. In experiments, CrescendoNet with only 15 layers outperforms almost all networks without residual connections on the benchmark datasets CIFAR10, CIFAR100, and SVHN. Given a sufficient amount of data, as in the SVHN dataset, CrescendoNet with 15 layers and 4.1M parameters can match the performance of DenseNet-BC with 250 layers and 15.3M parameters. CrescendoNet provides a new way to construct high-performance deep convolutional neural networks without residual connections. Moreover, through investigating the behavior and performance of subnetworks in CrescendoNet, we note that the high performance of CrescendoNet may come from its implicit ensemble behavior, which differs from that of FractalNet, also a deep convolutional neural network without residual connections. Furthermore, the independence between paths in CrescendoNet allows us to introduce a new path-wise training procedure, which can reduce the memory needed for training.
rejected-papers
The paper proposes a new convolutional network architecture, called CrescendoNet. Whilst the proposed model achieves competitive performance on CIFAR-10 and SVHN, its accuracy on CIFAR-100 is substantially lower than that of state-of-the-art models with fewer parameters; the paper presents no experimental results on ImageNet. The proposed architecture does not provide clear new insights or successful new design principles. This makes it unlikely that the current manuscript will have a lot of impact.
train
[ "Bk1jsRM1f", "HJwO67Klz", "H1vOPPFez", "r1-0rdbEz", "SJIR8G67G", "B1rJLMa7f", "HJifHMaXf", "rkpSQMrxG", "B1ZpEWrlz", "BJKHrDNgz", "SJZn-KzxG", "HynVX7Xyz", "B17ZhCMyG", "BJkODoG1z", "BJSwy5MJz", "BylimFMJf", "Bymd5_fyG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "public", "author", "public", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer" ]
[ "The paper presents a new CNN architecture: CrescendoNet. It does not have skip connections yet performs quite well.\n\nOverall, I think the contributions of this paper are too marginal for acceptance in a top tier conference.\n\nThe architecture is competitive on SVHN and CIFAR 10 but not on CIFAR 100. The performance is not strong enough to warrant acceptance by itself.\n\nFractalNets amd DiracNets (https://arxiv.org/pdf/1706.00388.pdf) have demonstrated that it is possible to train deep networks without skip connections and achieve high performance. While CrescendoNet seems to slightly outperform FractalNet in the experiments conducted, it is itself outperformed by DiracNet. Hence, CrescendoNet does not have the best performance among skip connection free networks.\n\nYou claim that FractalNet shows no ensemble behavior. This is clearly not true because FractalNet has ensembling directly built in, i.e. different paths in the network are explicitly averaged. If averaging paths leads to ensembling in CrescendoNet, it leads to ensembling in FractalNet. While the longest path in FractalNet is stronger than the other members of the ensemble, it is nevertheless an ensemble. Besides, as Veit showed, ResNet also shows ensemble behavior. Hence, using ensembling in deep networks is not a significant contribution.\n\nThe authors claim that \"Through our analysis and experiments, we note that the implicit ensemble behavior of CrescendoNet leads to high performance\". I don't think the experiments show that ensemble behavior leads to high performance. Just because a network performs averaging of different paths and individual paths perform worse than sets of paths doesn't imply that ensembling as a mechanism is in fact the cause of the performance of the entire architecture. Similary, you say \"On the other hand, the ensemble model can explain the performance improvement easily.\" Veit et al only claimed that ensembling is a feature of ResNet, but they did not claim that this was the cause of the performance of ResNet.\n\nPath-wise training is not original enough or indeed different enough from drop-path to count as a major contribution.\n\nYou claim that the number of layers \"increase exponentially\" in FractalNet. This is misleading. The number of layers increases exponentially in the number of paths, but not in the depth of the network. In fact, the number of layers is linear in the depth of the network. Since depth is the meaningful quantity here, CrescendoNet does not have an advantage over FractalNet in terms of layer number. Also, it is always possible to simply add more paths to FractalNet if desired without increasing depth. Instead of using 1 long paths, one can simply use 2, 3, 4 etc. While this is not explicitly mentioned in the FractalNet paper, it clearly would not break the design principle of FractalNet which is to train a path of multiple layers by ensembling it with a path of fewer layers. CrescendoNets do not extend beyond this design principle.\n\nYou say that \"First, path-wise training procedure significantly reduces the memory requirements for convolutional layers, which constitutes the major memory cost for training CNNs. 
For example, the higher bound of the memory required can be reduced to about 40% for a Crescendo block with 4 paths where interval = 1.\" This is misleading, as you need to store the weights of all convolutional layers to compute the forward pass and the majority of the weights of all convolutional layers to compute the backward pass, no matter how many weights you intend to update. In a response to a question I posed, you mentioned that what you meant was \"we use about 40% memory for the gradient computation and storage\". Fair enough, but \"gradient computation and storage\" is not mentioned in the paper. Also, the reduction to 40% does not apply e.g. to vanilla SGD because the computed gradient can be immediately added to the weights and does not need to be stored or combined with e.g. a stored momentum term.\n\nFinally, nowhere in the paper do you mention which nonlinearities you used or if you used any at all. In future revisions, this should be rectified.\n\nWhile I can definitely imagine that your network architecture is well-designed and a good choice for image classification tasks, there is a very saturated market of papers proposing various architectures for CIFAR-10 and related datasets. To be accepted to ICLR, either outstanding performance or truly novel design principles are required.", "This paper proposes a new convolutional network architecture, which is tested on three image classification tasks.\n\nPros:\nThe network is very clean and easy to implement, and the results are OK.\n\nCons:\nThe idea is rather incremental compared to FractalNet. The results seem to be worse than existing networks, e.g., DenseNet (Note that SVHN is no longer a good benchmark dataset for evaluating state-of-the-art CNNs). Not many insights were given.\n\nOne additional question: Skip connections have been shown to be very useful in ConvNets. Why not adopt them in CrescendoNet? What's the point of designing a network without skip connections?\n", "In this paper, the authors propose a new network architecture, CrescendoNet, which is a simple stack of building blocks without residual connections. To reduce the memory required for training, the authors also propose a path-wise training procedure based on the independent convolution paths of CrescendoNet. The experimental results on CIFAR-10, CIFAR-100 and SVHN show that CrescendoNet outperforms most of the networks without residual connections.\n \nContributions:\n\n1 The authors proposed the Crescendo block, which consists of convolution paths with increasing depth. \n\n2 The authors conducted experiments on three benchmark datasets and showed promising performance of CrescendoNet.\n\n3 The authors proposed a path-wise training procedure to reduce the memory requirement in training.\n\nNegative points:\n\n1 The motivation of the paper is not clear. It is well known that residual connections are important in training deep CNNs and have shown remarkable performance on many tasks. The authors propose CrescendoNet, which has no residual connections. However, the experiments show that CrescendoNet is worse than ResNet. \n\n2 The contribution of this paper is not clear. In fact, the performance of CrescendoNet is worse than most of the variants of residual networks, e.g., Wide ResNet, DenseNet, and ResNet with pre-activation. Besides, it seems that the proposed path-wise training procedure also leads to significant performance degradation.\n\n3 The novelty of this paper is insufficient. 
The CrescendoNet is like a variant of the FractalNet, and the only difference is that the number of convolutional layers in Crescendo blocks grows linearly.\n\n4 The experimental settings are unfair. The authors run 700 epochs and even 1400 epochs with path-wise training on CIFAR, while the baselines only have 160~400 epochs for training.\n\n5 The authors should provide the experimental results on large-scale data sets (e.g. ImageNet) to prove the effectiveness of the proposed method, as they only conduct experiments on small data sets, including CIFAR-10, CIFAR-100, and SVHN.\n\n\n6 The model size of CrescendoNet is larger than residual networks with similar performance.\n\n\nMinor issues:\n\n1 In line 2, section 3.4, the period after “(128, 256, 512)” should be removed.\n", "You state that you have higher accuracy than DenseNet-40, lower depth than DenseNet-250, higher accuracy than FractalNet, lower depth than DiracNet etc. However, it is not enough to argue that CrescendoNet is not strictly worse than other architectures in order to argue that CrescendoNet should be used in practice. I don't think the results in the paper support such a case.\n\nI am not convinced that the behavior / design / scientific meaning of CrescendoNet is sufficiently novel compared to FractalNet / DiracNet.", "Dear reviewer, \n \nThank you for your reviews.\n \nCrescendoNet not only outperformed FractalNet, but also has fewer parameters (27.7M vs. 36.5M), fewer blocks (3 vs. 5), and shallower depth (15 vs. 21 layers). The Crescendo architecture is simpler than the Fractal architecture. The performance improvement makes CrescendoNet more competitive than FractalNet, and the linear growth of layers with respect to the number of branches significantly improves the model efficiency. Also, the performance behaviors of path combination are different between CrescendoNet and FractalNet. FractalNet shows a student-teacher effect while CrescendoNet does not. CrescendoNet may look like a variant of FractalNet, but its design principle and network properties are different.\n \nCrescendoNet outperforms deep networks without residual connections on CIFAR10, CIFAR100, and SVHN without data augmentation in the given experiments. Although DiracNet slightly outperforms CrescendoNet on CIFAR with data augmentation, it has about twice as many layers (34 vs. 15) and parameters (59M vs. 27M) as CrescendoNet does. \n \nYou are right. Our original claim that \"CrescendoNet shows ensemble behavior while FractalNet shows student-teacher effect\" is misleading. Thus we have removed or modified relevant parts in the revision, such as: \"On the other hand, CrescendoNet shows that the whole network significantly outperforms any set of it. This fact sheds light on exploring the mechanism which can improve the performance of deep CNNs by increasing the number of paths.\" We discovered such a difference and thought it was worth studying further.\n \nThank you for pointing out that our claim that \"Through our analysis and experiments, we note that the implicit ensemble behavior of CrescendoNet leads to high performance\" is not solid. We have replaced it with the following statement in the revision: \"Through our analysis and experiments, we discovered an emergent behavior which is significantly different from that of FractalNet. 
The entire CrescendoNet outperforms any subset of it can provide an insight of improving the model performance by increasing the number of paths by a pattern.\"\n \nWe think you made some assumptions for FractalNet in the following statements.\n \n\"Also, it is always possible to simply add more paths to FractalNet if desired without increasing depth. Instead of using 1 long paths, one can simply use 2, 3, 4 etc.\" \n \nFor the above statement, we agree that it is always possible to do this. However, this will not only break the fractal expansion pattern by involving manually designed parts but also cannot guarantee the good performance. \n \n\"While this is not explicitly mentioned in the FractalNet paper, it clearly would not break the design principle of FractalNet which is to train a path of multiple layers by ensembling it with a path of fewer layers. CrescendoNets do not extend beyond this design principle.\" \n \nWe have a different understanding of the design principle of FractalNet. The author of FractalNet describes his design principle in Abstract: \"We introduce a design strategy for neural network macro-architecture based on self similarity. Repeated application of a simple expansion rule generates deep networks whose structural layouts are precisely truncated fractals.\" It is clear that the design principle of FractalNet is the fractal expansion rule instead of \"to train a path of multiple layers by ensembling it with a path of fewer layers\" which applies to our model more. The recursive model formula and model illustration also demonstrate the design principle of FractalNet. However, we use the linear expansion rule, which is different from the Fractal design principle. Also, if the design principle can apparently lead to the design of CrescendoNet, the author of FractalNet would like to propose it since it is cleaner and better-performed.\n \nThank you very much for pointing out our misleading statement about path-wise training. We have corrected relevant parts by explicitly saying we can only save computation and storage memory for convolution layers when using momentum optimizers. \n \nAlso, we appreciate you for asking about the missing architecture and experiment design details. We have added the detailed description of fully connected layers and nonlinearities. \n \nThank you again for your careful and professional reviews. Your questions and comments definitely helped us to improve the work.\n", "Dear reviewer, \n \nThank you for your reviews and suggestions. \n \nAlthough CrescendoNet with 15 layers performed worse than DenseNet-BC with 250 layers, it outperformed DenseNet with 40 layers using all given datasets. \n \nThank you for the suggestion of using skip-connections. We are working on applying skip-connection as a module to our architecture. The challenge is leveraging the skip-connection to improve the performance while keeping the architecture clean and efficient. \n \nAs for the motivation, designing a network without skip-connections is our first step. In other words, we first proposed a design pattern with a base model and then we keep trying to incorporate existing and new techniques, such as Residual connections, bottleneck layers, and depth-wise separable convolutions.\n \nThank you again for your reviews and suggestions. \n", "Dear reviewer,\n\nWe appreciate your time and review.\n \nThe motivation of our CrescendoNet is designing a CNN architecture that achieves rich feature representation and is easy to implement. 
We designed the model without skip-connections as our first step. In the future, we can continue to improve the model by incorporating existing and new techniques, such as Residual connections, bottleneck layers, and depth-wise separable convolutions.\n \nRegarding the performance, CrescendoNet outperforms most of the models without residual connections. Compared with ResNet and its variants, the performance of CrescendoNet with 15 layers is better than that of the original ResNet with 110 layers, ResNet (pre-activation) with 164 layers, and DenseNet with 40 layers, and matches Wide ResNet with 16 layers. Only ResNet variants with extreme depths ranging from 110 to 1001 layers outperform CrescendoNet. Since the architectures of these ResNet variants are manually designed, the deeper the model is, the less usable, modifiable, and extensible it becomes. In contrast, a clean and efficient architecture, e.g., CrescendoNet, has advantages in practical applications. For example, the state-of-the-art object detection model, YOLO-V2, still adopts a VGGNet-like CNN architecture called DarkNet instead of ResNet variants.\n \nCompared with FractalNet, CrescendoNet has fewer parameters, fewer blocks, and shallower depth, but better performance. The performance improvement makes CrescendoNet more competitive than FractalNet, and the linear growth of layers significantly improves the model efficiency. CrescendoNet may look like a variant of FractalNet, but its design principle and network properties are different. \n \nAs for the number of training epochs, we think it is more relevant to the learning rate schedule and optimizers used. But you are right. It is our responsibility to reduce the training time as much as possible. \n\nWe have corrected the typo you pointed out.\n \nThank you again for your reviews and corrections.\n", "Dear reader,\n\nThank you for the comments.\n\n1. design manually vs. design by patterns\nAdmittedly, everyone can have his/her own definition of the concept of a design pattern. However, to use the deep fusion method, we need to manually design the architectures, the segmentation positions, and the fusion schemes of the individual networks. \n\nIn addition, Figure 1(b) is an illustration of the deep fusion concept in contrast with shallow fusion. There is no description stating that the block structures are identical. Also, according to Tables 1 and 2, none of the architecture examples given in the paper have all blocks with the same structure.\n\nFor the statement:\n\"The base model of DFN is a combination of 7 networks with different numbers of layers (19, 50, 5, 8, 10, 11, 14). DFN segments each of 7 networks into three parts and fuses the corresponding segments into blocks. Also, DFN applies different fusion schemes for its blocks. For example, the first block contains branches with numbers of layers (5, 16, 1, 2, 2, 3, 4) while the second block has (6, 16, 1, 2, 3, 3, 4) layers for its branches.\"\nWe can get the above statements from Table 1.\nCaption: “Table 1. Base network architectures. “\nRow #2: “#Layers: 19, 50, 5, 8, 10, 11, 14” This means the fused nets have corresponding numbers of layers.\nRow #4: “C1. ... 5, 16, 1, 2, 2, 3, 4” This means the segments/branches have corresponding layers.\nRow #6: “c2. ... 6, ...” Similarly.\n\n2. fusion vs. expansion\nWe agree that fusion is a helpful way for training.\n\nThank you very much for paying attention to our paper and for providing the interesting reference. Please feel free to discuss with us if you have any other ideas.\n", " 1. design manually vs. 
design by patterns\n DFN presents a framework using the deep fusion concept. It is also a design pattern. Figure 1(b) in DFN uses a simple design pattern: each block is the same, not different. \n\nFrom the DFN paper, I did not find a description matching the following comment by the author:\n\"The base model of DFN is a combination of 7 networks with different numbers of layers (19, 50, 5, 8, 10, 11, 14). DFN segments each of 7 networks into three parts and fuses the corresponding segments into blocks. Also, DFN applies different fusion schemes for its blocks. For example, the first block contains branches with numbers of layers (5, 16, 1, 2, 2, 3, 4) while the second block has (6, 16, 1, 2, 3, 3, 4) layers for its branches.\" \n\n2. fusion vs. expansion\nI think this is just a difference in terms, but the essence is almost the same. The fusion process in DFN is not simply a generalization of ResNet and Highway net, but it provides a possible reason why this kind of network structure can be trained easily. \n\nAbout diversity, I found that the following paper provides an interesting analysis:\nOn the Connection of Deep Fusion to Ensembling. Liming Zhao, Jingdong Wang, Xi Li, Zhuowen Tu, Wenjun Zeng. https://arxiv.org/abs/1611.07718v1\n", "Dear reader,\n\nThank you for the great question!\n\nThere are three main differences between CrescendoNet and Deeply fused net (DFN):\n\n 1. design manually vs. design by patterns\n DFN blocks have manually defined structures, which are different from each other. CrescendoNet proposes a design pattern, which generates the block architecture from two hyper-parameters, and all the blocks have the same structure. \n\n The base model of DFN is a combination of 7 networks with different numbers of layers (19, 50, 5, 8, 10, 11, 14). DFN segments each of 7 networks into three parts and fuses the corresponding segments into blocks. Also, DFN applies different fusion schemes for its blocks. For example, the first block contains branches with numbers of layers (5, 16, 1, 2, 2, 3, 4) while the second block has (6, 16, 1, 2, 3, 3, 4) layers for its branches. For the CrescendoNet base model, each block includes network branches of increasing numbers of layers (1, 2, 3, 4...), which are defined by Scale and Interval. The structure of each block is identical.\n \n 2. fusion vs. expansion\n The central idea of DFN is combining pre-defined and separated single-path networks by fusing their intermediate feature representations. Thus, DFN uses one fully connected layer for each branch, which means DFN has seven individual fully connected layers for its base model. CrescendoNet generates the whole architecture by expansion. CrescendoNet has two sequential fully connected layers following the body of the whole net.\n\n Although both fusion and expansion may achieve ensemble behavior, the fusion process in DFN is a generalization of ResNet and Highway net, while CrescendoNet may achieve the ensemble and feature diversification through designed structure patterns. \n\n 3. performance comparison\n For the CIFAR10 and CIFAR100 datasets with the widely-used data augmentation scheme, CrescendoNet with 15 layers outperforms DFN with 50 layers by a large margin. The best error rates of DFN and CrescendoNet on CIFAR10 are 6.02% and 4.81%, respectively, and on CIFAR100 are 27.36% and 22.97%.\n \n Figure 1 in the DFN paper may cause readers to think CrescendoNet has the same structure as DFN. 
However, the figure is an illustration of the deep (may mean \"on intermediate layers\") fusion from different networks. There are no explicit design rules given to specify the architecture of the whole net. I think that is because pre-defined independent networks partially determine the architecture of DFN.\n\n We summarize some small differences between two architectures as follows:\n • DFN uses the same initialization scheme as VGG net used, while CrescendoNet uses the truncated normal distribution.\n • DFN uses average pooling while CrescendoNet not.\n • DFN uses size 2x2 max pooling while Crescendo uses 3x3 (but this doesn't matter observed from our experiment results).\n\n\nWang, Jingdong, Zhen Wei, Ting Zhang, and Wenjun Zeng. \"Deeply-fused nets.\" arXiv preprint arXiv:1605.07716 (2016).\n", "Dear authors,\n\nCan you compare the CrescendoNet architecture with deeply-fused nets, https://arxiv.org/abs/1605.07716?\n\nThanks,", "Dear reviewers, \n\nWe can see your review. We highly appreciate your help for improving our work.\n\nThank you for reviewing our paper.", "Dear authors,\n\nThank you for answering my 2 questions. I just posted my review. I am curious: Can you see the review? Because when I log out of my account, I can no longer see it. Hence, the review isn't public. I am wondering whether at least you can see it.\n\nThanks,", "Dear Reviewers, \n\nThank you for the reply. \n\nYou are definitely right. The loss is computed with all the convolutional layers involved. However, we didn't state that the memory saving is because we don't need to keep all the parameters in memory for the forward and backward pass. We may cause this misunderstanding by the unclear description. Here, we clarify that we save the memory by avoiding the memory for computing and keeping the gradients for the frozen layers.\n\nFor the whole net training using the back propagation, we compute gradients for all the convolutional kernels and keep them in the memory. Then we update all the parameters with corresponding gradients. \n\nFor the path-wise training, we only need to compute the gradients for a portion of convolutional kernels. We do use the frozen parameters, but we avoid the memory used for computing and storing the gradients for the frozen parameters (note that one parameter needs one gradient).\n\nIn section 2.2, we gave an example that the memory requirement for training convolutional layers can be reduced to 40%, when training Crescendo blocks with four branches of lengths: (1, 2, 3, 4). For whole net training, we need to compute and keep ((1+2+3+4) * number_of_feature_maps_per_layer * kernel_size) gradients in each backward pass, while we only need ((4) * number_of_feature_maps_per_layer * kernel_size) gradients when training the deepest path by path-wise training. In this sense, we use about 40% memory for the gradient computation and storage during the back propagation. We are sorry for the misunderstanding.\n\nThank you for pointing out our problems.\n", "Dear authors,\n\nyou say that path-wise training can reduce memory usage by 40%. How is this? If I freeze the weights in all but one path, I still need to have those weights in memory to compute the forward and backward pass of the network. Just because the weights in some layers aren't updated doesn't mean they aren't needed in memory to compute the activations and gradients of other weights.\n\nThanks,", "We are very sorry for missing the description. 
\n\nFor the activation function, the listed experiment results used ReLU as the activation function following each convolutional layer and fully connected layer. In addition, we also did some tests with the exponential linear unit (ELU) as the activation function, and it turned out that the performance was almost unchanged. \n\nWe used two fully connected layers, with 384 and 192 hidden units, respectively.\n\nFor the weight initialization, we mentioned in the paper, in the first paragraph of Section 3.2: \"We use truncated normal distribution for parameter initialization. The standard deviation of hyper-parameters is 0.05 for convolutional weights and 0.04 for fully connected layer weights.\"\n\nThank you for pointing out the problem.", "Dear authors,\n\nWhat I would like you to confirm is that you do not use any ReLU or other nonlinearities at all in the convolutional part of the network. Also, how many fully-connected layers do you use and what, if any, nonlinearities do you use before, between or after them?\n\nAlso, how did you initialize the weights in each layer?\n\nThanks," ]
[ 4, 5, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 5, 4, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_HJdXGy1RW", "iclr_2018_HJdXGy1RW", "iclr_2018_HJdXGy1RW", "SJIR8G67G", "Bk1jsRM1f", "HJwO67Klz", "H1vOPPFez", "B1ZpEWrlz", "BJKHrDNgz", "SJZn-KzxG", "iclr_2018_HJdXGy1RW", "B17ZhCMyG", "iclr_2018_HJdXGy1RW", "BJSwy5MJz", "iclr_2018_HJdXGy1RW", "Bymd5_fyG", "iclr_2018_HJdXGy1RW" ]
iclr_2018_Byj54-bAW
A Tensor Analysis on Dense Connectivity via Convolutional Arithmetic Circuits
Several state-of-the-art convolutional networks rely on inter-connecting different layers to ease the flow of information and gradients between their input and output layers. These techniques have enabled practitioners to successfully train deep convolutional networks with hundreds of layers. In particular, a novel way of interconnecting layers was introduced as the Dense Convolutional Network (DenseNet) and has achieved state-of-the-art performance on relevant image recognition tasks. Despite their notable empirical success, their theoretical understanding is still limited. In this work, we address this problem by analyzing the effect of layer interconnection on the overall expressive power of a convolutional network. In particular, the connections used in DenseNet are compared with other types of inter-layer connectivity. We carry out a tensor analysis of the expressive power of inter-connections in convolutional arithmetic circuits (ConvACs) and relate our results to standard convolutional networks. The analysis leads to performance bounds and practical guidelines for the design of ConvACs. The generalization of these results is discussed for other kinds of convolutional networks via generalized tensor decompositions.
rejected-papers
The paper performs a theoretical analysis of the representation power of convolutional networks with inter-layer connections. Whilst the results themselves are interesting, the current presentation of the paper stands in the way of the reader grasping and appreciating the main insights from the paper. The authors acknowledge these issues in their rebuttal, but have not yet revised the paper to resolve them. I encourage the authors to revise the paper to address the reviewer comments, and re-submit it to another venue.
train
[ "HywL2b9xG", "rkoPNE9gG", "HkXstRmZM", "B1kgnv67f", "r15DydamM", "BJy-oPT7z" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "SUMMARY\n\nTraditional convolutional neural networks consist of a sequence of information processing layers. However, one can relax this sequential design constraint so that higher layers receive inputs from one, some, or all preceding layers. This modification allows information to travel more freely throughout the network and has been shown to improve performance, e.g., in image recognition tasks. However, it is not clear whether this change in architecture truly increases representational capacity or it merely facilitates network training. \n\nIn this paper, the authors present a theoretical analysis of the gain in representational capacity induced by additional inter-layer connections. The authors restrict their analysis to convolutional arithmetic circuits (ConvACs), a class of networks whose representational capacity has been studied previously. An important property of ConvACs is that the network mapping can be recast as a homogeneous polynomial over the input, with coefficients stored in a \"grid tensor\" $\\mathcal{A}^y$. The grid tensor itself is a function of the hidden weight vectors $\\mathbf{a}^{z,i}$. The authors first extend ConvACs to accommodate \"dense\" inter-layer connections and describe how adding dense connections affects the grid tensor. This analysis gives a potentially useful perspective for understanding the mappings that densely connected ConvACs compute.\n\nThe authors' main results (Theorems 5.1-5.3) analyze the \"dense gain\" of a densely connected ConvAC. This quantity roughly captures how much wider a standard ConvAC would need to be in order to represent the network mapping of a generic densely connected ConvAC. This is in a way a measure of the added representational power obtained from dense connections. The authors give upper bounds on this quantity, but also produce a case in which the upper bound is achieved. Importantly, the upper bounds are inversely proportional to a parameter $\\lambda \\leq 1$ controlling the rate at which hidden layer widths decay with increasing depth. The implication is that indeed densely connected ConvACs can have greater representational capacity, however the gain is limited to the case where hidden layers shrink exponentially with increasing depth.\n\nThese results are partly unsurprising, since densely connected ConvACs contain more trainable parameters than standard ConvACs. In Proposition 3, the authors give some criteria for evaluating when it is nonetheless worthwhile to add dense connections to a ConvAC.\n\nCOMMENTS\n\n(1.) The authors address an interesting and important problem: explaining the empirical success of densely connected CNNs such as ResNets & DenseNets, relative to standard CNNs. The tensor algebra machinery built around ConvACs is impressive and seems to generate sound insights. However, I feel the current presentation fails to provide adequate intuition and interpretation of the results. Moreover, there is no overarching narrative linking the formal results together. This makes it difficult for the reader to grasp the main ideas and significance of the work without diving into all the details. For example:\n\n- In Proposition 1, the authors comment that including a dense connection increases the rank of the grid tensor for a shallow densely connected convAC. However, the significance of grid tensor rank is not discussed.\n\n- In Proposition 2, the authors do not explain why it is important that the added term $g(\\mathbf{X})$ contains only polynomial terms of strictly smaller degree. 
It is not clear how Propositions 1 & 2 relate to the main Theorems 5.1-5.3. Is the characterization of the grid tensor in Proposition 1 used to obtain the bounds in the later Theorems?\n\n- In Section 5, the authors introduce a parameter $\\lambda \\leq 1$ controlling how the widths of the hidden layers decay with increasing depth. This parameter seems central to the following bounds on dense gain, yet the authors do not motivate it, and there is no discussion of decaying hidden layer widths in previous sections.\n\n- The practical significance of Proposition 3 is not sufficiently well explained. First, it is not clear how to use this result if all we have is an upper bound for $G_w$, as given by Theorems 5.1-5.2. It seems we would need a lower bound to be able to conclude that the ratio $\\Delta P_{stand}/ \\Delta P_{dense}$ is large. Second, it would be helpful if the authors commented on the implication for the special case $k=1$ and $r \\leq (1/1+\\lambda) \\sqrt{M}$, where the dense gain is known.\n\n(2.) Moreover, because the authors choose not to sketch the main proof ideas, it is difficult to identify the key novel insights, and how the special structure of densely connected ConvACs factors into the analysis. After studying the proofs in some detail, I have some specific concerns outlined below, which diminish the significance of the results and raise some doubts about soundness.\n\n- In Theorem 5.1, the authors upper bound the dense gain by showing that arbitrary $(L, r, \\lambda, k)$ dense ConvACs can be represented as standard $(L, r^\\prime, \\lambda, 0)$ ConvACs of sufficient width $r^\\prime \\geq G_w r$. The mechanism of the proof is to relate the grid tensor ranks of dense and standard ConvACs. However, a worst case bound on the grid tensor rank of a dense ConvAC is used, which does not seem to rely on the formulation of dense ConvACs. Thus, this result does not tell us anything in particular about dense ConvACs, but rather is a general result relating the expressive capacity of arbitrary depth-$L$ ConvACs and $(L, r^\\prime, \\lambda, 0)$ ConvACs with decaying widths.\n\n- Central to Theorem 5.2 is the observation that a densely connected ConvAC can be viewed as a standard ConvAC, only with \"virtually enlarged\" hidden layers (of width $\\tilde{r}_\\ell = (1 + 1/\\lambda)r_\\ell$ for $k=1$, using the notation of the paper), and blocks of weights fixed to represent the identity mapping. This is a relatively simple idea, and one that seems to hold for general architectures. Thus, I believe Theorem 5.2 can be shown more simply and in greater generality, and without use of the tensor algebra machinery.\n\n- There is some intuitive inconsistency in Theorem 5.3 which I would like some help resolving. We have seen that dense ConvACs can be viewed as standard ConvACs with larger hidden layers and some weights fixed. Effectively, the proof of Theorem 5.3 argues for a regime on $r, \\lambda, M$ where this induced ConvAC uses its full representational capacity. This is surprising to me however, as I would have guessed that having some weights fixed makes this impossible. It would be very helpful if the authors could weigh in on this confusion. Perhaps there is an issue with the application of Lemmas 2 & 3 in the proof of Theorem 5.3. In Lemmas 2 & 3, we assume the tensors $\\mathcal{A}$ and $\\mathcal{B}$ are random. These Lemmas are applied in the proof of Theorem 5.3 to tensors $\\phi^{\\alpha, j, \\gamma}$ appearing in the construction of the dense ConvAC grid tensor. 
However, the $\\phi^{\\alpha, j, \\gamma}$ tensors do not seem completely random, as there are blocks of fixed weights. Can the authors please clarify how the randomness assumption is satisfied?\n\n(3.) Lastly, I am concerned that the authors do not at least sketch how to generalize these results to architectures of more practical interest. As the authors point out, there is previous work generalizing theoretical results for ConvACs to convolutional rectifier networks. The authors should discuss whether a similar strategy might apply in this case.", "The paper attempts to provide a theoretical justification for \"DenseNet\", a neural net architecture proposed by Huang et al. that contains connections between non-successive layers. The general goal is to look at \"arithmetic circuit\" (AC) variants of DenseNets, in which ReLUs are replaced by linear combinations and pooling layers are replaced by products. In AC versions of a network, the complexity of the final function computed (score function) can be understood via the rank of a certain tensor associated with the network. \n\nThe paper shows bounds on the rank, and attempts to identify situations in which dense connections are likely to help increase the complexity of the function computed.\n\nWhile the goal is good, I find too many aspects of the paper that are confusing, at least to someone not an expert on ConvACs.\n\n- first, the definition of growth rate is quite different from the paper of Huang et al. (here, the rate is defined as the number of forward-layers a given layer is connected to, while Huang et al. define it as the number of 'new features' that get generated in the current layer). \n\n- second, if ReLUs are replaced by simple summations, then I feel that the point of dense blocks is lost (where the non-linearities in each step potentially add complexity). The paper adds the extra step of forward connections across blocks, but this makes the setup quite different from Huang et al.\n\n- third, it appears that the bounds in Theorems 5.1, 5.2 are only _upper bounds_ on the rank. From the definition of the dense gain, it seems that one would like to _lower bound_ the gain (as this would show that there is no ConvAC with small r' and k=0 that can realize a ConvAC with higher k and a given r). Only theorem 5.3 says something of this kind, and even this looks very weak. The gap between r and r' is rather small.\n\n- finally, the only \"practical take-aways\" (which the paper advertises) seem to be that if the dense gain is close to the general bound on G_w, then dense connections don't help. This seems quite weak to me. Furthermore, it's not clear how G_w can be computed (given tensor rank is hard).\n\nOverall, I found that the paper has too many red flags, and the lack of clarity in the writing makes it hard to judge. I believe the paper isn't ready for publication in its current form.", " The authors first extend the convolutional arithmetic circuits to\nincorporate the dense connections. The expressiveness of the score\nfunction(to be optimized while training) of the convolutional\narithmetic circuits, can be understood by the rank of a tensor\nappearing in a decomposition of the network. Authors derive this\nform for the DenseNet variant of convolutional arithmetic circuits,\nand give bounds on the rank of the associated tensor and using these\nbounds argue the expressive power of the network. 
The authors also\nclaim these bounds can help provide practical guidelines while designing\nthe network.\nThe motivation and attempt are quite good, but the paper is written\nquite poorly, without any flow; it is very difficult for the reader\nto understand the novelty or significance of the work, no intuition\nis given, and the descriptions are so short and cryptic.\nAfter Proposition 1, it is written there is no 'clear advantage' for\nlarge N on using dense connections on a shallow convAC. It is not\nvery clear or obvious since the upper bound for the rank is\nincreasing with some parameter increase. This style of writing is\nprevalent throughout the paper.\nThe DenseNet variant is deviating a lot from original Huang et al,\nReLU is dropped, forward connections across blocks etc. Interblock\nconnections are not intuitively motivated. Most readers would find it\nvery difficult to tell which of the results apply in the case of inter- and\nintra-block connections. It looks like the results are mostly for inter-block\nconnections, for which the empirical results are not there.\nTheorems 5.1 and 5.2 give some upper bounds on the dense gain (a quantity that\nroughly defines how much expressive power comes in by adding dense\nconnections, compared to a standard convAC), but it is not clear how\nan upper bound is helpful here. A lower bound would have been\nhelpful. The statement after theorem 5.1, by tailoring M and widths\n'such that we exploit the expressiveness added by dense\nconnections'. This seems to be very loosely written.\nOverall I feel the motivation and attempt are fine. But partly due\nto the poor presentation style, the deviation from DenseNets and the unclear\nnature of the practical usefulness of the results, the paper may not\nbe a contribution to the community at this stage.", "We thank the reviewer for his/her insightful comments. Moreover, we would like to answer the questions asked here:\n\n 1. The definition of growth rate is quite different from the paper of Huang et al. (here, the rate is defined as the number of forward-layers a given layer is connected to, while Huang et al. define it as the number of 'new features' that get generated in the current layer). \n\nThe reviewer is totally right to point this out. While our definition of $k$ is closely related to the growth rate of Huang et al., they are not the same thing. However, the current choice is believed to provide nice theoretical insight into the importance of dense connections. \nIn the next version of this paper, we will refrain from using the term “growth rate” to refer to $k$ and clearly discuss its relation to the growth rate of Huang et al.\n\n\n 2. If ReLUs are replaced by simple summations, then I feel that the point of dense blocks is lost (where the non-linearities in each step potentially add complexity). The paper adds the extra step of forward connections across blocks, but this makes the setup quite different from Huang et al.\n \nWe would like to thank the reviewer for this comment. As discussed above, there are certain differences between our setup and the one used by Huang et al., although both have in common the connections between different layers. We will explicate these differences and, furthermore, we will reconsider our terminology (i.e., dense blocks, intra-block, inter-block), and make use of the jump length to denote different dense connections. While in a standard convolutional network, the activation function is the source of non-linearity, in a ConvAC, this role is played by the pooling layer. 
We intend to make an analogy between the dense connections across ReLU layers in a convolutional network, and the dense connections across pooling layers in a ConvAC.\n\n\n 3. It appears that the bounds in Theorems 5.1, 5.2 are only upper bounds on the rank. From the definition of the dense gain, it seems that one would like to ''lower bound'' the gain (as this would show that there is no ConvAC with small $r'$ and $k=0$ that can realize a ConvAC with higher $k$ and a given $r$). Only theorem 5.3 says something of this kind, and even this looks very weak. The gap between r and r' is rather small.\n\nA lower bound can be provided using the approach proposed by Cohen et al. Nevertheless, such a bound does not depend on the width of the intermediate hidden layers. Therefore, this lower bound on the gain would be the value of $M / \min (r_0, M)$, which is trivial for $r_0 \geq M$. Furthermore, in this paper we provide upper bounds and show that under certain conditions these bounds can be achieved.\n\n\n 4. The only \"practical take-aways\" (which the paper advertises) seem to be that if the dense gain is close to the general bound on $G_w$, then dense connections don't help. This seems quite weak to me. Furthermore, it's not clear how $G_w$ can be computed (given tensor rank is hard).\n\nWe agree with the reviewer that $G_w$ cannot be computed, since computing the rank of a tensor is hard. Nevertheless, the bounds on $G_w$ provide a measure of how much there is to gain with dense connections. Moreover, we will take into consideration the additional number of parameters added by dense connections in our definition of gain, leading to more meaningful and practically relevant bounds.", "We would like to express our gratitude to the reviewer for proofreading our paper and providing an extensive amount of insightful comments. Moreover, we would like to answer the issues posted here:\n\n 1. These results are partly unsurprising, since densely connected ConvACs contain more trainable parameters than standard ConvACs.\n\nThe reviewer is absolutely right about this. However, we noticed this weakness in our comparison and conducted further analysis where the notion of gain is reconsidered by normalizing the gain with the number of trainable parameters added by dense connections. We believe this is not a trivial result and it will be included in the next version.\n\n 2. The significance of grid tensor rank is not discussed.\n\nWhile the rank of a tensor may be used as a measure of complexity, it can also be used to show that certain ConvAC configurations cannot represent the functions of some particular ConvAC, as shown by Cohen et al.\n\n 3. In Proposition 2, the authors do not explain why it is important that the added term $g(\mathbf{X})$ contains only polynomial terms of strictly smaller degree...\n\nThe polynomial degree can be seen as another measure of complexity of the network function; thus, strictly smaller polynomial degrees imply strictly simpler network functions. We will discuss this in more depth in future versions of this paper. We agree with the reviewer that the relation between Propositions 1 and 2 and the main Theorems should be clearly explained and possibly reorganized. We intend to bridge this gap in future versions of this paper.\n\n 4. The parameter $\lambda$ is not properly motivated.\n\nWe agree with the reviewer that the parameter $\lambda$ is not properly motivated. 
This parameter is introduced to facilitate the derivations and provide simple expressions for our bounds. Nevertheless, such simplification is not essential for our analysis, since we may refrain from using this parameter $\\lambda$ and obtain gain bounds in terms of $r_0$, …, $r_L$. We will clarify this in future versions of this paper, and provide general expressions for our bound before simplifying to the exponential width decay setting. \n\n 5. The practical significance of Proposition 3 is not sufficiently well explained.\n\nWe agree with the reviewer that the significance of Preposition 3 is not well explained. Lower bound on the gain can be obtained using the same procedure as Cohen et al., but this would lead to a trivial lower bound of $M / \\min (r_0, M)$ that does not depend on the virtual increase of $r_1$, …, $r_L$. Furthermore, we propose to analyze the expressiveness by looking at the upper bounds, and then showing that these upper bounds can be achieved under some conditions. Moreover, in future versions of the paper we will include the number of parameters as part of our definition of gain, thus providing a way of evaluating whether $\\Delta P_{stand}/ \\Delta P_{dense}$ is large. A more detailed discussion of the special case of $k=1$ and $r \\leq (1/1+\\lambda) \\sqrt{M}$ will also be included.\n\n \n 6. In Theorem 5.1, the authors upper bound the dense gain ...\n\nWe fully agree with the reviewer that the result provided in Theorem 5.1 are a general result regarding the expressive capacity of ConvACs. Nevertheless, this result is necessary to derive the bounds of Theorems 5.2 and 5.3. Therefore, in future versions of this paper we will not include Theorem 5.1 as a main result, but rather as a Lemma for proving Theorems 5.2 and 5.3. \n\n\n 7. I believe Theorem 5.2 can be shown more simply and in greater generality...\n\nWhile this idea may be shown without using tensor algebra, it may not show the saturation point where increasing the width of the hidden layers does not increase the expressiveness of the model.\n\n 8. There is some intuitive inconsistency in Theorem 5.3 which I would like some help resolving...\n\nWe are grateful to the reviewer for this comment. Indeed, the applicability of Lemmas 2 and 3 in the proof of Theorem 5.3 is more subtle but still doable, since the independent randomness assumption is not fully satisfied. We will fully revise this proof, and include a modified version of this Theorem, in the next version of this paper.\n\n\n 9. Lastly, I am concerned that the authors do not at least sketch how to generalize these results to architectures of more practical interest...\n\nThis is done by generalizing the tensor product to a tensor product that considers the activation function and pooling operation. For the case of ReLU layers with max pooling, the the tensor product is replaced by the generalized tensor product $\\otimes_g$ defined as $(B \\otimes C)_{ijkl} = \\max ( B_{ij}, C_{kl}, 0 )$. We will include further discussions regarding this issue in future versions of this paper. ", "Firstly, we would like to thank all the reviewer for the helpful comments and thorough reading of the paper. We now answer the issues posted by the reviewer:\n\n 1. After Proposition 1, it is written there is no 'clear advantage' for large N on using dense connections on a shallow convAC. It is not very clear or obvious since the upper bound for the rank is increasing with some parameter increase. \n\nWe agree with the reviewer that this is not clearly explained. 
Furthermore, in future versions of this paper we will use the ratio between the grid tensor rank and the number of trainable parameters, as measure of expressiveness, and use it to back up our statements.\n\n\n 2. The DenseNet variant is deviating a lot from original Huang et al, RelU is dropped, forward connections across blocks etc. Interblock connections is not intuitively motivated. Most readers would find it very difficult, which of the results apply in case of inter and intra block connections.\n\nEven though ReLU is dropped for building a ConvAC, the extension of these results to convolutional networks with ReLU and max-pooling can be done by replacing the standard tensor products with a generalized tensor product as done by Cohen et. al. While in a standard ReLU based convolutional neural network the non-linearity is provided by activation function, in a ConvAC it is provided by the pooling layer. Furthermore, in an attempt to analyze the effect of broader connections, we must consider forward connections across pooling layers. \n\n 3. Theorem 5.1 and 5.2 gives some upper bound on dense gain(quantity rough defines how much expressive power comes in by adding dense connections, compared to standard convAC), but it is not clear how an upper bound is helpful here. A lower bound would have been helpful.\n\nA lower bound can be provided using the approach proposed by Cohen et. al. Nevertheless, such bound does not depend on the width of the intermediate hidden layers. Therefore, this lower bound on the gain would be the value of $M / \\min (r_0, M)$, which is trivial for $r_0 \\geq M$. Furthermore, in this paper, we believe that if the upper bounds can be achieved for some instances then there will be a instance that can be expressed by the network with larger upper bound and not by the other one, hence, implying the expressive power of the network. In any case, we agree with the reviewer that investigating lower bounds in this context are more helpful and we will attempt again to obtain those bounds.\n" ]
[ 4, 4, 5, -1, -1, -1 ]
[ 3, 3, 3, -1, -1, -1 ]
[ "iclr_2018_Byj54-bAW", "iclr_2018_Byj54-bAW", "iclr_2018_Byj54-bAW", "rkoPNE9gG", "HywL2b9xG", "HkXstRmZM" ]
iclr_2018_B16_iGWCW
Deep Boosting of Diverse Experts
In this paper, a deep boosting algorithm is developed to learn a more discriminative ensemble classifier by seamlessly combining a set of base deep CNNs (base experts) with diverse capabilities, e.g., these base deep CNNs are sequentially trained to recognize a set of object classes in an easy-to-hard way according to their learning complexities. Our experimental results have demonstrated that our deep boosting algorithm can significantly improve the accuracy rates on large-scale visual recognition.
rejected-papers
The paper presents a boosting method and uses it to train an ensemble of convnets for image classification. The paper lacks conceptual and empirical comparisons with alternative boosting and ensembling methods. In fact, it is not even clear from the experimental results whether or not the proposed method outperforms a simple baseline model that averages the predictions of T independently trained convolutional networks.
train
[ "ry00rINxM", "HJecicqxG", "SJR0TtHZG", "BJluMzjMf", "Skm8WGozM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "public", "public" ]
[ "This paper consider a version of boosting where in each iteration only class weights are updated rather than sample weights and apply that to a series of CNNs for object recognition tasks.\n\nWhile the paper is comprehensive in their derivations (very similar to original boosting papers and in many cases one to one translation of derivations), it lacks addressing a few fundamental questions:\n\n- AdaBoost optimises exponential loss function via functional gradient descent in the space of weak learners. It's not clear what kind of loss function is really being optimised here. It feels like it should be the same, but the tweaks applied to fix weights across all samples for a class doesn't make it not clear what is that really gets optimised at the end.\n- While the motivation is that classes have different complexities to learn and hence you might want each base model to focus on different classes, it is not clear why this methods should be better than normal boosting: if a class is more difficult, it's expected that their samples will have higher weights and hence the next base model will focus more on them. And crudely speaking, you can think of a class weight to be the expectation of its sample weights and you will end up in a similar setup.\n- Choice of using large CNNs as base models for boosting isn't appealing in practical terms, such models will give you the ability to have only a few iterations and hence you can't achieve any convergence that often is the target of boosting models with many base learners.\n- Experimentally, paper would benefit with better comparisons and studies: 1) state-of-the-art methods haven't been compared against (e.g. ImageNet experiment compares to 2 years old method) 2) comparisons to using normal AdaBoost on more complex methods haven't been studied (other than the MNIST) 3) comparison to simply ensembling with random initialisations.\n\nOther comments:\n- Paper would benefit from writing improvements to make it read better.\n- \"simply use the weighted error function\": I don't think this is correct, AdaBoost loss function is an exponential loss. When you train the base learners, their loss functions will become weighted.\n- \"to replace the softmax error function (used in deep learning)\": I don't think we have softmax error function", "In conventional boosting methods, one puts a weight on each sample. The wrongly classified samples get large weights such that in the next round those samples will be more likely to get right. Thus the learned weak learner at this round will make different mistakes.\nThis idea however is difficult to be applied to deep learning with a large amount of data. This paper instead designed a new boosting method which puts large weights on the category with large error in this round. In other words samples in the same category will have the same weight \n\nError bound is derived. Experiments show its usefulness though experiments are limited\n", "This paper applies the boosting trick to deep learning. The idea is quite straightforward, and the paper is relatively easy to follow. The proposed algorithm is validated on several image classification datasets.\n\nThe paper is its current form has the following issues:\n1. There is hardly any baseline compared in the paper. The proposed algorithm is essentially an ensemble algorithm, there exist several works on deep model ensemble (e.g., Boosted convolutional neural networks, and Snapshot Ensemble) should be compared against.\n2. 
I did not carefully check all the proofs, but seems most of the proof can be moved to supplementary to keep the paper more concise.\n3. In Eq. (3), \\tilde{D} is not defined.\n4. Under the assumption $\\epsilon_t(l) > \\frac{1}{2\\lambda}$, the definition of $\\beta_t$ in Eq.8 does not satisfy $0 < \\beta_t < 1$. \n5. How many layers is the DenseNet-BC used in this paper? Why the error rate reported here is higher than that in the original paper?\nTypo: \nIn Session 3 Line 7, there is a missing reference.\nIn Session 3 Line 10, “1,00 object classes” should be “100 object classes”.\nIn Line 3 of the paragraph below Equation 5, “classe” should be “class”.\n", " We thank the reviewers for their comments. Individual points are addressed below. \n\n-The objective function of our algorithm is the weighted margin between the average correct classification probability and the average incorrect classification probability, as in Eq.(3) and Eq.(4). The first term of Eq.(4) is just the likelihood of positive samples. So, the loss can be seen as the opposite number of objective function Eq.(3). Maximizing the margin in Eq.(3) is equivalent to minimizing the loss. To update the weight distribution for different categories, we employ an exponential updating rule as in Eq.(7), which encourages focusing on the categories that are hard to classify.\n\n-For large-scale visual recognition, it is worth noting that every object class may contain large numbers of hard images due to huge intra-class visual diversity, thus weighting the sample errors may not be able to achieve the same effects as weighting the object classes according to their learning complexities, e.g., weighting the sample errors may not be able to improve the accuracy rates for the hard object classes. Because large numbers of object classes may have different learning complexities, the errors from the hard object classes and the easy ones may have significantly different effects on optimizing their joint objective function. Therefore, it is very attractive to invest new boosting algorithms that can train the deep networks for the hard object classes and the easy ones sequentially in an easy-to-hard way, such that the ensemble network can improve the accuracy rates for the hard object classes at certain degrees while effectively maintaining high accuracy rates for the easy ones.\n\n-Because large numbers of object classes may have different learning complexities, the sample errors from the hard object classes and the easy ones may have significantly different roles in optimizing their joint objective function on learning their joint deep network. Unfortunately, for large-scale visual recognition (i.e., recognizing large numbers of object classes), weighting the sample errors individually (like traditional deep boosting approaches) may not be able to achieve the same effects as weighting the object classes directly according to their learning complexities, e.g., treating the sample errors from the hard object classes and the easy ones to be equally important may easily distract their joint deep network on achieving higher accuracy rates on recognizing the easy object classes but paying less attentions on correcting the sample errors from the hard object classes. 
Therefore, it is very attractive to invest new boosting algorithms that can train the deep networks for the hard object classes and the easy ones sequentially in an easy-to-hard way, such that the ensemble network can improve the accuracy rates for the hard object classes at certain degrees while effectively maintaining high accuracy rates for the easy ones.", "We thank the reviewer for the comments. Individual points are addressed below. \n-In Eq. (3), \\tilde{D} is not defined.\nA:As described in the paragraph below Eq. (4), \\tilde{D}_t(l) is the normalized importance score for class l in the tth base expert. Its initial value is 1/C. Then it will be updated iteratively as in Eq. (9).\n-Under the assumption $\\epsilon_t(l) > \\frac{1}{2\\lambda}$, the definition of $\\beta_t$ in Eq.8 does not satisfy $0 < \\beta_t < 1$. \nA:In Section 3.3, we discuss the selection of $\\lambda$. In the case $\\epsilon_t(l) > \\frac{1}{2\\lambda}$,it means that the l-th category is the hard category, and we decrease the hyper-parameter $\\lambda$ such that $0 < \\beta_t < 1$ holds.\n- How many layers is the DenseNet-BC used in this paper? Why the error rate reported here is higher than that in the original paper? \nA: A 100-layer DenseNet-BC model is used in this paper on CIFAR100 which is the same in the original paper (https://arxiv.org/pdf/1608.06993.pdf). The reason why the error rate here is higher is mainly due to that we do not use all 50,000 samples on training split and validation split at the final run, which is the training trick reported in the original paper. As mentioned in our paper, we only use the validation split for the update of the weights in the following iteration. Another minor factor may be that we only train the model once in the first iteration and do not run many times for selection of the best model since we care more about the effects of our boosting algorithm.\n" ]
[ 2, 6, 5, -1, -1 ]
[ 5, 3, 4, -1, -1 ]
[ "iclr_2018_B16_iGWCW", "iclr_2018_B16_iGWCW", "iclr_2018_B16_iGWCW", "ry00rINxM", "SJR0TtHZG" ]
iclr_2018_rkmtTJZCb
Unsupervised Hierarchical Video Prediction
Much recent research has been devoted to video prediction and generation, but mostly for short-scale time horizons. The hierarchical video prediction method by Villegas et al. (2017) is an example of a state of the art method for long term video prediction. However, their method has limited applicability in practical settings as it requires a ground truth pose (e.g., poses of joints of a human) at training time. This paper presents a long term hierarchical video prediction model that does not have such a restriction. We show that the network learns its own higher-level structure (e.g., pose equivalent hidden variables) that works better in cases where the ground truth pose does not fully capture all of the information needed to predict the next frame. This method gives sharper results than other video prediction methods which do not require a ground truth pose, and its efficiency is shown on the Humans 3.6M and Robot Pushing datasets.
rejected-papers
The paper presents a method for forward prediction in videos. The paper insufficiently motivates the proposed method and presents very limited empirical evaluations (no ablation studies, etc.) to back up its claims. This makes it difficult for the reader to put the work into the context of the broader research around learning from unsupervised video data, leading reviewers to complain about a perceived lack of novelty and clarity.
train
[ "BJ1X3tYgf", "HyuKwSixM", "B1e1owMZM", "S1q0U7OMG", "ByLWr7XXf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author" ]
[ "The paper treats the interesting problem of long term video prediction in complex video streams. I think the approach of adding more structure to their representation before making longer term prediction is also a reasonable one. Their approach combines an RNN that predicts an encoding of scene and then generating an image prediction using a VAN (Reed et al.). They show some results on the Human3.6M and the Robot Push dataset. \n\nI find the submission lacking clarity in many places. The main lack of clarity source I think is about what the contribution is. There are sparse mentions in the introduction but I think it would be much more forceful and clear if they would present VAN or Villegas et al method separately and then put the pieces together for their method in a separate section. This would allow the author to clearly delineate their contribution and maybe why those choices were made. Also the use of hierarchical is non-standard and leads to confusion I recommend maybe \"semantical\" or better \"latent structured\" instead. Smaller ambiguities in wording are also in the paper : e.g. related work -> long term prediction \"in this work\" refers to the work mentioned but could as well be the work that they are presenting. \n\nI find some of the claims not clearly backed by a thorough evaluation and analysis. Claiming to be able to produce encodings of scenes that work well at predicting many steps into the future is a very strong claim. I find the few images provided very little evidence for that fact. I think a toy example where this is clearly the case because we know exactly the factors of variations and they are inferred by the algorithm automatically or some better ones are discovered by the algorithm, that would make it a very strong submission. Reed et al. have a few examples that could be adapted to this setting and the resulting representation, analyzed appropriately, would shed some light into whether this is the right approach for long term video prediction and what are the nobs that should be tweaked in this system. \n\nIn the current format, I think that the authors are on a good path and I hope my suggestions will help them improve their submission, but as it stands I recommend rejection from this conference.", "The paper presents a method for hierarchical future frame prediction in monocular videos. It builds upon the recent method of Villegas et al. 2017, which generates future RGB frames in two stages: in the first stage, it predicts a human body pose sequence, then it conditions on the pose sequence to predict RGB content, using an image analogy network. This current paper, does not constrain the first stage (high level) prediction to be human poses, but instead it can be any high level representation. Thus, the method does not require human annotations.\n\nThe method has the following two sub-networks:\n1) An image encoder, that given an RGB image, predicts a deep feature encoding. \n2) An LSTM predictor, that conditioned on the last observed frame's encoding, predicts future high level structure p_t. Once enough frames are generated though, it conditions on its own predictions. \n3) A visual analogy network (VAN), that given predicted high level structure p_t, it predicts the pixel image I_t, by applying the transformation from the first to tth frame, as computed by the vector subtraction of the corresponding high level encodings (2nd equation of the paper). 
VAN is trained to preserve parallelogram relationships in the joint RGB image and high level structure embedding.\n\nThe authors experiment with many different neural network connectivities, e.g., not constraining the predicted high level structure to match the encoder's outputs, constraining the predicted high level structure to match the encoder's output (EPEV), and training together the VAN and predictor so that VAN can tolerate mistakes of the predictor. Results are shown in H3.6m and the pushobject datasets, and are compared against the method of Villegas et all (INDIVIDUAL). The conclusion seems to be that not constraining the predicted high level structure to match the encoder’s output, but biasing the encoder’s output in the observed frames to represent ground-truth pose information, gives the best results. \n\nPros\n1) Interesting alternative training schemes are tested\n\nCons:\n1)Numerous English mistakes, e.g., ''an intelligent agents\", ''we explore ways generate\" etc.\n\n2) Equations are not numbered (and thus is hard to refer to them.) E.g., i do not understand the first equation, shouldn’t it be that e_{t-1} is always fixed and equal to the encoding of the last observed (not predicted) frame? Then the subscript cannot be t-1.\n\n3) In H3.6M, the results are only qualitative. The conclusions from the paper are uncertain, partly due to the difficulty of evaluating the video prediction results.\n\n\nGiven the difficulty of assessing the experimental results quantitatively (one possibility to do so is asking a set of people of which one they think is the most plausible video completion), and given the limited novelty of the paper, though interesting alternative architectures are tried out, it may not be suitable to be part of ICLR proceedings as a conference paper. \n", "The paper presents a method for predicting future video frames. The method is based on Villegas et al. (2017), with the main difference being that no ground truth pose is needed to train the network.\n\nThe novelty of the method is limited. It seems that there is very little innovation in terms of network architecture compared to Villegas et al. The difference is mainly on how the network is trained. But it is straightforward to train the architecture of Villegas et al. without pose -- just use any standard choice of loss that compares the predicted frame versus the ground truth frame. I don't see what is non-trivial or difficult about not using pose ground truth in training.\n\nOverall I think the contribution is not significant enough. \n", "We would like to thank the reviewers for taking the time to read our submission and make numerous suggestions for improvements. We uploaded a revision that fixes some English mistakes, improves clarity, and provides new quantitative results on Humans 3.6M that highlight the benefits of the proposed method. \n\nAll three reviewers view the proposed method as insufficiently novel and/or significant. The main motivation of our work is to enable training of hierarchical video prediction models on data where pose groundtruth labels are impractical to collect or unavailable. As Reviewer2 points out, the simple way to do so would be to take the architecture of Villegas et al. and train it using a loss that compares the predicted frame versus the ground truth frame. This is essentially the E2E method described in our submission, and we include this model as one of the baselines in our work. 
In our experiments, we found that this E2E baseline is not sufficient, and we show that the EPEV method-- which is the main novel contribution of our work--produces much better results on the humans dataset than the E2E method, as shown in the appendix.\n\nAn important concern was that our submission did not have quantitative evidence to support our claims that our results on the Humans 3.6M dataset were better than the CDNA method from Finn et al. To address this, we ran the following evaluations:\n\n1. We ran a pre trained person detector (ssd_mobilenet_v1_coco from the TensorFlow object detection model zoo) on the generated frames from the EPEV method and the CDNA method, on the Humans dataset. The person detector had a much higher average person-class confidence for EPEV frames, compared to CDNA. See section 4.3.1 of the most recent revision for details.\n\n2. We also did a evaluation with a service similar to Mechanical Turk where workers rated the EPEV method as more realistic 53.6% of the time, the CDNA method as more realistic 11.1% of the time and the generated videos as being about the same 35.3% of the time. See section 4.3.2 of the most recent revision for details.\n\nWe believe this provides evidence that the proposed method is both qualitatively and quantitatively better compared to Finn et al. at predictions made on the Humans 3.6M dataset.", "We also made the following revisions in a more recent update:\n\n1. We seperated the description of the method from Villegas et al from the description of our method. We believe this addresses the comment from reviewer 1 about clarifying our key contribution. See section 2 and section 3.\n\n2. We evaluated our method on a toy dataset and showed that it can make reasonable predictions 1,000 frames into the future about 97% of the time, where the CDNA baseline can only do this about 25% of the time. We believe this addresses the comment from reviewer 1 about needing evidence to support our claim that our method works well for long term prediction. See section 4.2." ]
[ 4, 4, 4, -1, -1 ]
[ 4, 4, 4, -1, -1 ]
[ "iclr_2018_rkmtTJZCb", "iclr_2018_rkmtTJZCb", "iclr_2018_rkmtTJZCb", "iclr_2018_rkmtTJZCb", "S1q0U7OMG" ]
iclr_2018_Hk-FlMbAZ
The Manifold Assumption and Defenses Against Adversarial Perturbations
In the adversarial-perturbation problem of neural networks, an adversary starts with a neural network model F and a point \bfx that F classifies correctly, and applies a \emph{small perturbation} to \bfx to produce another point \bfx′ that F classifies \emph{incorrectly}. In this paper, we propose taking into account \emph{the inherent confidence information} produced by models when studying adversarial perturbations, where a natural measure of ``confidence'' is $\|F(\bfx)\|_\infty$ (i.e., how confident $F$ is about its prediction?). Motivated by a thought experiment based on the manifold assumption, we propose a ``goodness property'' of models which states that \emph{confident regions of a good model should be well separated}. We give formalizations of this property and examine existing robust training objectives in view of them. Interestingly, we find that a recent objective by Madry et al. encourages training a model that satisfies well our formal version of the goodness property, but has a weak control of points that are wrong but with low confidence. However, if Madry et al.'s model is indeed a good solution to their objective, then good and bad points are now distinguishable and we can try to embed uncertain points back to the closest confident region to get (hopefully) correct predictions. We thus propose embedding objectives and algorithms, and perform an empirical study using this method. Our experimental results are encouraging: Madry et al.'s model wrapped with our embedding procedure achieves almost perfect success rate in defending against attacks that the base model fails on, while retaining good generalization behavior.
rejected-papers
The original paper was sloppy in its use of mathematical constructs such as manifolds, made assumptions that are poorly motivated (see review #2 for details), and presented an empirical evaluation that is preliminary. Based on the reviews, the authors have substantially revised the paper to try and address those issues by adding new theory, etc. Unfortunately, it is difficult to assess whether these revisions are sufficient to address the aforementioned issues without going through a second round of "full" review. I encourage the authors to use the reviewer comments to further improve the paper, and re-submit to a different venue.
train
[ "rkgspXWgf", "HyjN-YPlz", "S1ZQKlqxM", "HyyW9e0ZM", "SJ6IO2c7G", "Sk93E_N7z", "ryYl30GQM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "public" ]
[ "The authors argue that \"good\" classifiers naturally represent the classes in a classification as well-separated manifolds, and that adversarial examples are low-confidence examples lying near to one of these manifolds. The authors suggest \"fixing\" adversarial examples by projecting them back to the manifold, essentially by finding a point near the adversarial example that has high confidence.\n\nThere are numerous issues here, which taken together, make the whole story pretty unconvincing.\n\nThe term \"manifold\" is used very sloppily. To be fair, this is unfortunately common in modern machine learning. An actual manifold is a specific mathematical structure with specific properties. In ML, what is generally hypothesized is that the data (often per class) lives \"near\" to some \"low-dimensional\" structure. In this paper, even the low-dimensionality isn't used --- the \"manifold assumption\" is used as a stand-in for \"the regions associated with different classes are well-separated.\" (This is partially discussed in Section 6, where the authors point out correctly that the same defense as used here could be used with a 1-nn model.) This is fine as far as it goes, but the paper refs Basri & Jacobs 2016 multiple times as if it says anything relevant about this paper: Basri & Jacobs is specifically about the ability of deep nets to fit data that falls on (actual, mathematical) manifolds. This reference doesn't add much to the present story.\n\nThe essential argument of the paper rests on the \"Postulate: (A good model) F is confident on natural points drawn from the manifolds, but has low confidence on points outside of the manifolds.\" \n\nThis postulate is sloppy and speculative. For instance, taken in its strong form, if believe the postulate, then a good model:\n1. Can classify all \"natural points\" from all classes with 100% accuracy.\n2. Can detect adversarial points with 100% accuracy because all high-confidence points are correct classifications and all low-confidence points are adversarial.\n3. All adversarial examples will be low-confidence.\n\nPoint 1 makes it clear that no good model F fully satisfying the postulate exists --- models never achieve 100% accuracy on difficult real-world distributions. But the method for dealing with adversarial examples seems to require Points 2 and 3 being true.\n\nTo be fair, the paper more-or-less admits that how true these points are is not known and is important. Nevertheless, I think this paper comes pretty close to arguing something that I *think* is not true, and doesn't do much to back up its argument. Because of the quality of the writing (generally sloppy), it's hard to tell, but I believe the authors are basically arguing that:\na. You can generally easily detect adversarial points because they are low confidence.\nb. If you go through a procedure to find a point near your adversarial point that is high-confidence, you'll get the \"correct\" (or perhaps \"original\") class back.\n\nI think b follows from a, but a is extremely suspect. I do not personally work in adversarial examples, and briefly looking at the literature, it seems that most authors *do* focus on how something is classified and not its confidence, but I don't think it's *that* hard to generate high-confidence adversarial examples. Early work by Goodfellow et al. (\"Explaining and Harnessing Adversarial Examples\", Figure 1, shows an example where the incorrect classification has very high confidence. The present paper only uses Carlini-Wagner attacks. 
From a read of Carlini-Wagner, it seems they are heavily concerned with finding *minimal* perturbations to achieve a given misclassification; this will of course produce low-confidence adversaries, but I see no reason why this is a general property of all adversarial examples.\n\nThe experiments are weak. I applaud the authors for mentioning the experiments are very preliminary, but that doesn't make them any less weak. \n\nWhat are we to make of the one image discussed at the end of Section 5 and shown in Figure 1? The authors note that the original image gives low-confidence for the correct class. (Does this mean that the classifier isn't \"good\"? Is it evidence against some kind of manifold assumption?) The authors note the adversarial category has significantly higher confidence, and say \"in this case, it seems that it is the vagueness of the signals/data that lead to a natural difficulty.\" But the signals and data are ALWAYS vague. If they weren't, machine learning would be easy. This paper proposes something, looks at a tiny number of examples, and already finds a counterexample to the theory. What's the evidence *for* the theory? \n\nA lot of writing is given over to how this method is \"semantic\", and I just don't buy it. The connection to manifolds is weak. The basic argument here is really \"(1) If our classifiers produce smooth well-separated high-confidence regions, (2) then we can detect adversaries because they're low-confidence, and (3) we can correct adversaries by projecting them back to high-confidence.\" (1) seems vastly unlikely to me based on all my experience: neural nets often get things wrong, they often get things wrong with high confidence, and when they're right, the confidence is at least sometimes low. The authors use a sloppy postulate about good models and so could perhaps argue I've never seen a good model, but the methods of this paper require a good model. (2) seems to follow logically from (1). (3) is also suspect --- perturbations which are *minimal* can be corrected as this paper does (and Carlini-Wagner attacks are minimal by design), but there's no reason to expect general perturbations to be minimal.\n\nThe writing is poor throughout. It's generally readable, but the wordings are often odd, and sometimes so odd it's hard to tell what was meant. For instance, I spent awhile trying to decide whether the authors assumed common classifiers are \"good\" (according to the postulate) or whether this paper was about a way to *make* classifiers good (I eventually decided the former).", "The manuscript proposes two objective functions based on the manifold assumption as defense mechanisms against adversarial examples. The two objective functions are based on assigning low confidence values to points that are near or off the underlying (learned) data manifold while assigning high confidence values to points lying on the data manifold. In particular, for an adversarial example that is distinguishable from the points on the manifold and assigned a low confidence by the model, is projected back onto the designated manifold such that the model assigns it a high confidence value. The authors claim that the two objective functions proposed in this manuscript provide such a projection onto the desired manifold and assign high confidence for these adversarial points. 
These mechanisms, together with the so-called shell wrapper around the model (a deep learning model in this case) will provide the desired defense mechanism against adversarial examples.\n\nThe manuscript at the current stage seems to be a preliminary work that is not well matured yet. The manuscript is overly verbose and the arguments seem to be weak and not fully developed yet. More importantly, the experiments are very preliminary and there is much more room to deliver more comprehensive and compelling experiments.", "\n1) Summary\nThis paper proposes a new approach to defending against adversarial attacks based on the manifold assumption of natural data. Specifically, this method takes inputs (possibly coming from an adversarial attack), project their semantic representation into the closest data class manifold. The authors show that adversarial attack techniques can be with their algorithm for attack prevention. In experiments, they show that using their method on top of a base model achieves perfect success rate on attacks that the base model is vulnerable to while retaining generalizability.\n\n\n2) Pros:\n+ Novel/interesting way of defending against adversarial attacks by taking advantage of the manifold assumption.\n+ Well stated formulation and intuition.\n+ Experiments validate the claim, and insightful discussion about the limitations and advantages of the proposed method.\n\n3) Cons:\nNumber of test examples used too small:\nAs mentioned in the paper, the number of testing points is a weakness. There needs to be more test examples to make a strong conclusion about the method’s performance in the experiments.\n\nComparison against other baselines:\nEven though the method proposes a new approach for dealing with adversarial attacks using Madry et al. as base model, it would be useful to the community to see how this method works with other base models.\n\nAlgorithm generalizability:\nAs mentioned by the authors, their method depends on assumptions of the learned embeddings by the model being used. This makes the method less attractive for people that may be interested in dealing with adversarial examples in, for example, reinforcement learning problems. Can the authors comment on this?\n\nAdditional comments:\nThe writing needs to be polished.\n\n\n4) Conclusion:\nOverall, this is a very interesting work on how to deal with adversarial attacks in deep learning, while at the same time, it shows encouraging results of the application of the proposed method. The experimental section could improve a little bit in terms of baselines and test examples as previously mentioned, and also the authors may give some comments on if there is a simple way to make their algorithm not depend on assumptions of the learned embeddings.\n", "We appreciate the insightful comments. Our ideas seem controversial and have received very different opinions. Given the time constraint we want to give first responses for the most important concerns. We have also submitted a first revision, which gives theoretical justification of our ideas. We have also empirically validate that Madry et al.'s model is better.\n\n1. What is this paper about?\n\nIn short, we want to propose taking into account the confidence information produced by models when studying adversarial perturbations. Manifold assumption is important for us to arrive at the conclusion that \"A good model must have well-separated confident regions.\" We view this as a goodness property that gives a clue to adversarial perturbation problem. 
Specifically, this paper: (1) proposes a goodness property of a model, (2) argues that one may still have to handle low-confidence points with a good model, and (3) identifies a good model from literature and wraps it with embedding and evaluates the result.\n\n2. What is claimed lacks backup, and is unlikely to hold for neural networks.\n\nInterestingly, we can now prove that Madry et al.'s objective encourages a good model. Specifically, Section 4 of the revision gives formalizations of the goodness property and analyzes Madry et al.'s objective. We prove: (1) Even with a somewhat good solution to the objective, high-confidence points with different predictions will be well separated soon as confidence increases. (2) Controlling low-confidence points with wrong predictions is the core challenge to a good solution to the objective. These results are a bit unexpected, and corroborate our intuitions derived from the manifold assumption.\n\nWe are also empirically validating that Madry et al.'s robust model is good: Table 1 of the revision compares two models that share the same architecture where one is trained using the robustness objective and one is trained without robustness objective. We modify CW attack to find attacks of confidence >= 0.9. On the first 30 random samples, only 2 confident attacks are found on the robust model, but 29 confident attacks are found on the natural model! This gives evidence that Madry et al.'s robust model is good.\n\n3. (R3) Whether this paper assumes common classifiers are good, or is about a way to make classifiers good?\n\nPlease refer to point 1: Neither is the goal of this this paper. Existing common classifiers are unlikely to be good, and we seek through literature and decide to use Madry et al.'s model. We apologize for the confusion.\n\n4. (R3) Weak connection with manifold assumption and Basri-Jacobs.\n\nFirst, manifold assumption is critical for us to propose considering confidence information and the goodness property. The low dimensionality in manifold assumption is important to our intuition that handling low-confidence points are more challenging than separating confident regions.\n\nSecond, manifold assumption now manifests in our analysis. In Definition 5, if the data generating distribution has entangled classes, then generalization and robustness are in contradiction: If model generalizes well with correct and confident predictions, then in a small neighborhood there are two points which are predicted with different labels and each with high confidence. This implies poor separation and thus poor robustness.\n\nFinally, there is a factual misunderstanding about Basri-Jacobs. Basri-Jacobs do not only study points that fall on the manifolds, but also they study those nearby the manifolds. This is mentioned in their abstract, and the entire Section 4 is about this. Their work directly inspires our thought experiment and defenses.\n\n5. (R3) CW attacks are about minimal perturbations that change labels, so they only produce low-confidence attacks.\n\nFirst, existing attacks (including FGSM by Goodfellow et al.) all concern about changing labels. CW attacks are by far the strongest known attacks and subsume previous proposals.\n\nSecond, models used in Goodfellow et al. are weak (much smaller than Madry et al.'s and is not trained with robustness). Thus they are far from being good, and so FGSM found confident attacks with minimal perturbations. 
Our results indicate that it is much harder to generate high-confidence attacks on Madry et al.'s model.\n\nFinally, the fact that CW attacks so far only find low-confidence attacks on Madry et al.'s model, in contrast to that FGSM finds high-confidence attacks on weak models, gives further evidence that Madry et al.'s model is good.\n\n6. (R1) How about reinforcement learning?\n\nWe believe that the same underlying principle applies: One needs to take into account the confidence information of a model when dealing with adversarial perturbations. Specifically, since reinforcement learning is modeled as a Markov decision process, the confidence information naturally exists there (yet no work has taken it into account in dealing with adversarial noise!). We will give more details in a separate reply.", "We have submitted a revision of our draft. Three most important changes:\n\n1. Theoretically, we can now prove that Madry et al.'s objective encourages well-separated confident regions. Especially, predictions of different classes with confidence p will be well separated as soon as p increases. See Proposition 1 in Section 4 of the new draft. Moreover, our analysis also reveals that Madry et al.'s objective may have a weak control of low-confidence but wrong predictions in the sense that there might be many such points (see Proposition 2 in Section 4 of the new draft and discussions thereafter).\n\nThis new theoretical connection/evidence with Madry et al.'s paper is surprising to us, because it exactly matches the intuitions we developed from the manifold assumption.\n\n2. Empirically, we have conducted a new experiment (due to time constraint this is the fastest experiment we can finish before revision deadline) which compares Madry et al.'s robust model with the natural model (which has the same architecture but is trained without any robustness objective) in terms of the separation of confident regions. The results are encouraging and conform to our theoretical analysis very well. In short, it is much harder to find confident attacks on the robust model than the natural model (where confident attacks almost always exist). Please see paragraph \"Comparing models using the goodness property\" in Section 6.\n\n3. We have restructured the paper a bit so as to decouple \"separation of confident regions\" from \"being confident on natural manifolds.\" (thus by manifold assumption confident regions must be separated). This decoupling makes our goal more clear (constructing a robust model requires consideration of separating confident regions, and embedding can be useful in fixing low-confidence errors), with the caveat that being able to generalize requires a learner to learn the manifolds.", "Thanks for the pointer. We will consider adding discussions and experiments based on this synthetic data set in future drafts. Some preliminary thoughts here:\n\n1. Our method is not supposed to work with an arbitrary base model, but only good ones where \"confident regions\" are well separated. Taking this view, the thing is that you can use confidence to distinguish good and bad points. For example, even though average distance to the nearest \"on manifold data\" and \"off manifold data\" are comparable, we can still distinguish them because \"off/far-away from manifold data\" because good models have low confidence on them.\n\n2. Perhaps most surprisingly, we find that good models already follow from existing training objectives. 
In particular, Madry et al.'s recent robust model already encourage good separation for confident points, but the control for low-confidence points is weak. One can prove such facts using tools as simple as Markov's inequality.\n\n3. That said, we think it is good to try robust objective on the synthetic dataset and see how well confidence really works (note that we also need a good model architecture in order to fit well the geometric structure).", "This paper may be relevant to the current discussion: https://openreview.net/forum?id=SyUkxxZ0b.\nIt explores adversarial examples on a synthetic dataset where the data manifold is defined mathematically (classifying between two concentric spheres). For this dataset it is mathematically defined what is on and off the data manifold (just based on p(x)). The authors can find local adversarial errors both on and off the data manifold, the search space of a small L_2 ball will contain both kinds. In fact the average distance to the nearest \"on the data manifold\" and \"off the data manifold\" adversarial examples are comparable. \n\nThe basic conclusion of this paper is that, at least for this dataset, the reason local errors exist near most correctly classified data points is due to the geometry of the manifold itself, and not some other issue of the neural network classifiers. In fact, the authors can prove a bound relating the amount of test error to the average distance to nearest error, which implies that any model with non-zero test error must have local adversarial errors. It might be useful for the authors to discuss their defense proposal in the context of the sphere dataset given the simplicity of this dataset. It could help clarify what property of either the dataset, or the models the authors are hoping to capitalize on." ]
[ 3, 4, 5, -1, -1, -1, -1 ]
[ 3, 3, 3, -1, -1, -1, -1 ]
[ "iclr_2018_Hk-FlMbAZ", "iclr_2018_Hk-FlMbAZ", "iclr_2018_Hk-FlMbAZ", "iclr_2018_Hk-FlMbAZ", "iclr_2018_Hk-FlMbAZ", "ryYl30GQM", "iclr_2018_Hk-FlMbAZ" ]
iclr_2018_rJbs5gbRW
On the Generalization Effects of DenseNet Model Structures
Modern neural network architectures take advantage of increasingly deeper layers, and various advances in their structure to achieve better performance. While traditional explicit regularization techniques like dropout, weight decay, and data augmentation are still being used in these new models, little about the regularization and generalization effects of these new structures has been studied. Besides being deeper than their predecessors, could newer architectures like ResNet and DenseNet also benefit from their structures' implicit regularization properties? In this work, we investigate the skip connection's effect on a network's generalization features. Through experiments, we show that certain neural network architectures contribute to their generalization abilities. Specifically, we study the effect that low-level features have on generalization performance when they are introduced to deeper layers in DenseNet, ResNet as well as networks with 'skip connections'. We show that these low-level representations do help with generalization in multiple settings when both the quality and quantity of training data are decreased.
rejected-papers
The paper appears unfinished in many ways: the experiments are preliminary, the paper completely ignored a large body of prior work on the subject, and the presentation needs substantial improvements. The authors did not provide a rebuttal. I encourage the authors to refrain from submitting unfinished papers such as this one in the future, as it unnecessarily increases the load on a review system that is already strained.
train
[ "H1YnyaKgG", "r1OmdAYxz", "r1bLTRKxG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The ms analyses a number of simulations how skip connections effect the generalization of different network architectures. The experiments are somewhat interesting but they appear rather preliminary. To indeed show the claims made, error bars in the graphs would be necessary as well will more careful and more generic analysis. In addition clear hypotheses should be stated. \nThe fact that some behaviour is seen in MNIST or CIFAR in the simulations does not permit conclusion for other data sets. Typically extensive teacher student simulations are required to validly make points. Also formally the paper is not in good shape. ", "This paper analyzes the role of skip connections with respect to generalization in recent architectures such as ResNets or DenseNets. The authors perform an analysis of the performance of ResNets and DenseNets under data scarcity constraints and noisy training samples. They also run some experiments assessing the importance of the number of skip connections in such networks.\n\nThe presentation of the paper could be significantly improved. The motivation is difficult to grasp and the contributions do not seem compelling.\n\nMy main concern is about the contribution of the paper. The hypothesis that skip connections ease the training and improve the generalization has already been highlighted in the ResNet and DenseNet paper, see e.g. [a].\n\n[a] https://arxiv.org/pdf/1603.05027.pdf\n\nMoreover, the literature review is very limited. Although there is a vast existing literature on ResNets, DenseNets and, more generally, skip connections, the paper only references 4 papers. Many relevant papers could be referenced in the introduction as examples of successes in computer vision tasks, identity mapping initialization, recent interpretations of ResNets/DensetNets, etc.\n\nThe title suggests that the analysis is performed on DenseNet architectures, but experiments focus on comparing both ResNets and DenseNets to sequential convolutional networks and assessing the importance of skip connections.\n\nIn section 3.1. (1st paragraph) proposes adding noise to groundtruth labels; however, in section 3.1.2,. it would seem that noise is added by changing the input images (by setting some pixel channels to 0). Could the authors clarify that? Wouldn’t the noise added to the groundtruth act as a regularizer?\n\nIn section 4, the paper claims to investigate the role of skip connections in vision tasks. However, experiments are performed on MNIST, CIFAR100, a curve fitting problem and a presumably synthetic 2D classification problem. Performing the analysis on computer vision datasets such as ImageNet would be more compelling to back the statement in section 4.\n", "The paper studies the effect of different network structures (plain CNN, ResNet and DenseNet). This is an interesting line of research to pursue, however, it gives an impression that a large amount of recent work in this direction has not been considered by the authors. The paper contains ONLY 4 references. \n\nSome references that might be useful to consider in the paper:\n- K. Greff et. al. Highway and Residual Networks learn Unrolled Iterative Estimation.\n- C. Zang et. al. UNDERSTANDING DEEP LEARNING REQUIRES RETHINKING GENERALIZATION\n- Q. Liao el. al. Bridging the Gaps Between Residual Learning, Recurrent Neural Networks and Visual Cortex\n- A. Veit et. al. Residual Networks Behave Like Ensembles of Relatively Shallow Networks\n- K. He at. 
Al Identity Mappings in Deep Residual Networks\n\nThe writing and the structure of the paper could be significantly improved. From the paper, it is difficult to understand the contributions. From the ones listed in Section 1, it seems that most of the contributions were shown in the original ResNet and DenseNet papers. Given, questionable contribution and a lack of relevant citations, it is difficult to recommend for acceptance of the paper. \n\nOther issues:\nSection 2: “Skip connection …. overcome the overfitting”, could the authors comment on this a bit more or point to relevant citation? \nSection 2: “We increase the number of skip connections from 0 to 28”, it is not clear to me how this is done.\nSection 3.1.1 “deep Linear model”, what the authors mean with this? Multiple layers without a nonlinearity? Is it the same as Cascade Net?\nSection 3.2 From the data description, it is not clear how the training data was obtained. Could the authors provide more details on this?\nSection 3.2 “…, only 3 of them are chosen to be displayed…”, how the selection process was done?\nSection 3.2 “Instead of showing every layer’s output we exhibit the 3th, 5th, 7th, 9th, 11th, 13th and the final layer’s output”, according to the description in Fig. 7 we should be able to see 7 columns, this description does not correspond to Fig. 7.\nSection 4 “This paper investigates how skip connections works in vision tasks…” I do not find experiments with vision datasets in the paper. In order to claim this, I would encourage the authors to run tests on a CV benchmark dataset (e. g. ImageNet)\n" ]
[ 2, 3, 3 ]
[ 5, 4, 4 ]
[ "iclr_2018_rJbs5gbRW", "iclr_2018_rJbs5gbRW", "iclr_2018_rJbs5gbRW" ]
iclr_2018_SJCq_fZ0Z
Sparse Attentive Backtracking: Long-Range Credit Assignment in Recurrent Networks
A major drawback of backpropagation through time (BPTT) is the difficulty of learning long-term dependencies, coming from having to propagate credit information backwards through every single step of the forward computation. This makes BPTT both computationally impractical and biologically implausible. For this reason, full backpropagation through time is rarely used on long sequences, and truncated backpropagation through time is used as a heuristic. However, this usually leads to biased estimates of the gradient in which longer term dependencies are ignored. Addressing this issue, we propose an alternative algorithm, Sparse Attentive Backtracking, which might also be related to principles used by brains to learn long-term dependencies. Sparse Attentive Backtracking learns an attention mechanism over the hidden states of the past and selectively backpropagates through paths with high attention weights. This allows the model to learn long term dependencies while only backtracking for a small number of time steps, not just from the recent past but also from attended relevant past states.
rejected-papers
The authors propose to use attention over past time steps to try and solve the gradient flow problem in learning recurrent neural networks. Attention is performed over a subset of past states by a heuristic that boils down to selecting the best time-steps. I agree with the authors that they offer a lot of comparisons, but like the reviewers, I am inclined to find the experiments not very convincing of the arguments they are attempting to make. The model that they propose has similarities to seq2seq in that they use attention to pass more information in the forward pass; in a sense this is a seq2seq model with the same encoder and decoder, and there are parallels to self-attention. The model also has similarities to clockwork RNNs and other skip connection methods. However, the experiments offered do not tease out these effects. It is unsurprising that a fixed size neural network is unable to do a long copy task perfectly, but an attention model can. What would have been more interesting would have been to explore if other RNN models could have done so. The experiments on pMNIST aren't really compelling as the baselines are far from SOTA (example: https://arxiv.org/pdf/1606.01305.pdf report 0.041 error rate (95.9% test acc) with LSTMs and regularization). Text8 also shows worse results than full BPTT on LSTM. If BPTT is consistently better than this method, it defeats the argument that gradient explosion and forgetting over long sequences are a problem for RNNs (one of the motivations offered for this attention model).
train
[ "S1mm_s0NM", "S1GvBVDWz", "SySGrVOVM", "rkL2eNONz", "H1qBHY5eM", "SJppHuogG", "r1JAdgsQf", "rJoYueomM", "SJtYjJsmf", "rkIdYyoXf" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "We thank the reviewer again for reviewing our paper. We would like to ask the reviewer if there is any further questions regarding our rebuttal, especially the updated MNIST results and the comparisons with full self-attention.", "This work proposes Sparse Attentive Backtracking, an attention-based approach to incorporating long-range dependencies into RNNs. Through time, a “macrostate” of previous hidden states is accumulated. An attention mechanism is used to select the states within the macro-state most relevant to the current timestep. A weighted combination of these previous states is then added to the hidden state as computed in the ordinary way. This construction allows gradients to flow backwards quickly across longer time scales via the macrostate. The proposed architecture is compared against LSTMs trained with both BPTT and truncated BPTT.\n\nPros:\n- Novel combination of recurrent skip connections with attention.\n- The paper is overall written clearly and structured well.\n \n\nCons:\n- The proposed algorithm is compared against TBPTT but it is unclear the extent to which it is solving the same computational issues TBPTT is designed to solve.\n- Design decisions, particularly regarding the attention computation, are not fully explained.\n\nSAB, like TBPTT, allows for more frequent updates to the parameters. However, unlike TBPTT, activations for previous timesteps (even those far in the past) need to be maintained since gradients could flow backwards to them via the macrostate. Thus SAB seems to have higher memory requirements than TBPTT. The empirical results demonstrate that SAB performs slightly better than TBPTT for most tasks in terms of accuracy/CE, but there is no mention of comparing the memory requirements of each. Results demonstrating also whether SAB trains more quickly than the LSTM baselines would be helpful.\n\nThe proposed affine form of attention does not appear to actually represent the salience of a microstate and a given time. The second term of the RHS of equation 1 (w_2^T \\hat{h}^{(t)}) is canceled out in the subtraction in equation 2, since this term is constant for all i. Thus the attention weights for a given microstate are constant throughout time, which seems undesirable.\n\nThe related work discusses skip connections in the context of convolutional nets, but doesn’t mention previous works incorporating skip connections into RNN architectures, such as [1], [2], or [3].\n\nOverall, the combination of recurrent skip connections and attention appears to be novel, but experimental comparisons to other skip connection RNN architectures are missing and thus it is not clear how this work is positioned relative to previous related work. \n\n[1] Lin, Tsungnan, et al. \"Learning long-term dependencies in NARX recurrent neural networks.\" IEEE Transactions on Neural Networks 7.6 (1996): 1329-1338.\n[2] Koutnik, Jan, et al. \"A clockwork rnn.\" International Conference on Machine Learning. 2014.\n[3] Chang, Shiyu, et al. \"Dilated recurrent neural networks.\" Advances in Neural Information Processing Systems. 2017.\n\nEDIT: I have read the updated paper and the author's rebuttal. I am satisfied with the update to the attention weight formulation. Overall, I still feel that the proposed SAB approach represents a change to the model structure via skip connections. Therefore SAB should also be compared against other approaches that use skip connections, and not just BPTT / TBPTT, which operate on the standard LSTM. Thus to me the experiments are still lacking. 
However, I think the approach is quite interesting and as such I am revising my rating from 4 to 5.", "We’d like to thank you again for your review of the paper. We have updated the paper with your suggestions (including better biological motivation, updated MNIST results, and comparison with self-attention trained using full BPTT). Would you have any other questions regarding the rebuttal? ", "Thank you for your review of the paper and the finding of the cancellation problem in the computation of attention weights! We have eliminated said problem and consequently obtained improved results which significantly outperforms TBPTT and in some cases full BPTT. Lastly, we have noted all of the foregoing in the updated manuscript.\n\nThanks again for pointing out the shortcomings. Do you have any more questions about the rebuttal, especially as regards the attention mechanism and the strength of the experimental results against TBPTT?\n", "The paper proposes sparse attentive backtracking, essentially an attention mechanism that performs truncated BPTT around a subset of the selected states.\n\nThe early claims regarding biological plausibility seem stretched, at least when applying to this work. The \"waiting for life to end to learn\" and student study / test analogies were not helpful from an understanding point of view and indeed raised more questions than insight. The latter hippocampal discussion was at least more grounded.\n\nWhile a strong motivator for this work would be in allowing for higher efficiency on longer BPTT sequences, potentially capturing longer term dependencies, this aspect was not explored to this reviewer's understanding. To ensure clarity, in the character level PTB or Text8 examples, SAB's previous attention was limited to sequences of T = 100 or 180 respectively?\nLimiting the truncation to values below the sequence length for the LSTM baselines also appears strange given the standard within the literature is setting sequence length equal to BPTT length. I presume this was done to keep the number of optimizer updates equal?\nAnother broader question is whether longer term dependencies could be caught at all given the model doesn't feature \"exploration\" in the reinforcement learning sense, especially for non-trivial longer term dependencies.\n\nWhen noting the speed of generating a sparsely sourced summary vector (equation 3), it is worth pointing out that weighted summation over vectors in traditional attention is not a limiting factor as it's a very rapid element-wise only operation over already computed states.\n\nFor the experiments, I was looking for comparisons to attention over the \"LSTM (full BPTT)\" window. This experiment would provide an upper bound and an understanding of how much of SAB's improvement may be as a result of simply adding attention to the underlying LSTM models. Even a simpler and fast (cuDNN compatible) attention mechanism such as [a single cuDNN LSTM layer over the input, an attentional mechanism over the results of the first layer (masked to avoid observing timesteps from the future), summed, and then passed into a softmax] would be informative.\n\nFinally, whilst not a deal breaker for introducing new techniques, stronger LSTM baselines help to further underline the efficacy of the technique. 
For sequential MNIST, a relatively small dataset, previous papers have LSTM models that achieve 98.2% test accuracy (Arjovsky et al, https://arxiv.org/abs/1511.06464) and the IRNN example included as part of the Keras framework achieves 93% out of the box.\n\nNoting similarities to the Transformer architecture and other similar architectures would also be useful. Both are using attention to minimize the length of a gradient's path, though in Transformers it eliminates the RNN entirely. If a Transformer network performed a k=5 convolution or limited RNN run to produce the initial inputs to the Transformer, it would share many similarities to SAB, though without the sparsity.", "re. Introduction, page 2: Briefly explain here how SAB is different from regular Attention?\n\nGood paper. There's not that much discussion of the proposed SAB compared to regular Attention, perhaps that could be expanded. Also, I suggest summarizing the experimental findings in the Conclusion.", "Q - \"The proposed affine form of attention does not appear to actually represent the salience of a microstate and a given time. The second term of the RHS of equation 1 (w_2^T \\hat{h}^{(t)}) is canceled out in the subtraction in equation 2, since this term is constant for all i. Thus the attention weights for a given microstate are constant throughout time, which seems undesirable.\"\n\nA - We greatly thank the reviewer for their sharp-eyed identification of this problem. The reviewer is almost entirely correct:\n\nAlthough the raw attention weights are not constant, the computed sparsified attention weights come dangerously close to being so due to the excessive linearity of the attention and sparsification mechanisms.\n\nThe sparsified attention weights are not perfectly constant; However, they will change at most as often as the top-kth selected microstate changes, which happens at most as often as a new microstate gets added to the macro-state, and potentially much more rarely than that.\n\nMoreover, upon further review, we have identified a further linear collapse in the attention mechanism, which caused the attention weights to only be a linear function of the difference between the given microstate and the top-kth microstate.\n\nThis is problematic because in principle, if a present hidden state is very similar to a memorized microstate, the attention mechanism should accord it considerable weight, but calculating the attention weight only as a linear function of microstate differences would ignore them by design.\n\nWe have modified the attention mechanism so that it is now \n- a concatenation of the hidden and microstate,\n- a linear layer,\n- a hyperbolic tangent non-linearity, and\n- a linear layer again.\nThis prevents the linear collapses, and simultaneously gives us both increased accuracies and decreased time to convergence across all tasks. We will update the manuscript with our new results.\n\nQ - \"Overall, the combination of recurrent skip connections and attention appears to be novel, but experimental comparisons to other skip connection RNN architectures are missing...\"\n\nA - Our work is orthogonal to the work on skip connections in RNNs. SAB is an attention-controlled way of creating a skip connection between two remote points in time in order to avoid the vanishing or exploding gradient issues that plague the learning of long-term dependencies by RNNs. The amount of attention here is controlled by the extent to which an old microstate 'matches' (in some learned way) the current microstate. 
Skip connections are not new (proposed as early as 1996 with the NARX networks of Lin et al), but using an attention mechanism to select which time steps to pair together and using this to focus the backprop to just a few past time steps is new. It would be interesting future work to see the effect of using different types of skip connections in RNNs.", "We thank the reviewer for the feedback and comments. \n\nQ - \"Cons:\n- The proposed algorithm is compared against TBPTT but it is unclear the extent to which it is solving the same computational issues TBPTT is designed to solve.\"\n\nA - Taking “computational issues” to refer to the time- and memory-complexity of the algorithms, we would like to clarify that both time-wise and memory-wise, SAB is more expensive than (T)BPTT. However, unlike full BPTT (which is an inherently sequential algorithm), SAB training is parallelizable given the right hardware (GPU compatibility), which could make SAB as fast as TBPTT. In addition to that, SAB solves an optimization issue: Direct gradient flow from a timestep T_future to dynamically-determined, relevant timesteps T_past potentially arbitrarily far away in the past.\n\nBy way of comparison, BPTT does permit gradient flow from any future timestep to any past timestep, but the gradient must flow through T_future - T_past timesteps. In order for any given stream of gradient information to reach arbitrarily far in the past through a finite-capacity channel (Presently, a fixed-size hidden-state vector of 32-bit floating-point numbers), it must compete with and, crucially, survive against other streams all the way along the path backward from T_future to T_past. These other gradient information streams may:\n - Be short-range or long-range\n - Be fully contained within, partially overlapping with or wholly contain the range [T_past, T_future]\n - Concern a greater or lower number of hidden states\n - Require more or less precision in each hidden state.\nThe survival probability of a gradient information stream therefore decays exponentially with the number of hops it must make in BPTT and the number of competing streams.\n\nTBPTT, due to its truncation of gradient flow, is by design unable to sustain a gradient information stream over a timespan greater than the truncation length. The computational benefit of truncation is parallelizability of the backward pass of the RNN.\n\nQ - \"- Design decisions, particularly regarding the attention computation, are not fully explained.\"\n\nA - Thanks for pointing this out. We agree that the attention mechanism used in the submitted version was not ideal and we have now implemented a slightly different formulation of the sparse attention mechanism, leading to improved results in all tasks. A more detailed description of the problem we have identified and solved, as well as explanations for the design choices, have been added.\n\nQ - \"The empirical results demonstrate that SAB performs slightly better than TBPTT for most tasks in terms of accuracy/CE, but there is no mention of comparing the memory requirements of each. \"\n\nA - Our empirical results for SAB, full BPTT and truncated BPTT are summarized in Tables 1- 8. Broadly speaking, in the Copying and Permuted MNIST tasks, SAB outperforms full BPTT. For the Adding task, PennTree Bank and Text8 language modeling tasks, SAB significantly outperforms TBPTT.\n\n- Copying: SAB solves the task for lengths up to 300, and performs much better than full BPTT. 
For length = 300, SAB accuracy is 98.9% (CE 0.048), whereas full BPTT achieves 35.9% (CE 0.197). Since full BPTT performs much better than TBPTT, SAB significantly outperforms TBPTT of much longer truncation lengths.\n\n- Adding: SAB performs significantly better compared to TBPTT of much longer truncation length. For length = 400, SAB of truncation length = 10 significantly outperforms TBPTT with truncation length = 100.\n\n- PennTree Bank Language Modeling: SAB performs close to full BPTT (BPC of 1.48 vs 1.47; lower is better). SAB (trunc. length = 5) significantly outperforms TBPTT (trunc. length = 20) (BPC 1.48 vs 1.51)\n\n- Text8 Language Modeling: SAB performs close to full BPTT (valid BPC 1.56 vs 1.54), and significantly outperforms TBPTT (BPC 1.56 vc 1.64).\n\n- Permuted MNIST: SAB outperforms full BPTT (accuracy 92.2 vs 91.2). Typically, full BPTT outperforms TBPTT, therefore SAB outperforms TBPTT.\n\nThe extra memory required beyond LSTM's basics (for both full BPTT and TBPTT) is the attention mechanism, which is (2h) * (t**2/2k) * (4 bytes), where h is the size of the hidden states and k is the tok k attention selected.", "We thank you for your positive review! \n\nThanks for pointing out the comparison between SAB type attention and regular forms of attention. We are adding a small discussion on the comparison of SAB type attention and regular attention.\n", "We thank the reviewer for the feedback and comments.\n\n\"The early claims regarding biological plausibility seem stretched,...\"\n\nThanks for pointing this out. Our examples did not illustrate the principles well, and we have revised the respective sections to make them more concise.\n\n\"While a strong motivator for this work would be in allowing for higher efficiency on longer BPTT sequences, potentially capturing longer term dependencies, ...\"\n\nOur experiment on MNIST has a sequence length of 784, which is a good test for long term dependencies. As for the language modeling tasks, it is a common setup to use T=100 for PTB and T =180 for Text8. We followed the same setup in order to have a comparable baseline to other approaches, such as [1], [2], [3]. \n1. Ha, David, Andrew Dai, and Quoc V. Le. \"HyperNetworks.\" arXiv preprint arXiv:1609.09106 (2016).\n2. Cooijmans, Tim, et al. \"Recurrent batch normalization.\" arXiv preprint arXiv:1603.09025 (2016).\n3. Krueger, David, et al. \"Zoneout: Regularizing RNNs by randomly preserving hidden activations.\" arXiv preprint arXiv:1606.01305 (2016). \n\n\"Limiting the truncation to values below the sequence length for the LSTM baselines also appears strange...\"\n\nLimiting truncation to values below the sequence length is used for Truncated Backpropagation Through Time (TBPTT), which is a technique commonly used to alleviate the computational complexity of longer sequences [4, 5].\n4. Saon, George, et al. \"Unfolded recurrent neural networks for speech recognition.\" Fifteenth Annual Conference of the International Speech Communication Association. 2014.\n5. Sak, Haşim, Andrew Senior, and Françoise Beaufays. \"Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition.\" arXiv preprint arXiv:1402.1128 (2014).\n\n\n\"Another broader question is whether longer term dependencies could be caught at all ...\"\n\nWe agree that other mechanisms could be used to foster exploration. In our case, stochastic gradient descent and initial weights which lead to no strong preference do the exploration for us. 
These methods work well with SAB and they are well-practiced approaches in Deep Learning. Figure 3 in the appendix shows how the attention weight learns to focus on the correct time step as training proceeds.\n\n\"For the experiments, I was looking for comparisons to attention over the \"LSTM (full BPTT)\" window...\"\n\nThanks for pointing this out, this is indeed a nice set of experiments to run. We are currently running those experiments, and will update the paper with the results for LSTM with self-attention. \nNote that LSTM with self-attention requires significantly more GPU memory than SAB, such that the maximum sequence length we can simulate is limited by hardware constraints.\n\n\"Finally, whilst not a deal breaker for introducing new techniques, stronger LSTM baselines help to further underline the efficacy of the technique...\"\n\nUnfortunately, the experiment was labeled incorrectly. In fact, we run the permuted MNIST experiment, not the sequential MNIST experiment as described. We have fixed this error in the revised version. Our baseline for permuted MNIST is similar to the published baselines.\n\n\"Noting similarities to the Transformer architecture and other similar architectures would also be useful...\"\n\nThere are indeed similarities between the Transformer architecture and SAB. As the reviewer mentions, the Transformer architecture eliminates the RNN entirely. SAB utilizes sparse self-attention to help with RNN training, and hence our motivation is different from the ones in the Transformer network. Although, it is indeed interesting future work to see how the sparsity constraint would work for the Transformer architecture. From what we have seen with our experiments, we strongly suspect that sparsity would not hurt (may be able to help) the Transformer architecture." ]
[ -1, 5, -1, -1, 5, 8, -1, -1, -1, -1 ]
[ -1, 3, -1, -1, 4, 4, -1, -1, -1, -1 ]
[ "H1qBHY5eM", "iclr_2018_SJCq_fZ0Z", "H1qBHY5eM", "S1GvBVDWz", "iclr_2018_SJCq_fZ0Z", "iclr_2018_SJCq_fZ0Z", "S1GvBVDWz", "S1GvBVDWz", "SJppHuogG", "H1qBHY5eM" ]
iclr_2018_HJMN-xWC-
Learning Parsimonious Deep Feed-forward Networks
Convolutional neural networks and recurrent neural networks are designed with network structures well suited to the nature of spatial and sequential data, respectively. However, the structure of standard feed-forward neural networks (FNNs) is simply a stack of fully connected layers, regardless of the feature correlations in data. In addition, the number of layers and the number of neurons are manually tuned on validation data, which is time-consuming and may lead to suboptimal networks. In this paper, we propose an unsupervised structure learning method for learning parsimonious deep FNNs. Our method determines the number of layers, the number of neurons at each layer, and the sparse connectivity between adjacent layers automatically from data. The resulting models are called Backbone-Skippath Neural Networks (BSNNs). Experiments on 17 tasks show that, in comparison with FNNs, BSNNs can achieve better or comparable classification performance with far fewer parameters. The interpretability of BSNNs is also shown to be better than that of FNNs.
rejected-papers
I am inclined to agree with R1 that there is now an extensive literature on learning architectures; I have seen two other such papers as part of my area chairing. This paper does not offer comparisons to existing architecture-learning methods other than very basic ones, which reduces the strength of the paper significantly. Further, the broad exploration over 17 tasks is more overwhelming than insightful about the methods.
train
[ "HkQdJFuef", "rkDPp89xz", "BJ25hzHWf", "S1BSve3mz", "BJna8g27G", "Bku44r3Mf", "S1nXjXVbz", "SyTaKQVZf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author" ]
[ "There is a vast literature on structure learning for constructing neural networks (topologies, layers, learning rates, etc.) in an automatic fashion. Your work falls under a similar category. I am a bit surprised that you have not discussed it in the paper not to mention provided a baseline to compare your method to. Also, without knowing intricate details about each of 17 tasks you mentioned it is really hard to make any judgement as to how significant is improvement coming from your approach. There has been some work done on constructing interpretable neural networks, such as stimulated training in speech recognition, unfortunately these are not discussed in the paper despite interpretability being considered important in this paper. ", "This paper introduces a skip-connection based design of fully connected networks, which is loosely based on learning latent variable tree structure learning via mutual information criteria. The goal is to learn sparse structures across layers of fully connected networks. Compared to prior work (hierarchical latent tree model), this work introduces skip-paths. \nAuthors refer to prior work for methods to learn this backbone model. Liu et.al (http://www.cse.ust.hk/~lzhang/ltm/index.htm) and Chen et.al. (https://arxiv.org/abs/1508.00973) and (https://arxiv.org/pdf/1605.06650.pdf). \n\nAs far as I understand, the methods for learning backbone structure and the skip-path are performed independently, i.e. there is no end-to-end training of the structure and parameters of the layers. This will limit the applicability of the approach in most applications where fully connected networks are currently used. \n\nOriginality - The paper heavily builds upon prior work on hierarchical latent tree analysis and adds 'skip path' formulation to the architecture, however the structure learning is not performed end-to-end and in conjunction with the parameters. \n\nClarity - The paper is not self-contained in terms of methodology.\n\nQuality and Significance - There is a disconnect between premise of the paper (improving efficiency of fully connected layers by learning sparser structures) and applicability of the approach (slow EM based method to learn structure first, then learn the parameters). As is, the applicability of the method is limited. \nAlso in terms of experiments, there is not enough exploration of simpler sparse learning methods such as heavy regularization of the weights. ", "The main strengths of the paper are the supporting experimental results in comparison to plain feed-forward networks (FNNs). The proposed method is focused on discovering sparse neural networks. The experiments show that sparsity is achieved and still the discovered sparse networks have comparable or better performance compared to dense networks.\n\nThe main weakness of the paper is lack of cohesion in contributions and difficulty in delineating the scope of their proposed approach.\n\nBelow are some suggestions for improving the paper:\n\nCan you enumerate the paper’s contributions and specify the scope of this work? Where is this method most applicable and where is it not applicable?\n\nWhy is the paper focused on these specific contributions? What problem does this particular set of contributions solve that is not solvable by the baselines? There needs to be a cohesive story that puts the elements together. For example, you explain how the algorithm for creating the backbone can use unsupervised data. 
On the other hand, to distinguish this work from the baselines you mention that this work is the first to apply the method to supervised learning problems.\n\nThe motivation section in the beginning of the paper motivates using the backbone structure to get a sparse network. However, it does not adequately motivate the skip-path connections or applications of the method to supervised tasks.\n\nIs this work extending the applicability of baselines to new types of problems? Or is this work focused on improving the performance of existing methods? Answers to these questions can automatically determine suitable experiments to run as well. It's not clear if Pruned FNNs are the most suitable baseline for evaluating the results. Can your work be compared experimentally with any of the constructive methods from the related work section? If not, why?\n\nWhen contrasting this work with existing approaches, can you explain how existing work builds toward the same solution that you are focusing on? It would be more informative to explain how the baselines contribute to the solution instead of just citing them and highlighting their differences.\n\nRegarding the experimental results, is there any insight on why the dense networks are falling short? For example, if it is due to overfitting, is there a correlation between performance and size of FNNs? Do you observe a similar performance vs FNNs in existing methods? Whether this good performance is due to your contributions or due to effectiveness of the baseline algorithm, proper analysis and discussion is required and counts as useful research contribution.\n", "Thank you for your suggestions.", "Thank you for your explanations.\n\n#1\nNo. We are NOT “using some form of a cost function”, but proposing an unsupervised learning method. If our understanding is correct, the reviewer is talking about methods of manually validating network structure over validation data. Note that all the baseline FNNs in our experiments are validated over validation data. If the reviewer thinks it necessary to have more discussion in that directions, we will include it.\n\n#2\nWe agree that more introduction and references for Tox21 dataset would help reader better understand the experiment results. Thank you for your suggestions.\n\n#3\nThank you for your suggestions.\n", "My confusion with your point #1 is a simple fact that you are proposing a method of constructing a NN using some form of a cost function. There is a lot of literature where people are trying to build NN using target evaluation metric as the cost function. For classification tasks this would be classification accuracy. Building NN here consist of adding/removing layers, changing learning rates, etc. These are so called architecture search methods. I am aware that these methods are more expensive yet they attempt to come up with a custom architecture for given problem like your method does. As such I expected to see more discussion in this directions.\n\nMy confusion with your point#2 stems from you claiming to provide validation on 17 different tasks, \n12 out of these 17 tasks come from Tox21 data set. Let us look at table 3, what are NR.AhR, NR.AR, ..., SR.p53 tasks, how important is improvement on NR.AhR, is improvement on NR.AR more important than it is on NR.AhR, how significant is the difference between 0.8930 and 0.8843, what is state-of-the art on each of these sets (for feedforward, also other models), is this really correct that BSNN has 338K on all these tasks. 
For Table 4 similarly, what is state-of-the-art here?\n\nYou treat interpretability rather seriously in this paper so I do think you need to refer to other work done in that area. Second of all, given the way you treat interpretability I would expect you conducting some subjective evaluations by asking human subject to rank models based on the way they group words. I find it hard to be convinced given similarity values such as 0.1729, 0.1632, 0.1553 you compute by means of embeddings derived from word2vec model. ", "Thank you for your reviews.\n\n#Discussion of literature on structure learning for neural networks is missing#\nNo. We Do have the discussion covering most of the important methods, e.g. constructive algorithm (Ash, 1989; Bello, 1992; Kwok & Yeung, 1997), RL algorithm, Genetic algorithm, pruning and so on in our Related Works section. Please take a look at it. And we also compare with a baseline method (pruning) in our experiments.\n\n#Unclear tasks and unclear improvement#\nNo. Firstly, text classification is well studied in the literature and is not at all a mysterious task. In addition, the five large-scale text datasets we included are among the most important text classification datasets nowadays. The Tox21 dataset is also studied in a famous NIPS2017 paper, Self-Normalizing Neural Networks (SELU), in a similar setting. Secondly, we want to emphasize that the goal of this paper is NOT to propose state-of-the-art solutions to the 17 classification tasks, but to propose a structure learning method and compare it with baselines on the 17 tasks. Last but not the least, even when all the baseline FNN structures are fully tuned over the validation data, our method still achieves better/comparable classification performances in all the 17 tasks. This is a clear validation of the effectiveness of our structure learning method, considering the Backbone path in our model contains only 5% of the connections.\n\n#Paper on interpretable neural networks are not discussed#\nThe goal of this paper is to propose a structure learning method for *Parsimonious* neural networks such that the models contain fewer parameters than standard FNNs but still achieve better performance in different tasks. The method is not directly optimizing the structures for interpretability. Better interpretability (than baselines) is just one resulting advantage of our method and hence we think it is not necessary to include a heavy discussion on papers about interpretable neural networks. If the reviewer think that it is necessary, we will add it in our revision.", "Thank you for your reviews.\n\n#End-to-end training#\nWe wish to remind the reviewer that we are proposing an *Unsupervised* structure learning method. One key advantage of unsupervised structure learning is that it can make use of both unlabelled and labelled data, and the learned structure can be transferred to any tasks on the same type of data. Think about the structure of convolutional layer which is used across all kinds of CV tasks. Why don't we train the connectivities of convolutional layer with the parameters in an end-to-end fashion? The reason is that we humans have seen many unlabelled scenes, we know a strong pattern in vision data and hence we design a specific structure suited to that pattern without further learning. Similarly, our method is trying to find such strong patterns in general data other than images and build structures correspondingly, followed by parameter learning in specific tasks. 
If you train the structure and parameters in an end-to-end manner, then it is supervised learning and task-specific, which is not what we want.\n\nIn addition, compared with an end-to-end method (pruning), our method has achieved higher classification AUC scores in 10 out of 12 tasks and significantly higher interpretability scores in 3 out of 4 tasks. It is clear that the end-to-end method shows no superiority to our method.\n\n#Originality#\nWe want to emphasize the contributions of our paper. Note that prior works on hierarchical latent tree analysis are proposing structure learning methods for Bayesian network, while in this paper we aim at structure learning of deep feed-forward neural networks.\n1. It is the first time that the latent tree-based structure learning method is applied to multi-layer neural network and supervised learning task (classification). Previous works on such topic are for unsupervised tasks only.\n2. This paper proposes a method for learning multi-layer deep sparse feed-forward neural network. This is different from previous works in that previous works on latent tree model learn either multi-layer tree model (Chen et al. 2017a) or two-layer sparse model Chen et al. (2017b).\n\n#Inefficient due to slow EM algorithm.#\nNo. Firstly, we use *Progressive EM* (Chen et al., 2016) and *Stepwise EM* (similar to SGD) (Sato and Ishii 2000; Cappe and Moulines 2009) in our method. They have been shown to be efficient and can easily scale up for hundreds of thousands of training samples in previous works. Secondly, structure learning is only needed during offline training, and the learned sparse connections can speed up online testing. Besides, our method is proposed not only for efficiency, but also for model fit and model storage.\n\n#Regularization of the weights as baseline are missing#\nNo. The pruning method we compare with is usually regarded as a strong regularization over weights in the literature. The regularization is even stronger than l1 norm as it is producing many weights being exactly 0." ]
[ 5, 4, 5, -1, -1, -1, -1, -1 ]
[ 5, 2, 2, -1, -1, -1, -1, -1 ]
[ "iclr_2018_HJMN-xWC-", "iclr_2018_HJMN-xWC-", "iclr_2018_HJMN-xWC-", "BJ25hzHWf", "Bku44r3Mf", "S1nXjXVbz", "HkQdJFuef", "rkDPp89xz" ]
iclr_2018_Hyp3i2xRb
Overcoming the vanishing gradient problem in plain recurrent networks
Plain recurrent networks greatly suffer from the vanishing gradient problem, while Gated Neural Networks (GNNs) such as Long Short-Term Memory (LSTM) and the Gated Recurrent Unit (GRU) deliver promising results in many sequence learning tasks through sophisticated network designs. This paper shows how we can address this problem in a plain recurrent network by analyzing the gating mechanisms in GNNs. We propose a novel network called the Recurrent Identity Network (RIN), which allows a plain recurrent network to overcome the vanishing gradient problem while training very deep models without the use of gates. We compare this model with IRNNs and LSTMs on multiple sequence modeling benchmarks. The RINs demonstrate competitive performance and converge faster in all tasks. Notably, small RIN models produce 12%–67% higher accuracy on the Sequential and Permuted MNIST datasets and reach state-of-the-art performance on the bAbI question answering dataset.
rejected-papers
The authors propose to use identity + some weights in the recurrent connections to prevent vanishing gradients. The reviewers found the experiments to have weak baselines, weakening the claims of the paper.
train
[ "SyJ4HP84z", "r19wb884z", "B1qhp-qeG", "rJhtcsdxf", "Hk7vlKsxz", "Hy7jRyD-f", "HJcyyeDbz", "ByAEkxD-z" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "After reviewing the revised draft, I have decided to not increase the score. I think 7 is still appropriate, as I'm not too sure about the impact.", " Thanks for your reply and clarifications.\n\nI think overall this is a very interesting direction.\nHowever, authors did not address the comparison with previous work in their paper (weak LSTM baselines). This is of importance to properly evaluate the main contribution of this work. Therefore, I have decided to revise my rating slightly down.", "The paper investigates the iterative estimation view on gated recurrent networks (GNN). Authors observe that the average estimation error between a given hidden state and the last hidden state gradually decreases toward zeros. This suggest that GNN are bias toward an identity mapping and learn to preserve the activation through time.\nGiven this observation, authors then propose RIN, a new RNN parametrization where the hidden to hidden matrix is decomposed as a learnable weight matrix plus the identity matrix.\nAuthors evaluate their RIN on the adding, sequential MNIST and the baby tasks and show that their IRNN outperforms the IRNN and LSTM models.\n\nQuestions:\n- Section 2 suggests that use of the gate in GNNs encourages to learn an identity mapping. Does the average iteration error behaves differently in case of a tanh-RNN ?\n- It seems from Figure 4 (a) that the average estimation error is higher for RIN than IRNN and LSTM and only decrease toward zero at the very end. What could explain this phenomenon?\n- While the LSTM baseline matches the results of Le et al., later work such as Recurrent Batch Normalization or Unitary Evolution RNN have demonstrated much better performance with a vanilla LSTM on those tasks (outperforming both IRNN and RIN). What could explain this difference in the performances?\n- Unless I am mistaken, Gated Orthogonal Recurrent Units: On Learning to Forget from Jing et al. also reports better performances for the LSTM (and GRU) baselines that outperform RIN on the baby tasks with mean performances of 58.2 and 56.0 for GRU and LSTM respectively?\n\n- Quality/Clarity:\nThe paper is well written and pleasant to read\n\n- Originality:\nLooking at RNN from an iterative refinement point of view seems novel.\n\n- Significance:\nWhile looking at RNN from an iterative estimation is interesting, the experimental part does not really show what are the advantages of the propose RIN. In particular, the LSTM baseline seems to weak compared to other works.", "Here are my main critics of the papers:\n\n1. Equation (1), (2), (3) are those expectations w.r.t. the data distribution (otherwise I can't think of any other stochasticity)? If so your phrase \"is zero given a sequence of inputs X1, ...,T\" is misleading. \n2. Lack of motivation for IE or UIE. Where is your background material? I do not understand why we would like to assume (1), (2), (3). Why the same intuition of UIE can be applied to RNNs? \n3. The paper proposed the new architecture RIN, but it is not much different than a simple RNN with identity initialization. Not much novelty.\n4. The experimental results are not convincing. It's not compared against any previous published results. E.g. the addition tasks and sMNIST tasks are not as good as those reported in [1]. Also it only has been tested on very simple datasets.\n\n\n[1] Path-Normalized Optimization of Recurrent Neural Networks with ReLU Activations. 
Behnam Neyshabur, Yuhuai Wu, Ruslan Salakhutdinov, Nathan Srebro.", "Summary: \nThe authors present a simple variation of vanilla recurrent neural networks, which use ReLU hiddens and a fixed identity matrix that is added to the hidden-to-hidden weight matrix. This identity connection acts as a “surrogate memory” component, preserving hidden activations over time steps. \nThe experiments demonstrate that this architecture reliably solves the addition task for up to 400 input frames. It also achieves a very good performance on sequential and permuted MNIST and achieves SOTA performance on bAbI.\nThe authors observe that the proposed recurrent identity network (RIN) is relatively robust to hyperparameter choices. After Le et al. (2015), the paper presents another convincing case for the application of ReLUs in RNNs.\n\nReview: \nI very much like the paper. The motivation and architecture is presented very clearly and I am happy to also see explorations of simpler recurrent architectures in parallel to research of gated architectures!\nI have a few comments and questions:\n1) Clarification: In Section 2.2, do you really mean bit-wise multiplication or element-wise? If bit-wise, can you elaborate why? I might have missed something.\n2) Why does the learning curve of the IRNN stop around epoch 270 in Figure 2c? Also some curves in the appendix stop abruptly without visible explosions. Were these experiments run until completion? If so, would it be possible to plot the complete curves?\n3) I think for a fair comparison with LSTMs and IRNNs a limited hyperparameter search should be performed separately on all three architectures at least for the addition task. Optimal hyperparameters are usually model-specific. Admittedly, the authors mention that they do not intend to make claims about superior performance to LSTMs, however the competitive performance of small RINs is mentioned a couple of times in the manuscript.\nLe et al. (2015) for instance perform a coarse grid search for each model.\n4) I wouldn't say that ResNets are Gated Neural Networks, as the branches are just summed up. There is no (multiplicative) gating as in Highway Networks.\n5) I think what enables the training of very deep networks or LSTMs on long sequences is the presence of a (close-to-)identity component in forward/backward propagation, not the gating. The use of ReLU activations in IRNNs (with identity initialization of the hidden-to-hidden weights) and RINs (effectively initialized with identity plus some noise) makes the recurrence more linear than with squashing activation functions.\n6) Regarding the absence of gating in RINs: What is your intuition on how the model would perform in tasks for which conditional forgetting is useful. Consider for example a task with long sequences, outputs at every time step and hidden activations not necessarily being encouraged to estimate last step hidden activations. Would RINs readily learn to reset parts of the hidden state?\n7) Henaff et al. (2016) might be related, as they are also looking into the addition task with long sequences.\n\nOverall, the presented idea is novel to the best of my knowledge and the manuscript is well-written. I would recommend it for acceptance, but would like to see the above points addressed (especially 1-3 and some comments on 4-6). After a revision I would consider to increase the score.\n\nReferences:\nHenaff, Mikael, Arthur Szlam, and Yann LeCun. 
\"Recurrent orthogonal networks and long-memory tasks.\" In International Conference on Machine Learning, pp. 2034-2042. 2016.\nLe, Quoc V., Navdeep Jaitly, and Geoffrey E. Hinton. \"A simple way to initialize recurrent networks of rectified linear units.\" arXiv preprint arXiv:1504.00941 (2015).", "1) Thanks for pointing this out. It should have been element-wise instead of bit-wise. We’ve fixed this in the updated revision.\n\n2) We employed Early Stopping during the training. The reason for the unfinished LSTM experiments is because the overfitting occurred. The unfinished IRNN experiment is because the training is interrupted by the explosion of the training error (see Fig. 5(c) for training curve). We tried to mitigate this problem by imposing a relative loose gradient clipping (100), in the end, IRNN is still unstable if the sequence is long (for example in T3).\n\n3) Thanks for the comments. We try to select a set of hyperparameters that can offer a fair comparison of all three tested networks. In preliminary experiments, we have tried different learning rates from {10^−2, 10^−3, 10^−4, 10^−5, 10^−6}. We chose the largest learning rate that does not cause training failure for IRNNs.\n\n4) Thanks for the comments on residual networks. It is true that ResNets do not have multiplicative gates as in Highway Networks. In this paper, we view ResNets as a subcase of Highway Networks where the gates are fully open as pointed out by Greff et al. 2016.\n\n5) Thanks for your comments. We felt the same way for training with long sequences. However, the gating mechanism may be very important in tasks that desire to regulate the network and provide explicit control for hidden activations.\n\n6) There is no mechanism for the network to perform explicit conditional forgetting. RIN may not be capable of readily resetting its hidden states. We will perform more experiments to determine when the network would fail on tasks with long-sequence.\n\n7) Thanks for pointing this interesting article.\n\nKlaus Greff, Rupesh Kumar Srivastava, and Jürgen Schmidhuber. Highway and residual networks learn unrolled iterative estimation. CoRR, abs/1612.07771, 2016.", "1) We assume the tanh-RNN is a vanilla RNN with tanh activation. We run an experiment on tanh-RNN with the adding problem. The network failed to converge. Both average estimation error and variance stay very close to zero (<10^-5). The difference between steps, in this case, is not informative and nearly noise.\n\n2) Thanks for your comments. Indeed the RIN’s average estimation error in Fig. 4(a) is higher than the other two architectures. We believe that this is due to the choice of ReLU activation in RIN and IRNN. Repeated application of ReLU could cause this problem (Jastrzebski et al., 2017). However, the experiment results don’t suggest that larger average estimation errors lower the performance of RIN.\n\n3) Thanks for pointing this out. In this paper, we do not claim that RIN is superior to LSTM. We tried to compare all three networks fairly with a uniform hyperparameter setting. We are aware that other papers produce higher scores in sequential and permuted MNIST. However, even with the same experiment settings (at best of our knowledge), we are unable to reproduce these numbers for sequential and permuted MNIST during the development of this paper. We finally decided to report only our numbers because there is a large variance of reported scores by different previous works.\n\n4) Thank you very much for pointing this out. 
Sadly, we only found out that the authors updated the numbers for bAbI tasks after the submission of our paper (the 3rd version is updated on Oct 25th). The numbers in the paper are taken from the 2nd version of the paper, and we did our best to replicate their experiment settings (regarding the network architecture). Note that the description for the bAbI tasks is very brief, they did not reveal the training procedure even in the 3rd version.\n\nS. Jastrzebski, D. Arpit, N. Ballas, V. Verma, T. Che, and Y. Bengio. Residual Connections Encourage Iterative Inference. CoRR, abs/1710.04773, October 2017.\n", "1) Thanks for your comments. We have fixed this in the updated revision.\n\n2) The observation in Fig. 1(a) and (b) for LSTM motivates us to study the iterative estimation view in RNNs. Initially, we thought that this phenomenon only exists in gated neural networks such as LSTM and GRU. After we analyzed the GNN (in section 2.2), we found that the gating mechanism is not the only way that an RNN can be trained deeply.\n\n3) In the IRNN paper, the authors proposed to use the identity initialization the observation that “when the error derivatives for the hidden units are backpropagated through time they remain constant provided no extra error-derivatives are added”. In this paper, we view the proposal of RIN from a different direction where the “surrogate memory” component helps the network to learn identity mapping in which there is no help from the gates in RNNs. Additionally, we found in the experiments that the RIN is more stable and faster to train than IRNN.\n\n4) Thank you for comments on this issue. We are aware that there are papers that produce higher scores in Sequential and Permuted MNIST. However, with the same experiment settings as in other paper, we cannot reproduce these numbers for the baseline models while developing this paper. We finally decided to report only our scores because the numbers reported in different works have a large variance. In this paper, we used a uniform experiment setting that compares the networks fairly and imposes as fewer constraints as possible. Thanks for pointing out this nice Path-SGD paper. Neyshabur et al. (2016) tackled the problem of using ReLU in RNNs from an optimization point of view. Similarly, Neyshabur et al. (2016) also found that IRNNs suffer from severe instability during training. This observation matches our results as well.\n" ]
[ -1, -1, 4, 2, 7, -1, -1, -1 ]
[ -1, -1, 5, 4, 4, -1, -1, -1 ]
[ "Hy7jRyD-f", "HJcyyeDbz", "iclr_2018_Hyp3i2xRb", "iclr_2018_Hyp3i2xRb", "iclr_2018_Hyp3i2xRb", "Hk7vlKsxz", "B1qhp-qeG", "rJhtcsdxf" ]
iclr_2018_BJ78bJZCZ
Efficiently applying attention to sequential data with the Recurrent Discounted Attention unit
Recurrent Neural Network architectures excel at processing sequences by modelling dependencies over different timescales. The recently introduced Recurrent Weighted Average (RWA) unit captures long-term dependencies far better than an LSTM on several challenging tasks. The RWA achieves this by applying attention to each input and computing a weighted average over the full history of its computations. Unfortunately, the RWA cannot change the attention it has assigned to previous timesteps, and so struggles with carrying out consecutive tasks or tasks with changing requirements. We present the Recurrent Discounted Attention (RDA) unit that builds on the RWA by additionally allowing the discounting of the past. We empirically compare our model to RWA, LSTM and GRU units on several challenging tasks. On tasks with a single output, the RWA, RDA and GRU units learn much quicker than the LSTM and with better performance. On the multiple sequence copy task, our RDA unit learns the task three times as quickly as the LSTM or GRU units, while the RWA fails to learn at all. On the Wikipedia character prediction task, the LSTM performs best but is followed closely by our RDA unit. Overall, our RDA unit performs well and is sample-efficient on a large variety of sequence tasks.
rejected-papers
RDA improves on RWA, but even so, the model is inferior to the other standard RNN models. As a result, R1 and R3 question the motivation for using this model -- something the authors should better justify.
test
[ "BkEYMCPlG", "rkzxWW5lf", "r1jjF59lM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The authors present RDA, the Recurrent Discounted Attention unit, that improves upon RWA, the earlier introduced Recurrent Weighted Average unit, by adding a discount factor. While the RWA was an interesting idea with bad results (far worse than the standard GRU or LSTM with standard attention except for hand-picked tasks), the RDA brings it more on-par with the standard methods.\n\nOn the positive side, the paper is clearly written and adding discount to RWA, while a small change, is original. On the negative side, in almost all tasks the RDA is on par or worse than the standard GRU - except for MultiCopy where it trains faster, but not to better results and it looks like the difference is between few and very-few training steps anyway. The most interesting result is language modeling on Hutter Prize Wikipedia, where RDA very significantly improves upon RWA - but again, only matches a standard GRU or LSTM. So the results are not strongly convincing, and the paper lacks any mention of newer work on attention. This year strong improvements over state-of-the-art have been achieved using attention for translation (\"Attention is All You Need\") and image classification (e.g., Non-local Neural Networks, but also others in ImageNet competition). To make the evaluation convincing enough for acceptance, RDA should be combined with those models and evaluated more competitively on multiple widely-studied tasks.", "This paper extends the recurrent weight average (RWA, Ostmeyer and Cowell, 2017) in order to overcome the limitation of the original method while maintaining its advantage. The motivation of the paper and the approach taken by the authors are sensible, such as adding discounting was applied to introduce forget mechanism to the RWA and manipulating the attention and squash functions.\n\nThe proposed method is using Elman nets as the base RNN. I think the same method can be applied to GRUs or LSTMs. Some parameters might be redundant, however, assuming that this kind of attention mechanism is helpful for learning long-term dependencies and can be computed efficiently, it would be nice to see the outcomes of this combination.\n\nIs there any explanation why LSTMs perform so badly compared to GRUs, the RWA and the RDA?\nOverall, the proposed method seems to be very useful for the RWA.", "Summary:\nThis paper proposes an extension to the RWA model by introducing the discount gates to computed discounted averages instead of the undiscounted attention. The problem with the RWA is that the averaging mechanism can be numerically unstable due to the accumulation operations when computing d_t.\n\nPros:\n- Addresses an issue of RWAs.\n\nCons:\n-The paper addresses a problem with an issue with RWAs. But it is not clear to me why would that be an important contribution.\n-The writing needs more work.\n-The experiments are lacking and the results are not good enough.\n\nGeneral Comments:\n\nThis paper addresses an issue regarding to RWA which is not really widely adopted and well-known architecture, because it seems to have some have some issues that this paper is trying to address. I would still like to have a better justification on why should we care about RWA and fixing that model. \n\nThe writing of this paper seriously needs more work. The Lemma 1 doesn't make sense to me, I think it has a typo in it, it should have been (-1)^t c instead of -1^t c.\n\nThe experiments are only on toyish and small scale tasks. According to the results the model doesn't really do better than a simple LSTM or GRU." 
]
[ 4, 6, 3 ]
[ 5, 4, 4 ]
[ "iclr_2018_BJ78bJZCZ", "iclr_2018_BJ78bJZCZ", "iclr_2018_BJ78bJZCZ" ]
iclr_2018_HJ8W1Q-0Z
GATED FAST WEIGHTS FOR ASSOCIATIVE RETRIEVAL
We improve previous end-to-end differentiable neural networks (NNs) with fast weight memories. A gate mechanism updates fast weights at every time step of a sequence through two separate outer-product-based matrices generated by slow parts of the net. The system is trained on a complex sequence to sequence variation of the Associative Retrieval Problem with roughly 70 times more temporal memory (i.e. time-varying variables) than similar-sized standard recurrent NNs (RNNs). In terms of accuracy and number of parameters, our architecture outperforms a variety of RNNs, including Long Short-Term Memory, Hypernetworks, and related fast weight architectures.
rejected-papers
The reviewers agree that while the presented result looks interesting, it is but one result. Further, one of the reviewers finds this to be a weak comparison as well. The novelty of the approach over the paper by Ba et al. is also in question -- good results on multiple tasks might have made it worth exploring, but the authors did not establish this convincingly.
train
[ "SJIM1TDez", "BJhNZOtlz", "Syft4Nolf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The authors present an evolution of the idea of fast weights: training a double recurrent neural network, one \"slow\" trained as usual and one \"fast\" that gets updated in every time-step based on the slow network. The authors generalize this idea in a nice way and present results on 1 experiment. On the positive side, the paper is clearly written and while the fast-weights are not new, the details of the presented method are original. On the negative side, the experimental results are presented on only 1 experiment with a data-set and task made up by the authors. The results are good but the improvements are not too large, and they are measured over weak baselines implemented by the authors. For a convincing result, one would require an evaluation on a number of tasks, including long-studied ones like language modeling, and comparison to stronger related models, such as the Neural Turing Machine or the Transformer (from \"Attention is All You Need\"). Without comparison to stronger baselines and with results only on 1 task constructed by the authors, we have to recommend rejection.", "Summary\nThe paper proposes a neural network architecture for associative retrieval based on fast weights with context-dependent gated updates. The architecture consists of a ‘slow’ network which provides weight updates for the ‘fast’ network which outputs the predictions of the system. The experiments show that the architecture outperforms a couple of related models on an associative retrieval problem.\n\nQuality\nThe authors evaluate their architecture on an associative retrieval task which is similar to the variable assignment task used in Danihelka et al. (2016). The difference with the original task seems to be that the network is also trained to predict a ‘blank’ symbol which indicates that no prediction has been made. While this task is artificial, it does make sense in the context of what the authors want to show. The fact that the authors compare their results with three sensible baselines and perform some form of hyper-parameter search for all of the models, adds to the quality of the experiment. It is somewhat unfortunate that the paper doesn’t give more detail about the precise hyper-parameters involved and that there is no comparison with the associative LSTM from Danihelka et al. Did these hyper-parameters also include the sizes of the models? Otherwise it’s not very clear to me why the numbers of parameters are so much higher for the baseline models. While I think that this experiment is well done, it is unfortunate that it is the only experiment the authors carried out and the paper would be more impactful if there would have been results for a wider variety of tasks. It is commendable that the authors also discuss the memory requirements and increased wall clock time of the model.\n\nClarity\nI found the paper hard to read at times and it is often not very clear what the most important differences are between the proposed methods and earlier ones in the literature. I’m not saying those differences aren’t there, but the paper simply didn’t emphasize them very well and I had to reread the paper from Ba et al. (2016) to get the full picture. \n\nOriginality/Significance\nWhile the architecture is new, it is based on a combination of previous ideas about fast weights, hypernetworks and activation gating and I’d say that the novelty of the approach is average. 
The architecture does seem to work well on the associative retrieval task, but it is not clear yet if this will also be true for other types of tasks. Until that has been shown, the impact of this paper seems somewhat limited to me.\n\nPros\nExperiments seem well done.\nGood baselines.\nGood results.\n\nCons\nHard to extract the most important changes from the text.\nOnly a single synthetic task is reported.\n\n", "The paper proposes an extension to the fast weights from Ba et al. that includes additional gating units for changing the fast-weights learning rate adaptively. The authors empirically demonstrate that the gated fast weights outperform other baseline methods on the associative retrieval task.\n\nComment:\n\n- I found the paper very hard to follow. The authors could improve the clarity of the paper greatly by listing their contributions clearly for readers to digest. The authors should emphasize that the first half of the method section comes from existing work and should move it into a separate background section.\n\n- Overall, the only contribution of the paper seems to be the modification to Ba et al. in Eq. (8). The authors have only evaluated the method on a synthetic associative retrieval task. Without additional experiments on other datasets, it is hard for the reader to draw any meaningful conclusion about the proposed method in general." ]
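The reviews above describe the gated fast-weights idea only at a high level (a slow network plus a fast weight matrix whose update rate is gated by the current context). As a rough, hypothetical sketch of that kind of update — based on the outer-product fast-weights rule of Ba et al. with an added scalar gate; the gate form, names, and hyperparameters here are illustrative assumptions, not the reviewed paper's actual equations — one could write:

```python
import numpy as np

# Hypothetical gated fast-weights step (a sketch, not the paper's Eq. (8)).
rng = np.random.default_rng(0)
d = 32                                      # hidden size
W_slow = rng.normal(0, 0.1, (d, d))         # "slow" weights, trained by SGD as usual
w_gate = rng.normal(0, 0.1, d)              # parameters of a scalar gate (assumed form)
lam, eta = 0.95, 0.5                        # decay and base fast learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def step(h, x, F):
    g = sigmoid(w_gate @ h)                 # context-dependent gate in (0, 1)
    F = lam * F + g * eta * np.outer(h, h)  # gated update of the fast weights
    h = np.tanh(W_slow @ x + F @ h)         # fast weights modulate the next state
    return h, F

h, F = np.zeros(d), np.zeros((d, d))
for t in range(10):
    h, F = step(h, rng.normal(size=d), F)
print(h[:5])
```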
[ 3, 5, 4 ]
[ 5, 4, 4 ]
[ "iclr_2018_HJ8W1Q-0Z", "iclr_2018_HJ8W1Q-0Z", "iclr_2018_HJ8W1Q-0Z" ]
iclr_2018_rJ1RPJWAW
Learnability of Learned Neural Networks
This paper explores the simplicity of learned neural networks under various settings: learned on real vs. random data, varying size/architecture, and large vs. small minibatch size. The notion of simplicity used here is learnability, i.e., how accurately the prediction function of a neural network can be learned from labeled samples drawn from it. While learnability is different from (in fact often higher than) test accuracy, the results herein suggest that there is a strong correlation between small generalization errors and high learnability. This work also shows that there exist significant qualitative differences between shallow networks and popular deep networks. More broadly, this paper extends previous work on understanding the properties of learned neural networks in a new direction. Our hope is that such an empirical study of learned neural networks might shed light on the right assumptions that can be made for a theoretical study of deep learning.
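As a concrete illustration of the learnability measure described in the abstract above, the following minimal sketch trains a network N1 on labeled data, relabels a second split with N1's predictions to train N2, and reports the agreement of N1 and N2 on held-out inputs. The synthetic data, the sklearn models, and the equal three-way split are assumptions made for brevity; the paper's actual protocol, architectures, and datasets differ.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(6000, 20))
y = (X[:, 0] + X[:, 1] * X[:, 2] > 0).astype(int)        # toy labels

X1, X2, X3 = X[:2000], X[2000:4000], X[4000:]            # three disjoint splits
N1 = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                   random_state=0).fit(X1, y[:2000])      # train N1 on real labels
N2 = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                   random_state=1).fit(X2, N1.predict(X2))  # train N2 to mimic N1

learnability = np.mean(N1.predict(X3) == N2.predict(X3))  # agreement on fresh data
print(f"estimated learnability of N1: {learnability:.3f}")
```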
rejected-papers
+ The paper proposes an interesting empirical measure of "learnability" of a trained network: how well the predictive function it represents can be learned by another network. It shows that this measure empirically seems to correlate with better generalization. - The work is purely empirical: it features no theory relating this learnability to generalization. - The learnability measure is somewhat ad hoc, with moving parts left to be specified (learning network, data splits, ...). - As pointed out by a reviewer, learnability doesn't really provide any answers for now. - The work would be much stronger if it went beyond a mere correlation study, and if learnability considerations allowed deriving a new approach/regularization scheme that was convincingly shown to improve generalization.
train
[ "H17N5b5lf", "BJ9PiZ9eG", "rkf_j7cgG", "rk4gLL2Qf", "rJWuFJtGG", "BJcNFJKMG", "HkqzKJFff", "HkgcOJtGG", "r1sUdyKMM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author" ]
[ "Summary:\nThis paper presents very nice experiments comparing the complexity of various different neural networks using the notion of \"learnability\" --- the learnability of a model (N1) is defined as the \"expected agreement\" between the output of N1, and the output of another model N2 which has been trained to match N1 (on a dataset of size n). The paper suggests that the learnability of a model is a good measure of how simple the function learned by that model is --- furthermore, it shows that this notion of learnability correlates well (across extensive experiments) with the test accuracy of the model.\n\nThe paper presents a number of interesting results:\n1) Larger networks are typically more learnable than smaller ones (typically we think of larger networks as being MORE complicated than smaller networks -- this result suggests that in an important sense, large networks are simpler).\n2) Networks trained with random data are significantly less learnable than networks trained on real data.\n3) Networks trained on small mini-batches (larger variance SGD updates) are more learnable than those trained on large minibatches.\n\nThese results are in line with several of the observations made by Zhang et al (2017), which showed that neural networks are able to both (a) fit random data, and (b) generalize well; these results at first seem to run counter to the ideas from statistical learning theory that models with high capacity (VC dimension, radamacher complexity, etc.) have much weaker generalization guarantees than lower capacity models. These results suggest that models that have high capacity (by one definition) are also capable of being simple (by another definition). These results nicely complement the work which studies the \"sharpness/curvature\" of the local minima found by neural networks, which argue that the minima which generalize better are those with lower curvature.\n\nReview:\nQuality: I found this to be high quality work. The paper presents many results across a variety of network architectures. One area for improvement is presenting results on larger datasets (currently all experiments are on CIFAR-10), and/or on non-convolutional architectures. Additionally, a discussion of why learnabiblity might imply low generalization error would have been interesting (the more formal, the better), though it is unclear how difficult this would be.\n\nClarity: The paper is written clearly. A small point: Step 2 in section 3.1 should specify that argmax of N1(D2) is used to generate labels for the training of the second network. Also, what dataset D_i is used for tables 3-6? Please specify.\n\nOriginality: The specific questions tackled in this paper are original (learnability on random vs. real data, large vs. small networks, and large vs. small mini-batch training). But it is unclear to me exactly how original this use of \"learnability\" is in evaluating how simple a model is. It seems to me that this particular use of \"learnability\" is original, even though PAC learnability was defined a while ago.\n\nSignificance: I find the results in this paper to be quite significant, and to provide a new way of understanding why deep neural networks generalize. I believe it is important to find new ways of formally defining the \"simplicity/capacity\" of a model, such that \"simpler\" models can be proven to have smaller generalization gap (between train and test error) relative to more \"complicated\" models. 
It is clear that VC dimension and radamacher complexity alone are not enough to explain the generalization performance of neural networks, and that neural networks with high capacity by these definitions are likely \"simple\" by other definitions (as we have seen in this paper). This paper makes an important contribution to this conversation, and could perhaps provide a starting point for theoreticians to better explain why deep networks generalize well.\n\nPros\n- nice experiments, with very interesting results.\n- Helps explain one way in which large networks are in fact \"simple\"\n\nCons\n- The paper does not attempt to relate the notion of learnability to that of generalization performance. All it says is that these two metrics appear to be well correlated.", "The proposed approach to figure out what do deep network learn is interesting -- the approach of learning a learned network. Some aspects needs more work to improve the work. The presentation of the results can be improved further. \n\nFirstly, confidence intervals on many experiments are missing (including Tables 3-9). Also, since we are looking at empirically validating the learnability criterion defined by the authors, all the results (the reported confusion tables) need to be tested statistically (to see whether one dominates the other). \n\nWhat is random label learning of N1 telling us? How different would that be in terms of simply learning random labels on real data directly. Further, the evaluations in Tables 3-6 need more attention since we are interested in the TLP=1 vs. PLP=0 case, and TLP=0 vs. PLP=1 case. \n\nThe influence of depth is not clear -- may be it is because of the way results are reported here. A simple figure with increasing layers vs. learnability values would do a better job at conveying the trends. \n\nThe evaluations in Section 3.5 are not conclusive? What is the question being tested for here? \n\nWhat about the influence of number of classes on learnability trends? Some experiments on large class datasets including cifar100 and/or imagenet need to be reported. \n\n--- Comments after response from authors --- \n\nThe authors have clarified and shown results for several of the issues I was concerned about. Although it is still unclear what the learnability model is capturing for deeper model or the trends in Section 3.5 (looks like the trends may relate to stability of SGD as well here) -- the proposed ideas are worth discussing. I have appropriately modified my rating. \n\n", "Review Summary:\nThe primary claim that there is \"a strong correlation between small generalization errors and high learnability\" is correct and supported by evidence, but it doesn't provide much insight for the questions posed at the beginning of the paper or for a general better understanding of theoretical deep learning. In fact the relationship between test accuracy and learnability seems quite obvious, which unfortunately undermines the usefulness of the learnability metric which is used in many experiments in the paper.\n\nFor example, consider the results in Table 7. A small network (N1 = 16 neurons) with low test accuracy results in a low learnability, while a large network (N1 = 1024 neurons) gets a higher test accuracy and higher learnability. In this case, the small network can be thought of as applying higher label noise relative to the larger network. Thus it is expected that agreement between N1 and N2 (learnability) will be higher for the larger network, as the predictions of N1 are less noisy. 
More importantly, this relationship between test accuracy and learnability doesn't answer the original question Q2 posed: \"Do larger neural networks learn simpler patterns compared to neural networks when trained on real data\". It instead draws some obvious conclusions about noisy labeling of training data.\n\nOther results presented in the paper are puzzling and require further experimentation and discussion, such as the trend that the learnability of shallow networks on random data is much higher than 10%, as discussed at the bottom of page 4. The authors provide some possible reasoning, stating that this strange effect could be due to class imbalance, but it isn't convincing enough.\n\nOther comments:\n-Section 3.4 is unrelated to the primary arguments of the paper and seems like a filler.\n-Equations should have equation numbers\n-Learnability numbers reported in all tables should be between 0-1 per the definition on page 3\n-As suggested in the final sentence of the discussion, it would be nice if conclusions drawn from the learnability experiments done in this paper were applied to the design new networks which better generalize", "After taking the reviewer’s suggestions we have made the following changes:\n\n-\tSec. 3.1: Modified equation (1) to normalize learnability (previous equation multiplied by 100)\n-\tSec. 3: Included new learnability results for MLPs (Multi-Layer Perceptrons) and CIFAR 100 dataset \n-\tAll reported tables now have confidence intervals \n-\tIntroduced a table (Table 5 in Sec. 3.2) for showing class-wise percentage distribution for N1\n-\tIncluded a new section (Sec. 4) with plots of learnability and generalization error vs epoch \n-\tIncluded an appendix about learnability for MNIST\n\n", "Thank you for the review and kind words. The major comment/suggestion is to relate the notion of learnability to that of generalization. Indeed, we have a partial intuitive connection between these two notions based on Figure 3 in the updated draft. We also added Section 4 to discuss this aspect. We hypothesize that learnability captures the inductive bias of SGD training of neural networks. More precisely, when we start training, intuitively, the network is simpler (learnability is high) and generalization error is low (both train and test errors are high). As SGD changes the network to reduce the training error, it becomes more complex (learnability decreases) and the generalization error decreases (train error decreases rapidly while test error does not decrease as rapidly). Training is stopped when the training error becomes less than 1%. At this point, learnability has decreased from its initial high value, and generalization error has increased from its initial low value. Their precise values might be close (as happens in the case of, e.g., N1=N2=VGG11), or not so close (as happens in the case of N1 and N2 being shallow 2-layer CNNs with layer size 1024). Making this connection more formal would be an interesting direction of future work.", "“The proposed approach to figure out what do deep network learn is interesting -- the approach of learning a learned network. Some aspects needs more work to improve the work. The presentation of the results can be improved further. \nFirstly, confidence intervals on many experiments are missing (including Tables 3-9). 
Also, since we are looking at empirically validating the learnability criterion defined by the authors, all the results (the reported confusion tables) need to be tested statistically (to see whether one dominates the other). “\nThese were not included in later tables to reduce clutter. We have now included these in the updated version. \n \n “What is random label learning of N1 telling us? How different would that be in terms of simply learning random labels on real data directly. Further, the evaluations in Tables 3-6 need more attention since we are interested in the TLP=1 vs. PLP=0 case, and TLP=0 vs. PLP=1 case”\n\nRandom label learning of N1 (Section 3.2) is trying to answer Q1 posed in the introduction: do neural networks learn simple patterns on random training data? Or equivalently, we could ask: are neural networks learned on random training data simple? The results of Section 3.2 tell us that this is not the case. There is a subtle but substantial difference between learning N2 using data from N1 (which itself is obtained by random label learning, as done in this paper) and learning N2 simply from random labels on real data directly. In the first scenario, the training and test data of N2 are both generated by N1, so it is indeed possible to get even 100% accuracy for N2. On the other hand, in the second scenario, the training and test data for N2 are random and independent. So the test accuracy of N2 will be close to 10% with high probability.\n\nWe are sorry, we did not understand your comment about Tables 3-6. Could you please elaborate?\n \n “The influence of depth is not clear -- may be it is because of the way results are reported here. A simple figure with increasing layers vs. learnability values would do a better job at conveying the trends. “\n \nFor clarity, we have now included results on learnability of MLPs with varying depth and a fixed hidden unit size (Table 3). These results suggest that learnability decreases slightly with increasing depth as the number of parameters increase. Note however, that the test accuracies here stay approximately the same with increasing depth. In this case, increasing depth naively does not seem to help.\n\nFor popular networks, we need to be careful about drawing conclusions about depth and learnability since a network with higher depth might still have much fewer parameters and hence have low representational power as well as test accuracy. This is the reason we chose to order the networks in increasing order of their test accuracy, which captures their generalizability since all the networks achieve a training error of zero.\n", "“The evaluations in Section 3.5 are not conclusive? What is the question being tested for here? “\n\nThese experiments are an attempt to better understand the notion of learnability as we now explain in a bit more detail than in the paper: While our experiments in previous sections have the learnability values quite concentrated (confidence intervals are small), they say nothing about how concentrated the function computed by N1 itself is across different runs. More precisely, if we train N1 several times using SGD, we expect that the function computed by N1 approximates the data well. However, this function may differ for different runs of SGD and since we are interested in the learnability of the function computed by N1, we would like to understand if it's the same function we are learning each time. 
In the experiments of this section we are trying to understand the extent to which this happens.\n \nHere is one concrete conclusion of these experiments (also mentioned in the paper). An immediate conjecture suggested by the confusion matrix of VGG11 is that perhaps all that N2 learns is the original data from N1 as the agreement between the functions computed via different SGD runs is approximately the same as the test accuracy (about 73%). This is refuted by Figure 2 as it shows that only on about 55% of data there is full agreement among the different N1's. \n \nAdditionally, we can try to relate these experiments to other experiments in the paper: The confusion matrices clearly show that the (function computed by) N1 is considerably more stable in the case of shallow networks than in the case of VGG-11. A similarly stark difference between the two cases is seen also in Tables 7 and 8. In the former, the learnability can be much higher compared to test accuracy; but in the latter, learnability is about the same as test accuracy. It's conceivable that these two phenomena are related and investigating this potential link could provide further insights into both.\n\nOf course, these conclusions lead us to further questions. It is not our claim that we provide a full understanding of learnability and generalization. \n\n\n “What about the influence of number of classes on learnability trends? Some experiments on large class datasets including cifar100 and/or imagenet need to be reported. “\n \nWe have included results on CIFAR100 in Table 4. The results here confirm the trends observed on CIFAR10.\n", "Other comments: \n“-Section 3.4 is unrelated to the primary arguments of the paper and seems like a filler”\nWe think that Section 3.4 perfectly aligns with the theme of the paper i.e., exploring learnability of learned neural networks and its relation to generalization. This section is aimed at answering Q3 posed at the beginning of the paper. It is well-known that networks obtained with higher batch size have poorer generalization. As our experiments indicate, networks trained with higher batch size also have poorer learnability. A priori, it's not clear what to expect from such an experiment on learnability. Thus, our experiments in this section can be thought of another confirmation of our finding that learnability and generalization tend to be correlated. \n\n \n“-Equations should have equation numbers “\nThere's only one equation in the paper and it's numbered (1). Did we understand your comment correctly?\n \n“-Learnability numbers reported in all tables should be between 0-1 per the definition on page 3”\nYou are correct. The reported values are percent values obtained by multiplying the value in the definition by 100. We have rectified this in the updated version. \n\n“-As suggested in the final sentence of the discussion, it would be nice if conclusions drawn from the learnability experiments done in this paper were applied to the design new networks which better generalize”\n \nOne immediate approach to achieve this would be to regularize training so as to guide it towards more learnable networks. Since learnability of a network can be estimated (but this is not very cheap) this is a reasonably concrete approach, though considerable amount of work seems to be required to make this work. \nThe final but one sentence of the discussion points out another way for this goal to be achieved: characterizing neural networks that can be efficiently learned via backprop. 
If such a characterization is available, either regularization of the loss function or modifying the backprop updates might be able to help us design new networks that generalize better. \n\nWhile training networks with better generalization is certainly a long-term goal of this study, it is outside the scope of current paper. We note that while the concept of flat/sharp minima and its relation to generalization were proposed back in 1997, it took almost 20 years to design a new algorithm (Entropy SGD) that exploits this principle to find networks that generalize better (and is still an ongoing program of work).\n", "“-Review Summary: The primary claim that there is a strong correlation between small generalization errors and high learnability\" is correct and supported by evidence, . . .. .Do larger neural networks learn simpler patterns compared to neural networks when trained on real data. It instead draws some obvious conclusions about noisy labeling of training data.”\n\nFirstly, we would like to stress that there is “no obvious connection” between test accuracies and learnability. This is clearly demonstrated by a network N1 which predicts the same class (say class 1) for all examples. While N1 is easily learnable, its test accuracy is 10%. The reason for this apparent conflicting behavior is that even though N1 does noisy labeling of training data, the noise introduced is not random – it is highly structured. \n\nThe same argument applies to the cases of learned small (N1_small = 16 neurons) and large (N1_large = 1024 neurons) networks. At an intuitive level, one would expect that the noise added by N1_small is much more structured (simpler, smoother) compared to that added by N1_large, since the noise added by N1_small is generated by a small network. In short, higher test accuracy of N1_large does not obviously explain its superior learnability value compared to N1_small. Note that learnability and test accuracy can be substantially different (for shallow networks, the learnability can be up to 16 percent points higher---see Table 7), which shows that N2 learns the structure of N1 apart from learning about noisy version of the original data. \n\nAnother way to look at this experiment is to forget that there ever was true data (and hence also forget test accuracies) – all we have are N1_small and N1_large. Given just N1_small and N1_large, considering their relative sizes, traditional wisdom suggests that N1_small is more learnable than N1_large---we think that this has at least as much intuitive force as the hypothesis you suggest. However, that is simply not the case. There is something about N1_large which, despite its large size, makes it much easier to learn than N1_small. This precisely answers Q2: larger neural networks do learn simpler patterns compared to smaller networks when trained on real data. \n\nIf you have a look at the included MNIST results in the appendix, we can clearly see that even a very simpler network very few number of parameters and low-test accuracy is highly learnable because of its simplicity.\n\nTo sum up, explanation of the correlation between generalizability and learnability does not seem to be obvious. We do offer one partial explanation below in reply to AnonReviewer2.\n\n \n \n“Other results presented in the paper are puzzling and require further experimentation and discussion, such as the trend that the learnability of shallow networks on random data is much higher than 10%, as discussed at the bottom of page 4. 
The authors provide some possible reasoning, stating that this strange effect could be due to class imbalance, but it isn't convincing enough.”\n\nFollowing up on your comment, we present the class imbalance values for two deep networks and two shallow networks on true data, random labels and random images in Table 5 in the updated draft. While class imbalance is slightly higher for shallow networks compared to deeper ones on random data, it is indeed the case that the difference in class imbalance is not high. Answering this question does seem to require further experimentation." ]
[ 7, 6, 4, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_rJ1RPJWAW", "iclr_2018_rJ1RPJWAW", "iclr_2018_rJ1RPJWAW", "iclr_2018_rJ1RPJWAW", "H17N5b5lf", "BJ9PiZ9eG", "BJ9PiZ9eG", "rkf_j7cgG", "rkf_j7cgG" ]
iclr_2018_HylgYB3pZ
Linearly Constrained Weights: Resolving the Vanishing Gradient Problem by Reducing Angle Bias
In this paper, we first identify \textit{angle bias}, a simple but remarkable phenomenon that causes the vanishing gradient problem in a multilayer perceptron (MLP) with sigmoid activation functions. We then propose \textit{linearly constrained weights (LCW)} to reduce the angle bias in a neural network: the network is trained under the constraint that the sum of the elements of each weight vector is zero. A reparameterization technique is presented to efficiently train a model with LCW by embedding the constraints on the weight vectors into the structure of the network. Interestingly, batch normalization (Ioffe & Szegedy, 2015) can be viewed as a mechanism to correct angle bias. Preliminary experiments show that LCW helps train a 100-layer MLP more efficiently than batch normalization does.
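A minimal sketch of the weight constraint described in the abstract above: each weight vector is kept on the subspace where its elements sum to zero by subtracting the mean of an unconstrained parameter vector. The subtract-the-mean reparameterization is an assumption chosen for brevity; the paper embeds the constraint into the network structure in its own way.

```python
import numpy as np

rng = np.random.default_rng(0)

def lcw(V):
    # Map each unconstrained row of V onto the subspace {w : sum(w) = 0}.
    return V - V.mean(axis=1, keepdims=True)

m, n_units = 256, 128
V = rng.normal(0, 0.1, (n_units, m))      # free parameters, updated by SGD
W = lcw(V)                                # constrained weights used by the layer
print(np.abs(W.sum(axis=1)).max())        # ~0: every weight vector sums to zero

# A constant offset shared by all inputs (the source of angle bias) no longer
# shifts the pre-activations W @ x, since each row of W is orthogonal to the
# all-ones vector.
x = rng.normal(size=m) + 5.0              # activations with a large common mean
print(np.abs(W @ x - W @ (x - x.mean())).max())   # ~0 up to floating-point error
```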
rejected-papers
The paper identifies an interesting problem in sigmoid deep nets, addressed differently by batchnorm, and proposes a different simple fix. It shows empirically that constraining each neuron's weights to sum to zero improves training of a 100-layer sigmoid MLP. The work is currently limited in its theoretical contribution, and regarding the showcased practical interest of the method compared to batchnorm (it is not applicable to ReLUs and shows a positive effect on optimization but not on generalization).
train
[ "BJr2wYHVM", "SymE04bxf", "H1tXgwtgG", "r1k49d7-M", "S1Dw9mXVf", "Hks4KOgQG", "S1wQ85JQf", "HJ9xvF1XM", "BkxFmu17z" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "I appreciate the effort taken by the authors to add more experimental results and enriching the intuitions of LCW. However, as the experiments have further shown, LCW only shows consistently better training accuracy than BN on a simple dataset as CIFAR10, but not as good on the testing data (therefore poor generalization) or on a more sophisticated data as CIFAR100. Meanwhile, as the authors have pointed out, it’s not clear how to add LCW with ResNet, and to RNN where the vanishing gradient is more significant, which again limits its advantages. Therefore, I feel it is appropriate to keep my rating.", "This paper studies the impact of angle bias on learning deep neural networks, where angle bias is defined to be the expected value of the inner product of a random vectors (e.g., an activation vector) and a given vector (e.g., a weight vector). The angle bias is non-zero as long as the random vector is non-zero in expectation and the given vector is non-zero. This suggests that the some of the units in a deep neural network have large values (either positive or negative) regardless of the input, which in turn suggests vanishing gradient. The proposed solution to angle bias is to place a linear constraint such that the sum of the weight becomes zero. Although this does not rule out angle bias in general, it does so for the very special case where the expected value of the random vector is a vector consisting of a common value. Nevertheless, numerical experiments suggest that the proposed approach can effectively reduce angle bias and improves the accuracy for training data in the CIFAR-10 task. Test accuracy is not improved, however.\n\nOverall, this paper introduces an interesting phenomenon that is worth studying to gain insights into how to train deep neural networks, but the results are rather preliminary both on theory and experiments.\n\nOn the theoretical side, the linearly constrained weights are only shown to work for a very special case. There can be many other approaches to mitigate the impact of angle bias. For example, how about scaling each variable in a way that the mean becomes zero, instead of scaling it into [-1,+1] as is done in the experiments? When the mean of input is zero, there is no angle bias in the first layer. Also, what about if we include the bias term so that b + w a is the preactivation value?\n\nOn the experimental side, it has been shown that linearly constrained weights can mitigate the impact of angle bias on vanishing gradient and can reduce the training error, but the test error is unfortunately increased for the particular task with the particular dataset in the experiments. It would be desirable to identify specific tasks and datasets for which the proposed approach outperforms baselines. It is intuitively expected that the proposed approach has some merit in some domains, but it is unclear exactly when and where it is.\n\nMinor comments:\n\nIn Section 2.2, is Layer 1 the input layer or the next?", "The authors introduce the concept of angle bias (angle between a weight vector w and input vector x) by which the resultant pre-activation (wx) is biased if ||x|| is non-zero or ||w|| is non-zero (theorm 2 from the article). The angle bias results in almost constant activation independent of input sample resulting in no weight updates for error reduction. 
Authors chose to add an additional optimization constraint LCW (|w|=0) to achieve zero-mean pre-activation while, as mentioned in the article, other methods like batch normalization BN tend to push for |x|=0 and unit std to do the same. \n\nClearly, because of lack of scaling factor in case of LCW, like that in BN, it does not perform well when used with ReLU. When using with sigmoid the activation being bounded (0,1) seems to compensate for the lack of scaling in input. While BN explicitly makes the activation zero-mean LCW seems to achieve it through constraint on the weight features. Though it is shown to be computationally less expensive LCW seems to work in only specific cases unlike BN.", "Pros:\nThe paper is easy to read. Logic flows naturally within the paper.\n\nCons:\n\n1. Experimental results are neither enough nor convincing. \n\nOnly one set of data is used throughout the paper: the Cifar10 dataset, and the architecture used is only a 100 layered MLP. Even though LCW performs better than others in this circumstance, it does not prove its effectiveness in general or its elimination of the gradient vanishing problem. For the 100 layer MLP, it's very hard to train a simple MLP and the training/testing accuracy is very low for all the methods. More experiments with different number of layers and different architecture like ResNet should be tried to show better results. \n\nIn Figure (7), LCW seems to avoid gradient vanishing but introduces gradient exploding problem.\n\nThe proposed concept is only analyzed in MLP with Sigmoid activation function. In the experimental parts, the authors claim they use both ReLU and Sigmoid function, but no comparisons are reflected in the figures. \n\n2. The whole standpoint of the paper is quite vague and not very convincing.\nIn section 2, the authors introduce angle bias and suggest its effect in MLPs that with random weights, showing that different samples may result in similar output in the second and deeper layers. However, the connection between angle bias and the issue of gradient vanishing lacks a clear analytical connection. The whole analysis of the connection is built solely on this one sentence \"At the same time, the output does not change if we adjust the weight vectors in Layer 1\", which is nowhere verified. 
Perhaps, more advantages might be seen in completely different domains or with better regularization methods.", "In addition to the modifications discussed in the responses to the reviewer comments, we have revised\nour paper in the following way:\n\n- Vectors and matrices are written in bold font.\n\n- Figures 4 and 5 are added to show the effect of the angle bias in a 50 layer MLP with ReLU activations.\n Section 2.2.2 is added to discuss these figures.\n\n- Figures 10 and 11 are added to show the effect of LCW (proposed method) in the MLP with ReLU activations.\n Section 3.1.2 is added to discuss these figures.\n\n- In the second paragraph of Section 3.3, we have added an explanation that bias terms are initialized to\n zero in the proposed method.\n", "We thank the reviewer for the insightful comments on our paper.\n\n--\n\nComment 1: Only one set of data is used throughout the paper: the Cifar10 dataset, and\nthe architecture used is only a 100 layered MLP.\n\nResponse 1: We did additional experiments with the SVHN dataset and the CIFAR-100 dataset\nfor each of which we trained 5 layered, 50 layered, and 100 layered MLPs.\nResults are shown in Figure 12, Figure 14, and Figure 15 in the revised manuscript.\n\n--\n\nComment 2: For the 100 layer MLP, it's very hard to train a simple MLP and the\ntraining/testing accuracy is very low for all the methods.\n\nResponse 2: We do not agree to the comment. The training accuracy for CIFAR-10 or SVHN dataset\nis high for the 100 layer MLP, if we apply LCW (proposed method) or batch normalization,\nas shown Figure 12 (a) and Figure 14 (a) in the revised manuscript.\n\n--\n\nComment 3: More experiments with different number of layers and different architecture\nlike ResNet should be tried to show better results. \n\nResponse 3: As mentioned in Response 1, we did experiments with several sizes of MLPs.\nWe also tried ResNet, but it was unable to train ResNet with LCW. This is mainly because\nReLU is used in ResNet, and the gradient explosion explained in Section 5.2 occurs.\nWe are now developing methods that make LCW applicable to ReLU nets, including ResNet.\n\n--\n\nComment 4: In Figure (7), LCW seems to avoid gradient vanishing but introduces gradient exploding problem.\n\nResponse 4: We agree to the comment. We have added an explanation on these points to the\nsecond paragraph of Section 6 in the revised manuscript.\n\n\n--\n\nComment 5: The proposed concept is only analyzed in MLP with Sigmoid activation function.\nIn the experimental parts, the authors claim they use both ReLU and Sigmoid function,\nbut no comparisons are reflected in the figures. \n\nResponse 5: We omitted results with ReLU in the figures, because MLPs with ReLU were not\ntrainable at all when LCW is applied, as mentioned in Section 5.2.\n\n--\n\nComment 6: In section 2, the authors introduce angle bias and suggest its effect in MLPs that\nwith random weights, showing that different samples may result in similar output in the second\nand deeper layers. However, the connection between angle bias and the issue of gradient\nvanishing lacks a clear analytical connection. The whole analysis of the connection is built\nsolely on this one sentence \"At the same time, the output does not change if we adjust the\nweight vectors in Layer 1\", which is nowhere verified. 
\n\nResponse 6: We have enriched the explanation in Section 2.1 in the revised manuscript,\ndenoting that the shrinking of the distribution of the angle between the weight vector and the\nactivation vector is a reason for why the activation becomes almost constant in deep layers.\nMoreover, we have added analytical results in Section 2.3 that examine the relationship\nbetween the constant activation in deeper layers and the vanishing gradient of weights.\n\n--\n\nComment 7: The phenomenon is only tested on random initialization. When the network is trained\nfor several iterations and becomes more settled, it is not clear how \"angle affect\" affects\ngradient vanishing problem.\n\nResponse 7: We have added Figures 8 and 9, which show the activation and the distribution of\nangles in a MLP with sigmoid activation, respectively, after 10 epochs training.\nWe have also added discussions on these figures to the third paragraph of Section 3.1.1 in\nthe revised manuscript.\n\n--\n\nComment 8: Theorem 1,2,3 are direct conclusions from the definitions and are mis-stated as Theorems.\n\nResponse 8: We have modified the manuscript to refer to these statements as propositions instead of theorems.\n\n--\n\nComment 9: 'patters' -> 'patterns'\n\nResponse 9: In accordance with the comment, we have modified the expression.\n\n--\n\nComment 10: In section 2.3, reasons 1 and 2 state the similar thing that output of MLP has relatively\nsmall change with different input data when angle bias occurs. Only reason 1 mentions the gradient\nvanishing problem, even though the title of this section is \"Relation to Vanishing Gradient Problem\". \n\nResponse 10: In accordance with the comment, we have deleted the second reason from the manuscript.\nAlso, we have enriched the explanation related to reason 1, as mentioned in Response 6.\n", "We thank the reviewer for taking the time to evaluate our paper.\n\n--\n\nComment 1: The authors introduce the concept of angle bias (angle between a weight vector w\nand input vector x) by which the resultant pre-activation (wx) is biased if ||x|| is non-zero\nor ||w|| is non-zero (theorem 2 from the article). The angle bias results in almost constant\nactivation independent of input sample resulting in no weight updates for error reduction.\nAuthors chose to add an additional optimization constraint LCW (|w|=0) to achieve zero-mean\npre-activation while, as mentioned in the article.\n\nResponse 1: We did not intend to indicate that the proposed method (LCW) adds additional\nconstraint ||w||=0 on weight vectors, and we have added an explanation to clearly state that\nit is assumed that ||w|| > 0 in our paper to the first paragraph of Section 2.1 in the\nrevised manuscript.\nThe proposed method adds constraints 'w_1 + .. + w_m = 0' on weight vectors w, where\nw = (w_1, ..., w_m)^\\top in R^m, to force w perpendicular to 1_m = (1, ..., 1) in R^m,\nwhich is assumed to be the mean vector of the activation vector in the previous layer.\n\n--\n\nComment 2: Clearly, because of lack of scaling factor in case of LCW, like that in BN,\nit does not perform well when used with ReLU. When using with sigmoid the activation being\nbounded (0,1) seems to compensate for the lack of scaling in input.\n\nResponse 2: As the reviewer pointed out, the lack of scaling factor in LCW is a cause\nfor not performing well with ReLU. We tried ReLU6 (= min(max(x, 0), 6)) instead\nof ReLU with LCW, but it was still hard to train a deep MLP, in which the exploding gradient\nstill occurred. 
We are now developing methods to make LCW applicable to ReLU nets.\n\n--\n\nComment 3: While BN explicitly makes the activation zero-mean LCW seems to achieve it through\nconstraint on the weight features. Though it is shown to be computationally less expensive\nLCW seems to work in only specific cases unlike BN.\n\nResponse 3: We agree that LCW has limitation compared to BN as of now. However, it is also\nvery important to understand why batch normalization works so well in many situations.\nWe believe that reducing angle bias is a crucial role of batch normalization, and such\ninterpretation helps us to determine in which part of the network we should apply methods\nlike batch normalization.", "We thank the reviewer for the insightful comments on our paper.\n\n--\n\nComment 1: How about scaling each variable in a way that the mean becomes zero, instead of\nscaling it into [-1,+1] as is done in the experiments? When the mean of input is zero,\nthere is no angle bias in the first layer.\n\nResponse 1: We did experiments with CIFAR-10, in which each variable was scaled to have\nzero mean. As the reviewer pointed out, we have no angle bias in the first layer (the layer\nafter the input layer) in this case.\nHowever, the training of MLPs then got harder and the test accuracy was very row, even if\nwe applied either LCW or batch-normalization. We think this is because normalizing each pixel\nof images in CIFAR-10 ruined the relationship between pixels.\n\n--\n\nComment 2: What about if we include the bias term so that b + w a is the preactivation value?\n\nResponse 2: We have already included the bias term in our original experiment, although\nit was omitted in Equation 2 for simplicity. We have modified Equation 2 to include\nthe bias term for clarity in the revised manuscript.\n\n--\n\nComment 3: It would be desirable to identify specific tasks and datasets for which\nthe proposed approach outperforms baselines. It is intuitively expected that the proposed\napproach has some merit in some domains, but it is unclear exactly when and where it is.\n\nResponse 3: We did additional experiments with the SVHN dataset and the CIFAR-100 dataset,\nwhich are reported in the appendix B of the revised manuscript. The peak value of the test\naccuracy of the proposed method was comparable to that of batch-normalization when the MLP\nhas 5 layers or 50 layers, as shown in Figure 12 (f) and (i), Figure 14 (f) and (i), and\nFigure 15 (f) and (i).\nAn interesting point is that the peak of the test accuracy is around 20 epochs in the\nproposed method. However, we have no clear explanation for this finding. We have added\na description on this point in the third paragraph of Section 5.1 in the revised manuscript.\n\n--\n\nComment 4: In Section 2.2, is Layer 1 the input layer or the next?\n\nResponse 4: Layer 1 is the layer next to the input layer. We have added an explanation of\nthese points to the first paragraph of Section 2.2.1 in the revised version.\n" ]
[ -1, 5, 5, 4, -1, -1, -1, -1, -1 ]
[ -1, 4, 4, 4, -1, -1, -1, -1, -1 ]
[ "S1wQ85JQf", "iclr_2018_HylgYB3pZ", "iclr_2018_HylgYB3pZ", "iclr_2018_HylgYB3pZ", "BkxFmu17z", "iclr_2018_HylgYB3pZ", "r1k49d7-M", "H1tXgwtgG", "SymE04bxf" ]
iclr_2018_SkHkeixAW
Regularization for Deep Learning: A Taxonomy
Regularization is one of the crucial ingredients of deep learning, yet the term regularization has various definitions, and regularization methods are often studied separately from each other. In our work, we present a novel, systematic, unifying taxonomy to categorize existing methods. We distinguish methods that affect data, network architectures, error terms, regularization terms, and optimization procedures. We identify the atomic building blocks of existing methods, and decouple the assumptions they enforce from the mathematical tools they rely on. We do not provide all details about the listed methods; instead, we present an overview of how the methods can be sorted into meaningful categories and sub-categories. This helps reveal links and fundamental similarities between them. Finally, we include practical recommendations both for users and for developers of new regularization methods.
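To make the five top-level categories of the taxonomy in the abstract above concrete, the sketch below marks where each one appears in a single training step (the tiny model, dummy batch, and hyperparameters are illustrative assumptions, not an example taken from the paper):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
x = torch.randn(64, 1, 28, 28)                    # dummy batch
y = torch.randint(0, 10, (64,))

model = nn.Sequential(                            # (2) architecture: e.g. dropout
    nn.Flatten(), nn.Linear(784, 256), nn.ReLU(),
    nn.Dropout(0.5), nn.Linear(256, 10))

opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9,
                      weight_decay=1e-4)          # (4) regularization term: L2
                                                  # (5) optimization: SGD noise,
                                                  #     early stopping, etc.

x_aug = x + 0.1 * torch.randn_like(x)             # (1) data: augmentation / noise
loss = F.cross_entropy(model(x_aug), y)           # (3) error term: choice of loss

opt.zero_grad()
loss.backward()
opt.step()
print(float(loss))
```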
rejected-papers
The paper is a well-written review of regularization approaches in deep learning. It does not offer novel approaches or novel insights with empirically demonstrated usefulness; ICLR is therefore not the appropriate venue for it.
test
[ "rywwiW8xz", "HysLX85lf", "r1Dj4EXbf", "rkueKoimM", "Bk1wYis7G", "SkltKjjXG", "Hyg2KssmM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "This paper is unusual in that it is more of a review than contributing novel knowledge. It considers a taxonomy of all the ways that machine learning (mostly deep learning) methods can achieve a form of regularization. \n\nUnfortunately, it starts with a definition of regularization ('making the model generalize better') which I believe misses the point which was made in Goodfellow et al 2016 ('intend to improve test error but not necessarily training error'), i.e., that we would like to separate as much as possible the regularization effects from the optimization effect. Indeed, under the definition proposed here, any improvement in the optimizer could be considered like a regularizer, so long as we are not in the overfitting regime. That does not sound right to me.\n\nThere are several places where the authors make TOO STRONG STATEMENTS, taking for truth what are simply beliefs with no strong supporting evidence (at least published). This is not good for a review and when making recommendations.\n\nThe other weakness I estimate in this paper is that I did not get a sense that the taxonomy really helped us (me at least) to get insight into the different mentions being cited. Besides the obvious proposal to combine ideas to write new papers (but we did not need that paper to figure that out) I did not find much meat in the 'future directions' section.\n\nHowever, I that except in a few places the understand of the field displayed by the authors is pretty good and, with correction, could serve as a useful reference for students of deep learning. The recommendations were reasonable although lacking empirical support (or pointers to the literature), so I would take them somewhat carefully, more as the current 'group think' than ground truth.\n\nFinally, here a few minor points which could be fixed.\n\nEq. 1: in typical DL, minimization is approximate, not exact, so the proposed formalism does not reflect reality.\n\nEq. 4: in many cases, the noise is not added (e.g. dropout), so that should be clarified there.\n\npage 3, first bullet of 'Effect on the data representation': not clear, may want to give translations as an example of such transformations,.\n\npage 8, activation functions: the ReLU is actually older than the cited papers, it was used by computational neuroscientists a long time ago. Jarrett 2009 did not use the ReLU but an absolute-value rectifier and it was Glorot 2011 who showed that the ReLU really kicked ass for deeper networks. Nair 2010 used the ReLU in a very different context (RBMs), not really feedforward multi-layer networks where it shines now.\nIn that same section (and probably elsewhere) there are TOO STRONG STATEMENTS, e.g., the \"facts\" mentioned are not facts but merely folk belief, as far as I know, and I would like to see well-done supporting evidence before treating those as facts. Note that approximating the sigmoid precisely would require many ReLUs!\n\npage 8: it is not clear how multi-task learning fits under the 'architecture' formalism provided at the beginning of section 4.\n\nsection 7 (page 10): there is earlier work on the connection between early stopping and L2 regularization, at least dating back to Ronan Collobert's PhD thesis (with neural nets), probably earlier for linear systems.", "The paper attempts to build a taxonomy for regularization techniques employed in deep learning. 
The authors categorize existing works related to regularization into five big categories, including data, model architecture, regularization term, error term and optimization. Subgroups are identified based on certain attributes. \n\nThe paper is written as a survey paper on the literature related to regularization in deep learning. The five top-level categories are quite obvious. The authors organize works belonging to the first three categories into three big tables, and summarize the key point of each one using one-liners to provide an overview for readers. While it is a worthy effort, I am not sure it offers much value to readers. Also, there is a mix of trainability and regularization. Some of the works were proposed to address trainability issues instead of regularization, for example, DenseNet, and some of the initialization techniques. \n\nThe authors try to group items in each category into sub-groups according to certain attributes, however, little explanation on how and why these attributes are identified was provided. For example, in table 1, what kind of information does the transformation space or phase provide in terms of helping readers choosing a particular data transformation / augmentation technique. At the end of section 3, the authors claim that dropout and BN are close to each other. Please elaborate on this point. \n\nThe authors offer some recommendation on how to choose or combine different regularization techniques at the end of the paper. However, it is not clear from reading the paper where these insights came from. 
We address other minor comments in additional individual replies under each review.\n\n#1\nThe reviewers consider our paper to be a mere survey of existing methods, lacking novelty, and having low value for an experienced audience, thus unsuitable for ICLR.\n\nWe disagree with claims (mainly by R3) reducing our contribution to a mere review and classification of methods, because it is the classification scheme itself that is novel and central to our paper.\n\nOur work is unprecedented in the scale of the analysis, considering methods ranging from regularization via data to studying the effects of optimization procedures. It is novel in the way it 1) decouples the \"why\" (assumptions the methods are enforcing) from the \"how\" (mathematical tools for the enforcement), 2) identifies \"atomic\" building blocks of the regularization methods, 3) simplifies discovery of new methods via recombinations of building blocks, and 4) offers a big picture by presenting relations between methods.\n\nResearchers from applied fields may benefit from our taxonomy, as they can focus on the \"why\", i.e. discover new assumptions to be enforced. Thus, we believe our work has much higher application potential than just \"a useful reference for students of deep learning\" (R1) or \"introductions in PhD thesis\" (R3). On the other hand, deep learning researchers can focus on the \"how\" and discover new ways to enforce assumptions.\n\nR1's claim that our suggestion to combine existing methods is \"obvious\" and \"(the community) did not need a paper for that\" is greatly undervaluing our contribution: We do not only say that combining of existing methods can yield new ones; we go further and identify atomic blocks, along with their benefits and limitations.\n\nMoreover, we present novel perspectives on popular techniques (dropout from the optimization point of view, model compression in the data-transformation framework), contributing to their broader understanding.\n\nWe updated the abstract and introduction to better clarify these points. For the listed reasons, we kindly ask the reviewers to reconsider their ratings.\n\n#2\nR1 and R2 consider our definition of regularization to be too wide, encompassing also trainability (R2) and optimization methods (R1).\n\nWe believe it is the only correct approach. It is not so crucial what the optimum of the empirical risk is because 1) it cannot be found exactly, and 2) the empirical and expected risk are not equal; rather, the shape of the loss function and the optimization procedure play together to dictate how the training proceeds in the weight space and where it ends up; therefore, the effects of changing the loss function and optimization procedure are entangled and cannot be simply separated. The learned solution depends on all factors from Section 2.\n\nThis is supported by Zhang et al. (ICLR 2017), who demonstrate that explicit regularization is not sufficient to explain good generalization ability of deep nets. Also consider following examples of methods fitting the community understanding of \"regularization\" which can be simultaneously considered modifications to the optimization procedure or methods improving trainability:\n\n- Dropout is considered regularization. In Section 7, Figure 1, we show how it can be interpreted as a modification to the optimization procedure.\n\n- Weight decay is a regularizer, Krizhevsky et al. (2009) report it to help trainability of the network too.\n\n- Narrowing down the initial hypothesis space is a form of regularization. 
Pre-training the network weights performs this implicitly (because it limits the subspace of weight configurations which the algorithm can in practice reach), thus it cannot be considered only trainability or optimization improvement.\n\n- Batchnorm was designed to address trainability issues of deep nets; however, Ioffe and Szegedy (2015) also argue it works as a regularizer, introducing noise into the network through batch shuffling and reducing overfitting.\n\nSuch explanations (e.g. beginning of Section 4) are present in the paper; we also added a clarification to the beginning of Section 7 of the revised paper.\n\nWe hope that this demonstrates well that it is not possible to set a clear boundary between regularization and optimization/trainability; instead, they all must be considered when dealing with improving generalization of neural nets. Thus, we find this point of criticism invalid. We kindly ask the reviewers to reconsider their ratings.\n\nAdditional minor comments can be found under each review.", "Major comments are in our common response to all reviewers.\nMinor comments to R1:\n1. Regarding too strong statements:\nWe tried our best to provide references for all the statements made in our paper, allowing the readers to find out the precise context from which each claim originates. In the revised version, we removed the problematic statement about the effect of ReLUs on vanishing gradients and clarified the statement about their expressivity. If there are still some other too strong statements, it would be most helpful if the reviewer could identify them and we will be glad to adjust them.\n\n2. \"I did not get a sense that the taxonomy really helped us (me at least) to get insight into the different mentions being cited. \"\nAs we mention both in the abstract and in the introduction, we did not attempt to fully describe all details of the individual listed methods: \"We are aware that the research works discussed in this taxonomy cannot be summarized in a single sentence. For the sake of structuring the multitude of papers, we decided to merely describe a certain subset of their properties according to the focus of our taxonomy.\"\n\nInstead, we aimed to identify the tools\" they rely on and to show the connections between them. We provide insights about atomic properties of methods (e.g. description of possible data transformations on pages 5-6), and about different ways to interpret certain methods (e.g. Figure 1).\n\n3. \"The recommendations were reasonable although lacking empirical support (or pointers to the literature), so I would take them somewhat carefully, more as the current 'group think' than ground truth.\"\nDescribing methods without saying when and how to use them is inconclusive and may leave many readers more confused than informed, which is why we added this section. Moreover, providing some practical pointers and recommended approaches improves the application potential of our paper and increases its value for readers with limited experience with deep learning.\n\nWe agree that these recommendations are primarily based on our experience and general \"unwritten knowledge\" that is \"between the lines\" in state-of-the-art literature; to make this clear, we added this following disclaimer into the text: \"Note that these recommendations are neither the only nor the best way; every dataset may require a slightly different approach. 
Our recommendations are a summary of what we found to work well, and what seems to be common themes and \"written between the lines\" in many state-of-the-art works\"\n\n4. Eq. 1 was updated.\n\n5. Eq. 4: As mentioned in its preceding sentence, this equation gives merely an example of a transformation. Indeed, it can have any other, more complicated form.\n\n6. Effect on data representation:\nWe intended to keep this list free of examples and believe the explanations are clear enough. Examples like translation transformation might also mislead the reader about what exact property we mean and introduce false idea of necessary rigidity.\n\n7. Regarding activation functions:\nIn the revised version, we added a reference to (Hahnloser et al., 2000). The other papers are cited for the following reasons: Jarett et al. (2009) is the first occurrence in deep learning context (\"Several rectifying non-linearities were tried, *including the positive part*, and produced similar results.\"), Nair and Hinton (2010) are the first to call the function \"ReLU\", and finally Glorot et al. (2011) is mentioned exactly for the reasons stated by the reviewer, as a very good overview of the properties and qualities of this activation. \n\n8. \"Note that approximating the sigmoid precisely would require many ReLUs!\"\nWe do not claim it can be done precisely, instead we give an example of an approximation with small integrated absolute error and small integrated squared error. Such small/finite integrated absolute error and integrated squared error are possible when approximating a sigmoid with few ReLUs, but not when approximating a ReLU with few sigmoids. Note that similar approximation of tanh (hard tanh) is often used in practice.\n\n9. Regarding multi-task learning:\nWe included multi-task learning in the architecture section because the network architecture needs to be modified (additional branches etc.) to process additional tasks. Note that we also mention it in the error function section because also the error function needs to be modified. We updated the text to make this clear.\n\n10. Regarding the discussion on page 10:\nThe discussion on page 10 is not related to the work of Collobert and Bengio (2004), who analyze the connection between early stopping and L2 regularization. Our discussion focuses on properties of SGD and the relation between training and testing error whereas early stopping relies on the connection between validation and test error. However, we appreciate the remark about the connection between early stopping and L2 regularization, and we added it to the article.\n\nSee also response to all reviewers.", "Major comments can be found in our common response to all reviewers.\nMinor comments to R2:\n1. \"The authors try to group items in each category into sub-groups according to certain attributes, however, little explanation on how and why these attributes are identified was provided...\"\nIn sections 6 and 7 we provided clear explanations about the choice of subcategories. The remaining sections do not allow such clear distinction and our subcategories are one of several possible choices. Our choice is driven by the goal of separating as many separable concepts as possible.\n\n2. \"For example, in table 1, what kind of information does the transformation space or phase provide in terms of helping readers choosing a particular data transformation / augmentation technique\"\nThe taxonomy is not only about heuristics for choosing among methods. 
It is about something more fundamental: about understanding the \"atomic\" properties of the methods and their relationships. The transformation space and the phase are properties of methods. We do not claim that understanding the properties fully dictates which method will work well with what dataset.\n\n3. Regarding the closeness between Dropout and Batch normalization:\nHere we refer to the fact that both methods rely on applying a simple transformation on the hidden-feature representation of the data.\n\n4. \"The authors offer some recommendation on how to choose or combine different regularization techniques at the end of the paper. However, it is not clear from reading the paper where these insights came from.\"\nSee #3 in the comments to R1\n\nMajor comments can be found in our common response to all reviewers.", "Major comments can be found in our common response to all reviewers.\nMinor comments to R3:\n1. \"Although the authors mentioned some approaches to combine different regularisations, they did not perform any experiments supporting their ideas.\"\nThe core of our work was designing the taxonomy and identification of the atomic building blocks of individual regularization methods. Designing new types of regularization and validating them experimentally was not our aim, which is why these hints are in the section \"Future directions\". There is a vast amount of possibilities to recombine in novel ways the atomic properties which we described; this would go beyond the scope of our work.\n\nMajor comments can be found in our common response to all reviewers." ]
[ 5, 4, 4, -1, -1, -1, -1 ]
[ 5, 4, 5, -1, -1, -1, -1 ]
[ "iclr_2018_SkHkeixAW", "iclr_2018_SkHkeixAW", "iclr_2018_SkHkeixAW", "iclr_2018_SkHkeixAW", "rywwiW8xz", "HysLX85lf", "r1Dj4EXbf" ]
iclr_2018_r111KtCp-
Taking Apart Autoencoders: How do They Encode Geometric Shapes ?
We study the precise mechanisms which allow autoencoders to encode and decode a simple geometric shape, the disk. In this carefully controlled setting, we are able to describe the specific form of the optimal solution to the minimisation problem of the training step. We show that the autoencoder indeed approximates this solution during training. Secondly, we identify a clear failure in the generalisation capacity of the autoencoder, namely its inability to interpolate data. Finally, we explore several regularisation schemes to resolve the generalisation problem. Given the great attention that has been recently given to the generative capacity of neural networks, we believe that studying in depth simple geometric cases sheds some light on the generation process and can provide a minimal requirement experimental setup for more complex architectures.
rejected-papers
+ interesting approach for a detailed analysis of the limitations of autoencoders in solving a simple toy problem\n- resulting insights somewhat trivial, not really novel, nor practically useful => lacks demonstration of a gain on non-toy task\n- regularization study too limited in scope: lacking theoretical grounding, and more exhaustive comparison of regularization schemes.
train
[ "H1s3VoOlf", "SJQ2Xg2eM", "SJOrnJf-M", "Hk4SO_-zG", "B1edwu-zf", "ryPkDObfM", "BJcjHdWzG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "1. The idea is interesting, but the study is not comprehensive yet\n2. need to visualize the input data space, with the training data, test data, the 'gaps' in training data [see a recent related paper - Stoecklein et al. Deep Learning for Flow Sculpting: Insights into Efficient Learning using Scientific Simulation Data. Scientific Reports 7, Article number: 46368 (2017).]. \n3. What's the effect of training data size? \n4. How do the intermediate feature maps look like? \n5. Is there an effect of number of layers? Maybe the network architecture is too deep for the simple data characteristics and size of training set. \n6. Other shapes are said to be part of future work, but I am not convinced that serious conclusions can be drawn from this study only? \n7. What about the possible effects of Batch normalization and dropout? \n8. size of 'd' is critical for autoencoders, only one example in appendix does not do justice, also it seems other color channels show up in the results (fig 10), wasn't it binary input?", "The paper considers a toy problem: the space of images of discs of variable radius - a one dimensional manifold.\n\nAn autoencoder based on convolutional layers with ReLU is experimented with, with a 1D embedding.\n\nIt is shown that \n1) if the bias is not included, the resulting function is homogeneous (meaning f(ax)=af(x)), and so it fails because the 1D representation should be the radius, and the relationship from radius to image is more complex than a homogeneous function.\n- if we include the bias and L2 regularise only the encoder weights, it works better in terms of interpolation for a limited data sample.\n\nThe thing is that 1) is trivial (the composition of homogeneous functions is homogeneous... so their proof is overly messy btw). Then, they continue by further analysing (see proposition 2) the solution for this case. Such analysis does not seem to shed much light on anything relevant, given that we know the autoencoder fails in this case due to the trivial proposition 1.\n\nAnother point: since the homogeneous function problem will not arise for other non-linearities (such as the sigmoid), the focus on the bias as the culprit seems arbitrary.\n\nThen, the story about interpolation and regularisation is kind of orthogonal, and then is solved by an arbitrary regularisation scheme. The lesson learned from this case is basically the second last paragraph of section 3.2. In other words, it just works.\n\nSince it's a toy problem anyway, the insights seem somewhat trivial.\n\nOn the plus side, such a toy problem seems like it might lead somewhere interesting. I'd like to see a similar setup but with a suite of toy problems. e.g. vary the aspect ratio of an oval (rather than a disc), vary the position, intensity, etc etc.", "This paper proposes a simple task (learning the manifold of all the images of disks) to study some properties of Autoencoders. They show that Autoencoders don't generalize to disks of radius not in the training set and propose several regularization to improve generalisation.\n\nThe task proposed in the paper is interesting but the study made is somewhat limited:\n\n- They only studied one choice of Autoencoder architecture, and the results shown depends heavily on the choice of the activation, in particular sigmoid should not suffer from the same problem. 
\n\n- It would be interesting to study the generalization in terms of the size of the gap.\n\n- The regularization proposed is quite simple and already known, and other regularizations have been proposed (e.g. dropout, ...). A more detailed comparison with all previous regularization schemes would be needed. \n\n- The choice of regularization at the end seems quite arbitrary; it works better on this example, but it's not clear at all why, or whether this choice would work for other tasks.\n\nAlso, Denoising Autoencoders (Vincent et al.) should probably be mentioned in the previous work section, as they propose a solution to the regularization of Autoencoders.\n\nOverall, nothing really new was discovered or proposed; the lack of generalization of this kind of architecture is a well-known problem, and the regularization proposed was already known. ", "3/ What's the effect of training data size? \n\nIn this situation, the training data size is imposed; it is the total possible number of centred disks in a 64 x 64 image.\n\n4/ What do the intermediate feature maps look like?\n\nIn our case, the feature maps resemble disks in the encoding and decoding, which is natural. However, we were not able to interpret them in a meaningful way. Thus, we turned to a different approach, the ablation study.\n\n5/ Is there an effect of the number of layers? Maybe the network architecture is too deep for the simple data characteristics and the size of the training set.\n\nIn our case, the number of layers is imposed by the problem and the subsampling coefficient (1/2). We experimented with other, more drastic subsampling, with less success; we chose the minimal architecture which worked correctly.\n\n6/ Other shapes are said to be part of future work, but I am not convinced that serious conclusions can be drawn from this study alone.\n\nWe have added some further experiments on ellipses in the supplementary material \nhttps://www.dropbox.com/s/hn2akqgqh9m0qxg/autoencoders_sup_mat.pdf?dl=0\n\n7/ What about the possible effects of Batch normalization and dropout?\n\nBatch normalisation is not explored in our case, it is true. In the first part of the paper, we are concerned with an optimal theoretical decoding solution, and we show that the network without biases is able to find this solution, so batch normalisation would not help here. In the second part, we are concerned with improving robustness to missing data. This can only be remedied by regularisation; batch normalisation may speed up convergence, but if it is to an incorrect solution, it is not that useful. Dropout can improve the generalisation capacities of the network, this is true, but only when the network has an excessive number of neurons. In this paper, we restricted the architecture to be as minimal as possible. As an illustration, if we happened to drop out at the latent layer, we might disconnect the network during the current gradient step! Thus, dropout did not seem a good idea to us.\n\n8/ Size of 'd' is critical for autoencoders; only one example in the appendix does not do it justice. Also, it seems other color channels show up in the results (fig 10); wasn't the input binary?\n\nOne of the main goals of this paper is to put ourselves in a situation where we know the optimal value of d (in this case d=1). The results referred to in the Appendix are from another work by Zhu et al., whose network attempts to learn the image manifold. The colour channels come from the fact that we used their code, which is designed for colour images. 
We showed their results for d=1 and d=100 so that we could not be criticised for giving their algorithm too much or too little freedom in terms of dimensionality.\n", "1/ They only studied one choice of Autoencoder architecture, and the results shown depend heavily on the choice of the activation; in particular, the sigmoid should not suffer from the same problem.\n\nThe main idea of our paper is to study in detail the minimal generalisable architecture needed for encoding and decoding a disk, the rationale being that if the autoencoder does not work in that situation, then there is a serious problem. If it does work (which is indeed the case), then how does it work? There is no reason that the problems discussed in our paper, such as generalisation, would be resolved using a sigmoid. We believe that this may be a misunderstanding due to the fact that we are learning with binary images. In such a case, it may indeed make sense to use a sigmoid; however, this is not connected to the ablation study or the generalisation problem. We have carried out the same ablation study using sigmoids instead of leaky ReLUs and this did not resolve the observed behaviour.\n\n2/ It would be interesting to study the generalization in terms of the size of the gap.\n\nThis is indeed an interesting point, as clearly there should be some limit point at which the autoencoder cannot generalise. However, this would significantly increase the scope of the paper, making it too long for publication in the ICLR format.\n\n3/ - The regularization proposed is quite simple and already known, and other regularizations have been proposed (e.g. dropout, ...). A more detailed comparison with all previous regularization schemes would be needed.\n\nWe agree that regularising filter weights is widely known and used. However, we have not observed in the literature the asymmetric approach which consists in regularising only the encoder; if there are any such references, we would be happy if the reviewers could indicate them. In Section 3.2.4, we point out that regularising both the encoder and the decoder leads to a less stable autoencoder. We have carried out similar experiments on a 2D latent space (with ellipses), and found that this marked behaviour is observed again. We propose to add these experiments to our supplementary material, see\nhttps://www.dropbox.com/s/hn2akqgqh9m0qxg/autoencoders_sup_mat.pdf?dl=0\nConcerning comparison with other regularisation schemes, we cannot compare with sparse autoencoders, since these try to encourage sparsity in the latent space and in our case the latent space cannot be any more sparse.", "1/ It is shown that 1) if the bias is not included, the resulting function is homogeneous (meaning f(ax)=af(x)), and so it fails because the 1D representation should be the radius, and the relationship from radius to image is more complex than a homogeneous function.\n2/ The thing is that 1) is trivial (the composition of homogeneous functions is homogeneous... so their proof is overly messy btw). Then, they continue by further analysing (see proposition 2) the solution for this case.\n3/ Another point: since the homogeneous function problem will not arise for other non-linearities (such as the sigmoid), the focus on the bias as the culprit seems arbitrary.\n\nYes, the fact that without biases the output of the AE is of the form alpha(r)F (where r summarises the input disk and F is a fixed image) is trivial; however, it was necessary to state it. 
But we showed that in this case the AE actually finds the optimal possible F and function alpha(r), which is proportional to the dot product between the input disk and the function F, and not simply the area of the disk (see figures 7 and 8). The important fact here is that the ablated AE succeeds perfectly, in that it finds the best provable solution inside the range of its capacity. In deep learning it is rare to be able to show that the network is correctly approximating the best possible solution, therefore we found this point to be noteworthy. We address the question of using sigmoids in the reply to question 1/ of ``AnonReviewer2''.", "We wish to thank the reviewers for their comments and criticisms, which we found to be useful and constructive. Before replying to the specific comments of the reviewers, we would first like to make some general points concerning the goal and scope of our work.\n\nFirstly, we would like to clarify that in Section 3.2.2. we study a situation where we can describe the optimal solution of the training problem analytically. The ablation study was carried out and analysed to show that the autoencoder finds the optimal solution in this case. To the best of our knowledge, few such optimality results exist in the deep learning literature. Nevertheless, we do recognize that the case is simple.\n\nSecondly, in Section 3.2.3. we investigate difficulties of the network to generalise. This is a very well-known problem concerning autoencoders (and GANs), which is often briefly discussed in the literature, but is rarely analysed in detail. In the case of complex images it is very difficult to decide whether a network is producing new examples or just copying examples from a database. We confirm the ubiquity of this problem by showing that it happens even in the case of the state-of-the-art work of Zhu et al. applied to disks.\n\nFinally, in Section 3.2.4, we identify a solution to this problem in the form of an assymmetric weight regularisation, the regularisation of the encoder weights. This greatly improves the autoencoder's generalisation capacity. This asymmetric regularisation has not been proposed in the literature, to the best of our knowledge (please correct us if we are wrong in this respect). It is possible that in the submitted version of the paper we did not highlight this enough, but we believe it to be significant.\n\n- New experiment : we have tested the asymmetric version of the regularization in a more complex case, with ellipses, and the improvement is even clearer than in the case of disks. We show these results in the following document\nhttps://www.dropbox.com/s/hn2akqgqh9m0qxg/autoencoders_sup_mat.pdf?dl=0 \nNumerically, this leads to an order of magnitude improvement in the l2 loss of the network on unobserved examples.\n\nBelow are our replies to the specific comments of the reviewers." ]
[ 4, 4, 4, -1, -1, -1, -1 ]
[ 5, 4, 3, -1, -1, -1, -1 ]
[ "iclr_2018_r111KtCp-", "iclr_2018_r111KtCp-", "iclr_2018_r111KtCp-", "H1s3VoOlf", "SJOrnJf-M", "SJQ2Xg2eM", "iclr_2018_r111KtCp-" ]
iclr_2018_rkhxwltab
AANN: Absolute Artificial Neural Network
This research paper describes a simplistic architecture named AANN: Absolute Artificial Neural Network, which can be used to create highly interpretable representations of the input data. These representations are generated by penalizing the learning of the network in such a way that those learned representations correspond to the respective labels present in the labelled dataset used for supervised training, thereby simultaneously giving the network the ability to classify the input data. The network can be used in the reverse direction to generate data that closely resembles the input by feeding in representation vectors as required. This research paper also explores the use of mathematical abs (absolute valued) functions as activation functions, which constitutes the core part of this neural network architecture. Finally, the results obtained on the MNIST dataset by using this technique are presented and discussed in brief.
rejected-papers
The paper proposes to use absolute value activations, in a joint supervised + unsupervised training (classification + deep autoencoder with tied encoder/decoder weights).\nPros:\n+ simple model and approach on ideas worth revisiting\nCons:\n- The paper initially approached these old ideas as novel, missing much related prior work\n- It doesn't convincingly breathe novel insight into them.\n- Empirical methodology is not up to standards (non-standard data split, lack of strong baselines for comparison)\n- Empirical validation is too limited in scope (MNIST only).
train
[ "SyRcMDPeG", "S1itI_FxM", "B1J8Hw9xz", "H1-Y4cCmz", "rJy6y907M" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author" ]
[ "SUMMARY \n\nThe model is an ANN whose units have the absolute value function abs as their activation function (in place of ReLU, sigmoid, etc.). The network has bi-directional connections (with equal weights) between consecutive layers, but it operates only in one direction at a time. In the forward direction, it is a feed-forward net from image to classification (say); in the reverse direction, it is a feed-forward net from classification to image. In both directions it operates in supervised fashion, trained with backpropagation (subject to the constraint that the weight matrix is symmetric). In the forward direction, the activation vector y over the classification layer is L2-normalized so the activation of a class c is the cosine of the angle between y and the 1-hot vector for c.\nAlthough there is a reverse pass through the net, the training loss function is not adversarial; the loss is just the classification error in the forward pass plus the reconstruction error in the backward pass.\nThe generalization accuracy in classification on 42k-image MNIST is 97.4%.\n\nSTRENGTHS\n\n* Comparisons are made of the reconstruction performance with the proposed abs activation function and with ReLU on one or both passes, and a linear activation function.\n* The model has the virtue of simplicity, as the authors point out.\n\nWEAKNESSES\n\n* The discussion of evaluation of the model is weak. \n - No baselines are given. (A kaggle leaderboard shows the 50th-ranked model at 97.8% and the top 8 models at 100%.)\n - The paper talks of a training set and a \"dev\" set, but no test set, and generalization performance is given for the dev set rather than a test set.\n - No quantitative evaluation of the reconstruction (backward pass) performance is given, just by-eye comparison of the reconstruction error through figures.\n - Some explanation is needed of why the ReLU cases were fatally plagued with NaN errors.\n* Claims of interpretability advantage seem unwarranted since the claimed interpretability applies to any classification ANN, as far as I can see.\n* The work seems to be at too preliminary a stage to warrant acceptance at ICLR.", "The paper proposes using the absolute value activation function in (what seems to be) an autoencoder architecture with an additional supervised learning term in the objective function that encourages the bottleneck layer representation to be discriminative. A few examples of reconstructed images and classification performance are reported for the MNIST dataset.\n\nThe contribution of the paper is not clear. The idea of combining autoencoders with supervised learning has been explored before, see e.g., \"Learning Deep Architectures for AI\" by Bengio, 2009, and many other papers. Alternative activation functions have also been studied in many papers, see https://arxiv.org/pdf/1710.05941.pdf for a recent example. Even without novel algorithmic contributions, the paper would have been interesting if there was an extensive evaluation across several challenging datasets of different ways of combining autoencoders with supervised learning and different activation functions that gives better insight into what works and why.\n\nIt would be helpful not to introduce new terminology like \"bidirectional artificial neuron\" unless there is a significant difference from existing concepts. 
It is not clear from the paper how a network of bidirectional neurons is different from an autoencoder.", "This paper introduces a reversible network with absolute value used as\nthe activation function. The network is run in the forward direction\nto classify and in the reverse direction to generate.\n\nThe key points of the network are the use of the absolute value\nactivation function and the use of (free) normalization to match\ntarget output. This allows the network to perfectly map inputs to any\npoint on a vector that goes through the one-hot encoding, allowing for\ndeterministic generation from different vectors (of different lengths)\nwith the same normalized output.\n\nI think there are a lot of novel and interesting ideas in this paper\nthough they have not been fully explored. The use of the absolute\nvalue transfer function is new to me, though I was able to find a couple of old\nreferences to its use. In a paper by Gad et al. (2000), it is stated \n\" For example, the algorithm presented in Lin and\nUnbehauen (1995) < I think they mean Lin and Unbehauen 1990)> \n is used to train networks with a single hidden layer\nemploying the absolute value as the activation function of the hidden\nneuron. This algorithm was further generalized to multilayer networks\nwith cascaded structures in Batruni (1991).\" Exploring the properties \nof the abs activation function seems worth exploring.\n\nMore details on the training are needed for full clarity in the paper.\n(Though it is recognized that some of these could be determined from\nlinks when made active, they should be included in the paper). How\ndid you select the training parameters given at the bottom of page 5?\nHow many layers and units/layer did you use? And how were these\nselected? (The use of the links for providing code and visualizations (when active)\n is a nice feature of this paper).\n\nAlso, did you compare to using the leaky ReLU activation function --\nThat would be interesting as it also doesn't have any areas of zero\nslope? Did you compare the generated digits to those obtained using GANs?\n\nI am also curious, how does accuracy on digit classification differ\nwhen trained only to optimize the forward error?\n\nThe MNIST site referenced lists 60,000 training data and test data of\n10,000. How/why did you select 42,000 and then split it to 39900 in\nthe train set and 2100 in the dev set?\n\nAlso, the goal for the paper is presented as creating highly\ninterpretable representations of the input data. My interpretation of\ninterpretable is that the hidden units are \"interpretable\" and that it\nis clear how the combined hidden unit representations allow for\naccurate classification. Towards that end, it would be nice to see\nsome of the interpretations of the hidden unit representations. In\nthe abstract it states \" ...These representations are generated by\npenalizing the learning of the network in such a way that those\nlearned representations correspond to the respective labels present in\nthe labelled dataset used for supervised training\". Does this\nstatement refer only to the encoding of the representation vector or\nalso the hidden layers? If the former, isn't that true for all\nsupervised algorithms. If the latter, you should show this.\n\nBatruni, R. (1991). A multilayer neural network with piecewise-linear\nstructure and backpropagation learning. IEEE Transactions on Neural\nNetworks, 2, 395–403.\n\nLin, J.-N., & Unbehauen, R. (1995). Canonical piecewise-linear neural\nnetworks. 
IEEE Transactions on Neural Networks, 6, 43–50.\n\nLin, J.-N, & Unbehauen, R. (1990). Adaptive Nonlinear Digital Filter with Canonical Piecewise-Linear Structure,\nIEEE Transactions on Circuits and Systems, 37(3) 347-353.\n\nGad, E.F et al (2000). A new algorithm for learning in piecewise-linear neural networks.\nNeural Networks 13, 485-505.\n", "Thank you for the review. \n\nI would like to address the weaknesses that have been pointed out and try to give an explanation for them.\n\n1.) The choice of a baseline was a bit unclear, since all the recorded models present on the MNIST leaderboard only perform classification and do not have the reconstruction module through the same network. Besides, I perceive that comparing just the forward performance, as I have mentioned in the paper, is a bit unfair in this case.\n\n2.) The new revision of the paper has now included the details about the test set results. \n\n3.) I have presented a by eye comparison since quantitatively measuring the likeness among the reconstructed images and the original images is mathematically challenging, whilst also being susceptible to the pixel level difference in the noise smoothing caused by the reconstruction network. It has been mentioned in the paper, that a metric that takes into consideration all these factors while evaluating the backward performance faithfully is needed.\n\n4.) The mention of interpretable encodings (representations) is made since through this free normalization loss function, all the information about the digit's positions, orientations, thickness and curvedness is summerized along a positive real number range. \n\nThank you.", "To state explicitly what the intended features are:\n\n1.) Use of abs function (which has been duly noted)\n\n2.) Use of free normalization in objective function definition. It has been seen that usually the loss (objective) function is defined not only such that the prediction is close to the actual label but also so that the probabilities of other labels are minimised. My proposal is to let the network only focus on getting the correct label right while the latter is taken care of automatically if reconstruction is to be done through the same network.\n\n3.) The hypothesis that the process of reconstruction should be symbiotic with the process of classification / prediction and not adversarial. In the paper (Sabour et. al. 2017) the CapsNet uses a separate fully connected reconstruction module and uses the reconstruction loss as a regularizer similar to the technique described in this paper. By simply summing the reconstruction loss with the objective function, the process of learning becomes more symbiotic.\n\n4.) From the visualization of the reconstructed digits from the encodings, it can be seen that the forward classification function just doesn't learn discrete mappings of input - output pairs, but learns a smooth function that encodes different positions, orientations, thickness and curvedness of the input digits along a simple positive real number range.\n\n5.) All these code implementations and visualization videos are there, but I couldn't mention them in the paper due to the anonymity clause.\n\n6.) The intention of using the term 'bidirectional artificial neuron' was to give a simpler perspective at an Autoencoder. It is not different from an Autoencoder, it is merely a simpler explanatory view of it which I put forth through the article.\n\nThank you for the reviews. All the comments are very helpful and strengthen my further work. " ]
[ 2, 3, 6, -1, -1 ]
[ 3, 5, 4, -1, -1 ]
[ "iclr_2018_rkhxwltab", "iclr_2018_rkhxwltab", "iclr_2018_rkhxwltab", "SyRcMDPeG", "S1itI_FxM" ]
iclr_2018_Hyp-JJJRW
Style Memory: Making a Classifier Network Generative
Deep networks have shown great performance in classification tasks. However, the parameters learned by the classifier networks usually discard stylistic information of the input, in favour of information strictly relevant to classification. We introduce a network that has the capacity to do both classification and reconstruction by adding a "style memory" to the output layer of the network. We also show how to train such a neural network as a deep multi-layer autoencoder, jointly minimizing both classification and reconstruction losses. The generative capacity of our network demonstrates that the combination of style-memory neurons with the classifier neurons yield good reconstructions of the inputs when the classification is correct. We further investigate the nature of the style memory, and how it relates to composing digits and letters.
rejected-papers
+ Paper proposes simple joint deep autoencoder + classifier training where the hidden representation is split between (observed) class and (unobserved) style nodes.\n- Empirical evaluation is very limited, focusing on only qualitative evaluation of reconstructions and interpolations (on MNIST and EMNIST).\n- Unclear goal: if it is improving classifier robustness, then quantitative classifier robustness improvements should be experimentally demonstrated. If it is as a (conditional) generative model, then it should be compared to strong generative baselines (in the GVAE or GAN families). The paper currently has neither.
train
[ "rkWU5vQxf", "H109AKKlM", "S1yZxBslG", "ByTDsb6Qf", "SynAU-TmG", "ryjOU-TQM", "Sy1frWTQM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "This paper proposes to train a classifier neural network not just to classifier, but also to reconstruct a representation of its input, in order to factorize the class information from the appearance (or \"style\" as used in this paper). This is done by first using unsupervised pretraining and then fine-tuning using a weighted combination of the regular multinomial NLL loss and a reconstruction loss at the last hidden layer. Experiments on MNIST are provided to analyse what this approach learns.\n\nUnfortunately, I fail to see a significantly valuable contribution from this work. First, the paper could do a better job at motivating the problem being addressed. Why is it important to separate class from style? Should it allow better classification performance? If so, it's never measured in this work. If that's not the motivation, then what is it?\n\nSecond, all experiments were conducted on the MNIST dataset. In 2017, most would expect experiments on at least one other, more complex dataset, to trust any claims on a method.\n\nFinally, the results are not particularly impressive. I don't find the reconstructions demonstrated particularly compelling (they are generally pretty different from the original input). Also, that the \"style\" representation contain less (and I'd say slightly less, in Figure 7 b and d, we still see a lot of same class nearest neighbors) is not exactly a surprising result. And the results of figure 9, showing poor reconstructions when changing the class representation essentially demonstrates that the method isn't able to factorize class and style successfully. The interpolation results of Figure 11 are also underwhelming, though possibly mostly because the reconstructions are in general not great. But most importantly, none of these results are measured in a quantitative way: they are all qualitative, and thus subjective.\n\nFor all these reasons, I'm afraid I must recommend this paper be rejected.", "The paper proposes training an autoencoder such that the middle layer representation consists of the class label of the input and a hidden vector representation called \"style memory\", which would presumably capture non-class information. The idea of learning representations that decompose into class-specific and class-agnostic parts, and more generally \"style\" and \"content\", is an interesting and long-standing problem. The results in the paper are mostly qualitative and only on MNIST. They do not show convincingly that the network managed to learn interesting class-specific and class-agnostic representations. It's not clear whether the examples shown in figures 7 to 11 are representative of the network's general behavior. The tSNE visualization in figure 6 seems to indicate that the style memory representation does not capture class information as well as the raw pixels, but doesn't indicate whether that representation is sensible.\n\nThe use of fully connected networks on images may affect the quality of the learned representations, and it may be necessary to use convolutional networks to get interesting results. It may also be interesting to consider class-specific representations that are more general than just the class label. For example, see \"Learning a Nonlinear Embedding by Preserving Class Neighbourhood Structure\" by Salakhutdinov and Hinton, 2007, which learns hidden vector representations for both class-specific and class-agnostic parts. 
(This paper should be cited.)", "The paper proposes combining classification-specific neural networks with auto-encoders. This is done in a straightforward manner by designating a few nodes in the output layer for classification and few for reconstruction. The training objective is then changed to minimize the sum of the classification loss (as measured by cross-entropy for instance) and the reconstruction error (as measured by ell-2 error as is done in training auto-encoders). \n\nThe authors minimize the loss function by greedy layer-wise training as is done in several prior works. The authors then perform other experiments on the learned representations in the output layer (those corresponding to classification + those corresponding to reconstruction). For example, the authors plot the nearest-neighbors for classification-features and for reconstruction-features and observe that the two are very different. The authors also observe that interpolating between two reconstruction-feature vectors (by convex combinations) seems to interpolate well between the two corresponding images.\n\nWhile the experimental results are interesting they are not striking especially when viewed in the context of the tremendous amount of work on auto-encoders. Training the classification-features along with reconstruction-features does not seem to give any significantly new insights. ", "Thank you for your feedback. We apologize for not clearly stating the motivation behind this work. Our main motivation was to design a classifier network that also has the capacity to be generative. We believe that a generative network would be less susceptible to being fooled by adversarial inputs since it would not be able to reconstruct nonsensical input. But before showing that the network is less vulnerable to adversarial examples, we want to investigate the properties of such a network. We have added text to our paper to clarify that this is our ultimate goal.\n\nHowever, in creating a classifier/generative network, we wanted to investigate the relationship between the classification part of the encoding, and the “style memory” part of the encoding. Much of this paper is devoted to understanding this relationship.\n\nWe have added experiments that we conducted on the Extended MNIST letter dataset which contains 145,600 samples, and where uppercase and lowercase letters are included in the same class (i.e. ‘A’ and ‘a’ are in the same class). This makes the dataset more challenging than MNIST. We also expanded our discussion of figure 7 (and figure 8, which was added). The discussion argues, in quantitative terms, that style memory contains a representation that augments, but is substantially different from, the character class. The figures also illustrate that the representation in style memory is very different from the original, image-space representation.\n\nWe have modified our paper to address your comments, and feel that the paper is much improved from its original form.", "Thank you for your feedback. We apologize for not clearly stating the motivation behind this work. Our main motivation was to design a classifier network that also has the capacity to be generative. We believe that a generative network would be less susceptible to being fooled by adversarial inputs since it would not be able to reconstruct nonsensical input. But before showing that the network is less vulnerable to adversarial examples, we want to investigate the properties of such a network. 
We have added text to our paper to clarify that this is our ultimate goal.\n\nAlthough we have done preliminary experiments to show that the network is less vulnerable to adversarial examples, these results are not reported in this paper; we feel that further investigations and experimentations are warranted.\n\nThank you for also pointing out the paper “Learning a Nonlinear Embedding by Preserving Class Neighbourhood Structure” by Salakhutdinov and Hinton (2007). We agree that the work is relevant, and have added a discussion of the paper to our “Related Work” section.\n\nWe also changed our network to include convolutional layers, as you suggested. Lastly, we removed the tSNE visualization because we felt it did not service the main message of the paper.\n\nWe have modified our paper to address your comments, and feel that the paper is much improved. We hope that you agree.", "Thank you for your feedback. We apologize for not clearly stating the motivation behind this work. Our goal was to design a classifier network that also has the capacity to be generative. We believe that a generative network would be less susceptible to being fooled by adversarial inputs since it would not be able to reconstruct nonsensical input. But before showing that the network is less vulnerable to adversarial examples, we want to investigate the properties of such a network. However, results on adversarial inputs are not reported in this paper as we feel that further investigations and experiments are still needed.\n\nWe have modified our paper to address your feedback. We feel our paper is much better after implementing those changes.", "We appreciate the constructive comments that the reviewers made on our paper, and have revised the manuscript accordingly. In particular, we have clarified the purpose of the research. This work is a necessary stepping-stone to our goal of investigating the possibility that generative networks are less susceptible to being fooled by ambiguous or adversarial inputs. The work outlined in this paper lays the foundation for how to create networks that simultaneously perform both classification and reconstruction. We have also included a more difficult dataset, EMNIST. We also altered the design of our network so that it now has two convolutional layers, and the resulting classification performance is much improved." ]
[ 3, 3, 4, -1, -1, -1, -1 ]
[ 5, 5, 3, -1, -1, -1, -1 ]
[ "iclr_2018_Hyp-JJJRW", "iclr_2018_Hyp-JJJRW", "iclr_2018_Hyp-JJJRW", "rkWU5vQxf", "H109AKKlM", "S1yZxBslG", "iclr_2018_Hyp-JJJRW" ]
iclr_2018_By0ANxbRW
DNN Model Compression Under Accuracy Constraints
The growing interest in implementing Deep Neural Networks (DNNs) on resource-bound hardware has motivated the innovation of compression algorithms. Using these algorithms, DNN model sizes can be substantially reduced, with little to no accuracy degradation. This is achieved by either eliminating components from the model, or penalizing complexity during training. While both approaches demonstrate considerable compressions, the former often ignores the loss function during compression while the latter produces unpredictable compressions. In this paper, we propose a technique that directly minimizes both the model complexity and the changes in the loss function. In this technique, we formulate compression as a constrained optimization problem, and then present a solution for it. We will show that using this technique, we can achieve competitive results.
rejected-papers
Proposed network compression method offers limited technical novelty over existing approaches, and empirical evaluations do not clearly demonstrate an advantage over current state-of-the-art. Paper presentation quality also needs to be improved.
train
[ "rk6muOPxG", "Hk6K4Rwlf", "HJlSjw_ez" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "1. Summary\n\nThis paper introduced a method to learn a compressed version of a neural network such that the loss of the compressed network doesn't dramatically change.\n\n\n2. High level paper\n\n- I believe the writing is a bit sloppy. For instance equation 3 takes the minimum over all m in C but C is defined to be a set of c_1, ..., c_k, and other examples (see section 4 below). This is unfortunate because I believe this method, which takes as input a large complex network and compresses it so the loss in accuracy is small, would be really appealing to companies who are resource constrained but want to use neural network models.\n\n\n3. High level technical\n\n- I'm confused at the first and second lines of equation (19). In the first line, shouldn't the first term not contain \\Delta W ? In the second line, shouldn't the first term be \\tilde{\\mathcal{L}}(W_0 + \\Delta W) ?\n- For CIFAR-10 and SVHN you're using Binarized Neural Networks and the two nice things about this method are (a) that the memory usage of the network is very small, and (b) network operations can be specialized to be fast on binary data. My worry is if you're compressing these networks with your method are the weights not treated as binary anymore? Now I know in Binarized Neural Networks they keep a copy of real-valued weights so if you're just compressing these then maybe all is alright. But if you're compressing the weights _after_ binarization then this would be very inefficient because the weights won't likely be binary anymore and (a) and (b) above no longer apply.\n- Your compression ratio is much higher for MNIST but your accuracy loss is somewhat dramatic, especially for MNIST (an increase of 0.53 in error nearly doubles your error and makes the network worse than many other competing methods: http://rodrigob.github.io/are_we_there_yet/build/classification_datasets_results.html#4d4e495354). What is your compression ratio for 0 accuracy loss? I think this is a key experiment that should be run as this result would be much easier to compare with the other methods.\n- Previous compression work uses a lot of tricks to compress convolutional weights. Does your method work for convolutional layers?\n- The first paper to propose weight sharing was not Han et al., 2015, it was actually:\nChen W., Wilson, J. T., Tyree, S., Weinberger K. Q., Chen, Y. \"Compressing Neural Networks with the Hashing Trick\" ICML 2015\nAlthough they did not learn the weight sharing function, but use random hash functions.\n\n\n4. Low level technical\n\n- The end of Section 2 has an extra 'p' character\n- Section 3.1: \"Here, X and y define a set of samples and ideal output distributions we use for training\" this sentence is a bit confusing. Here y isn't a distribution, but also samples drawn from some distribution. Actually I don't think it makes sense to talk about distributions at all in Section 3.\n- Section 3.1: \"W is the learnt model...\\hat{W} is the final, trained model\" This is unclear: W and \\hat{W} seem to describe the same thing. I would just remove \"is the learnt model and\"\n\n\n5. Review summary\n\nWhile the trust-region-like optimization of the method is nice and I believe this method could be useful for practitioners, I found the paper somewhat confusing to read. This combined with some key experimental questions I have make me think this paper still needs work before being accepted to ICLR.", "The paper addresses an interesting problem of DNN model compression. 
The main idea is to combine the approaches in (Han et al., 2015) and (Ullrich et al., 2017) to get a loss value constrained k-means encoding method for network compression. An iterative algorithm is developed for model optimization. Experimental results on MNIST, CIFAR-10 and SVHN are reported to show the compression performance. \n\nThe reviewer would expect papers submitted for review to be of publishable quality. However, this manuscript is not polished enough for publication: it has too many language errors and imprecisions which make the paper hard to follow. In particular, there is no clear definition of problem formulation, and the algorithms are poorly presented and elaborated in the context. \n\nPros: \n\n- The network compression problem is of general interest to ICLR audience. \n\nCons:\n\n- The proposed approach follows largely the existing work and thus its technical novelty is weak. \n\n- Paper presentation quality is clearly below the standard. \n\n- Empirical results do not clearly show the advantage of the proposed method over state-of-the-arts. \n\n\n\n", "1. This paper proposes a deep neural network compression method by maintaining the accuracy of deep models using a hyper-parameter. However, all compression methods such as pruning and quantization also have this concern. For example, the basic assumption of pruning is to discard subtle parameters has little impact on feature maps thus the accuracy of the original network can be preserved. Therefore, the novelty of the proposed method is somewhat weak.\n\n2. There are a lot of new algorithms on compressing deep neural networks such as [r1][r2][r3]. However, the paper only did a very simple investigation on related works.\n[r1] CNNpack: packing convolutional neural networks in the frequency domain.\n[r2] LCNN: Lookup-based Convolutional Neural Network.\n[r3] Xnor-net: Imagenet classification using binary convolutional neural networks.\n\n3. Experiments in the paper were only conducted on several small datasets such as MNIST and CIFAR-10. It is necessary to employ the proposed method on benchmark datasets to verify its effectiveness, e.g., ImageNet.\n" ]
[ 4, 3, 3 ]
[ 3, 3, 5 ]
[ "iclr_2018_By0ANxbRW", "iclr_2018_By0ANxbRW", "iclr_2018_By0ANxbRW" ]
iclr_2018_SkiCjzNTZ
Spontaneous Symmetry Breaking in Deep Neural Networks
We propose a framework to understand the unprecedented performance and robustness of deep neural networks using field theory. Correlations between the weights within the same layer can be described by symmetries in that layer, and networks generalize better if such symmetries are broken to reduce the redundancies of the weights. Using a two parameter field theory, we find that the network can break such symmetries itself towards the end of training in a process commonly known in physics as spontaneous symmetry breaking. This corresponds to a network generalizing itself without any user input layers to break the symmetry, but by communication with adjacent layers. In the layer decoupling limit applicable to residual networks (He et al., 2015), we show that the remnant symmetries that survive the non-linear layers are spontaneously broken based on empirical results. The Lagrangian for the non-linear and weight layers together has striking similarities with the one in quantum field theory of a scalar. Using results from quantum field theory we show that our framework is able to explain many experimentally observed phenomena, such as training on random labels with zero error (Zhang et al., 2017), the information bottleneck and the phase transition out of it (Shwartz-Ziv & Tishby, 2017), shattered gradients (Balduzzi et al., 2017), and many more.
rejected-papers
The paper makes overly strong claims, too weakly supported by a hard to follow and insufficiently rigorous mathematical argument. Connections with a large body of relevant prior literature are missing.
train
[ "BknXbsdxG", "By02Louez", "Byoj85JWf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "In this paper, an number of very strong (even extraordinary) claims are made:\n\n* The abstract promises \"a framework to understand the unprecedented performance and robustness of deep neural networks using field theory.\"\n* Page 8 states that this is \"This is a first attempt to describe a neural network with a scalar quantum field theory.\"\n* Page 2 promises the use of the \"Goldstone theorem\" (no less) to understand phase transition in deep learning\n* It also claim that many \"seemingly different experimental results can be explained by the presence of these zero eigenvalue weights.\"\n* Three important results are stated as \"theorem\", with a statement like \"Deep feedforward networks learn by breaking symmetries\" proven in 5 lines, with no formal mathematics.\n\nThese are extraordinary claims, but when reaching page 5, one sees that the basis of these claims seems to be the Lagrangian of a simple phi-4 theory, and Fig. 1 shows the standard behaviour of the so-called mexican hat in physics, the basis of the second-order transition. Given physicists have been working on neural network for more than three or four decades, I am surprise that this would enough to solve all these problems!\n\nI tried to understand these many results, but I am afraid I cannot really understand or see them. In many case, the explanation seems to be a vague analogy. These are not without interest, and maybe there is indeed something deep in this paper, but it is so far hidden by the hype. Still, I fail to see how the fact that phase transitions and negative direction in the landscape is a new phenomena, and how it explains all the stated phenomenology. Beside, there are quite a lot of things known about the landscape of these problems \n\nMaybe I am indeed missing something, but i clearly suspect the authors are simply overselling physics results.\n\nI have been wrong many times, but I beleive that the authors should probably precise their claim, and clarify the relation between their results and both the physics AND statistics litterature, or better, with the theoretical physics litterature applied to learning, which is ---astonishing-- absent in the paper.\n\nAbout the content:\n\nThe main problem for me is that the whole construction using field theory seems to be used to advocate for the appearence of a phase transition in neural nets and in learning. This rises three comments:\n\n(1) So we really need to use quantum field theory for this? I do not see what should be quantum here (despite the very vague remarks page 12 \"WHY QUANTUM FIELD THEORY?\")\n\n\n(2) This is not new. Phase transitions in learning in neural nets are being discussed since aboutn 40 years, see for instance all the pionnering work of Sompolinky et al. one can see for instance the nice review in https://arxiv.org/abs/1710.09553 In non aprticular order, phase transition and symmetry breaking are discussed in\n* \"Statistical mechanics of learning from examples\", Phys. Rev. A 45, 6056 – Published 1 April 1992\n* \"The statistical mechanics of learning a rule\", Rev. Mod. Phys. 
65, 499 – Published 1 April 1993\n* Phase transitions in the generalization behaviour of multilayer neural networks\nhttp://iopscience.iop.org/article/10.1088/0305-4470/28/16/010/meta\n* Note that some of these results are now rigorous, as shown in \"Phase Transitions, Optimal Errors and Optimality of Message-Passing in Generalized Linear Models\", https://arxiv.org/abs/1708.03395\n* The landscape of these problems has been studied quite extensively, see for instance \"Identifying and attacking the saddle point problem in high-dimensional non-convex optimization\", https://arxiv.org/abs/1406.2572\n\n\n(3) There is nothing particular to deep nets or neural nets about this. Negative directions in the Hessian in learning problems appear in matrix and tensor factorization, where phase transitions are well understood (even rigorously, see for instance https://arxiv.org/abs/1711.05424 ) or in problems such as unsupervised learning, e.g.:\nhttps://journals.aps.org/prl/abstract/10.1103/PhysRevLett.86.2174\nhttps://journals.aps.org/pre/pdf/10.1103/PhysRevE.50.1766\n\nHere are additional comments:\n\nPAGE 1:\n\n* \"It has been discovered that the training process ceases when it goes through an information bottleneck (Shwartz-Ziv & Tishby, 2017)\".\n\nWhile this paper indeed makes a nice suggestion, I would not call it a discovery yet, as this has never been shown on a large network. Besides, another paper in the conference is claiming exactly the opposite; see \"On the Information Bottleneck Theory of Deep Learning\". This is still a subject of discussion.\n\n* \"In statistical terms, a quantum theory describes errors from the mean of random variables. \"\n\nLast time I studied quantum theory, it was a theory that aims to explain physical behaviour at the molecular, atomic and sub-atomic levels, using either the wave function (Schrodinger) or the matrix operator formalism (Heisenberg) (or, if you want, the path integral formalism of Feynman).\n\nIt is certainly NOT a theory that describes errors from the mean of random variables. This is, I believe, the field of \"statistics\" or \"probability\" for correlated variables. It is certainly used in physics, heavily so both in statistical physics and in quantum theory, but this is not what the theory is about in the first place.\n\nBesides, there is little quantum in this paper; I think most of what the authors say applies to a statistical field theory ( https://en.wikipedia.org/wiki/Statistical_field_theory )\n\n* \"In the limit of a continuous sample space, the quantum theory becomes a quantum field theory.\"\n\nAgain, what is quantum about all this? This is true for a field theory, as well as for continuous theories of, say, mechanics, fracture, etc.\n\nPAGE 2:\n\n* \"Using a scalar field theory we show that a phase transition must exist towards the end of training based on empirical results.\"\n\nSo it is a scalar classical field theory after all. This sounds a little bit less impressive than a quantum field theory. Note that the fact that phase transitions arise in learning, and in a statistical theory applied to any learning process, is an old topic, with a classical literature. The authors might be interested in the review \"The statistical mechanics of learning a rule\", Rev. Mod. Phys. 
65, 499 – Published 1 April 1993\n\nPAGE 8:\n\n\n* \"In this work we solved one of the most puzzling mysteries of deep learning by showing that deep neural networks undergo spontaneous symmetry breaking.\"\n\nI am afraid I fail to see what is so mysterious about this, or what the authors showed about it. In any case, gradient descent breaks symmetry spontaneously in many systems, including phi-4, the Ising model or (in learning problems) the community detection problem (see e.g. https://journals.aps.org/prx/abstract/10.1103/PhysRevX.4.011047). I am afraid I am missing what is new here...\n\n* \"This is a first attempt to describe a neural network with a scalar quantum field theory.\"\n\nGiven that there seems to be little quantum in the paper, I fail to see the relevance of the statement. Secondly, I believe that field theory has been used, many times and at greater length, for both statistical and dynamical problems in neural nets, see e.g.\n* http://iopscience.iop.org/article/10.1088/0305-4470/27/6/016/meta\n* https://arxiv.org/pdf/q-bio/0701042.pdf\n* http://www.lps.ens.fr/~derrida/PAPIERS/1987/gardner-zippelius-87.pdf\n* http://iopscience.iop.org/article/10.1088/0305-4470/21/1/030/meta\n* https://arxiv.org/pdf/cond-mat/9805073.pdf\n", "The paper makes a mathematical analogy between deep neural networks and quantum field theory, and claims that this explains a large number of empirically observed phenomena.\n\nI have a solid grasp of the relevant mathematics, and a superficial understanding of QFT, but I could not really make sense of this paper. The paper uses mathematics in a very loose manner. This is not always bad (an overly formal treatment can make a paper hard to read), but in this case it is not clear to me that the results are even \"correct modulo technicalities\" or have much to do with the reality of what goes on in deep nets.\n\nThe first thing I'm confused about is the nature and significance of the symmetries considered in this paper. At a very high level, there are two kinds of symmetries one could consider in DL: transformations of the input space that leave invariant the desired output, and transformations of the weight space that leave invariant the input/output mapping. These are not necessarily related. For instance, a translation or rotation of an image is an example of the former, whereas an arbitrary permutation of hidden units (and corresponding rows/columns of weight matrices) is an example of the latter. This paper is apparently dealing with groups that act on the input as well as the weight space, seemingly conflating the two.\n\nSection 2.2 defines the action of symmetries on the input and weight space. For each layer t, we have a matrix Q_t in G, where G is an unspecified Lie group. Since all Q_t are elements of the same group, they have the same dimension, so all layers must have the same dimension as well. This is somewhat unrealistic. Furthermore, from the definitions in 2.2 it seems that in order to get covariance, the Q_t would have to be the same for all t, which is probably not what the authors had in mind.\n\nFor symmetries like rotation/translation of images, a better setup would probably involve a single group with different group actions or linear group representations for each layer. In that case, covariance of the weight layers is not automatic, but only holds for certain subspaces of weight space.
For permutation or scale symmetries in weight space, a more sensible setup would be to say that each layer has a different group of symmetries, and the symmetry group of the whole network is the direct product of these groups.\n\nIt is stated that transformations in the affine group may not commute with nonlinearities, but rotations of feature maps do. This is correct (at least up to discretization errors), but the paper continues to talk about affine and orthogonal group symmetries. Later on an attempt is made to deal with this issue, by splitting the feature vectors into a part that is put to zero by a ReLU, and a part that is not, and the group is split accordingly. However, this does not make any sense because the pattern of zeros/non-zeros is different for each input, so one cannot speak of a \"remnant symmetry\" for a layer in general.\n\nThe connection between DL and QFT described in 2.3 is based on some kind of \"continuous limit\" of units and layers, i.e. having an uncountably infinite number of them. Even setting aside the enormous amount of technical difficulty involved in doing this math properly, I'm a bit skeptical that this has anything to do with real networks.\n\nAs an example of how loose the math is, \"theorem 1\" is only stated in natural language: \"Deep feedforward networks learn by breaking symmetries\". The proof involves assuming that the network is a sequence of affine transformations (no nonlinearities). Then it says that if we include a nonlinearity, it breaks the symmetry. Thus, since neural nets use nonlinearities, they break symmetries, and therefore learning works by breaking symmetries and the layers can learn a \"more generalized representation\" than an affine network could. The theorem is so vaguely stated that I don't know what it means, and the proof is inscrutable to me.\n\nTheorem 2 states \"Let x^T x be an invariant under Aff(D)\". Clearly x^T x is not invariant under Aff(D).\n\nThe paper claims to explain many empirical facts, but it is not exactly clear which are the conspicuous and fundamental facts that need explaining. For instance, the IB phase transition claimed to happen in deep learning was recently called into question [1]. It appears that this phenomenon does not occur in ReLU nets but only in sigmoid nets, but the current paper purports to explain the phenomenon while assuming ReLUs. I would further note that the paper claims to explain a suspiciously large number of previously observed phenomena (Appendix A), but as far as I can tell does not make novel testable predictions.\n\nThe paper makes several strong claims, like \"we [...] illustrate that spontaneous symmetry breaking of affine symmetries is the sufficient and necessary condition for a deep network to attain its unprecedented power\", \"This phenomenon has profound implications\", \"we have solved one of the most puzzling mysteries of deep learning\", etc. In my opinion, unless it is completely obvious that this is indeed a breakthrough, one should refrain from making such statements.\n\n[1] On the information bottleneck theory of deep learning. Anonymous ICLR2018 submission.", "The paper promises quite a few intriguing connections between information bottleneck, phase transitions and deep learning. While we think that this is a worthwhile bridge to build between machine learning and statistical field theory, the exposition of the paper leaves much to be desired. 
Had it been a clean, straightforward application of QFT, then, as well-trained theoretical physicists, we would have been able to evaluate the paper.\nGenerally, it would help the reader to have an overall map and an indication of the steps that would be taken formally. \nSpecifically, starting from Section 2.3, especially around the transition to continuous layers, very little information is provided on how the cost function is handled and how the results are derived. Section 2.3 would benefit from an expanded discussion with examples and detailed explanations.\nMinor:\nThe following sentence in the third paragraph of the Introduction is incomplete:\nBecause the ResNet does not contain such symmetry breaking layers in the architecture." ]
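For readers outside physics, the "phi-4" Lagrangian and "Mexican hat" potential that these reviews keep returning to is the textbook model of spontaneous symmetry breaking; this is standard background rather than material from the submission:

```latex
V(\varphi) \;=\; \tfrac{1}{2}\,\mu^{2}\,\varphi^{2} \;+\; \tfrac{1}{4}\,\lambda\,\varphi^{4}, \qquad \lambda > 0 .
```

For \mu^2 > 0 the only minimum is \varphi = 0; for \mu^2 < 0 the minima move to \varphi = \pm\sqrt{-\mu^2/\lambda}, and selecting one of them breaks the \varphi \to -\varphi symmetry, which is the well-known phenomenon the reviewers contrast with the paper's claims of novelty.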
[ 3, 3, 3 ]
[ 4, 3, 3 ]
[ "iclr_2018_SkiCjzNTZ", "iclr_2018_SkiCjzNTZ", "iclr_2018_SkiCjzNTZ" ]
iclr_2018_SJa1Nk10b
Anytime Neural Network: a Versatile Trade-off Between Computation and Accuracy
We present an approach for anytime predictions in deep neural networks (DNNs). For each test sample, an anytime predictor produces a coarse result quickly, and then continues to refine it until the test-time computational budget is depleted. Such predictors can address the growing computational problem of DNNs by automatically adjusting to varying test-time budgets. In this work, we study a \emph{general} augmentation to feed-forward networks to form anytime neural networks (ANNs) via auxiliary predictions and losses. Specifically, we point out a blind-spot in recent studies in such ANNs: the importance of high final accuracy. In fact, we show on multiple recognition data-sets and architectures that by having near-optimal final predictions in small anytime models, we can effectively double the speed of large ones to reach corresponding accuracy level. We achieve such speed-up with simple weighting of anytime losses that oscillate during training. We also assemble a sequence of exponentially deepening ANNs, to achieve both theoretically and practically near-optimal anytime results at any budget, at the cost of a constant fraction of additional consumed budget.
rejected-papers
The paper received mixed reviews with scores of 5 (R1), 5 (R2), 7 (R3). All three reviewers raise concerns about the lack of comparisons to other methods. The rebuttal is not compelling on this point. There are quite a few methods available for this application (often with source code) that should be compared to, e.g. DenseNets (Huang et al.). Given that the proposed method isn't in and of itself hugely novel, a thorough experimental evaluation is crucial to justifying the approach. The AC has closely looked at the rebuttal and the paper and feels that it cannot be accepted for this reason at this time.
train
[ "HJMEt_FeM", "rJeC7UixM", "ry-xL5Dfz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes an anytime neural network, which can predict anytime while training. To achieve that, the model includes auxiliary predictions which can make early predictions. Specifically, the paper presents a loss weighting scheme that considers high correlation among nearby predictions, an oscillating loss weighting scheme for further improvement, and an ensemble of anytime neural networks. In the experiments, test error of the proposed model was shown to be comparable to the optimal one at each time budget. \n\nIt is an interesting idea to add auxiliary predictions to enable early predictions and the experimental results look promising as they are close to optimal at each time budget. \n\n1. In Section 3.2, there are some discussions on the parallel computations of EANN. The parallel training is not clear to me and it would be great to have more explanation on this with examples. \n\n2. It seems that EANN is not scalable because the depth is increasing exponentially. For example, given 10 machines, the model with the largest depth would have 2^10 layers, which is difficult to train. It would be great to discuss this issue.\n\n3. In the experiments, it would be great to add a few alternatives to be compared for anytime predictions. \n\n\n\n\n\n\n\n", "1. Paper Summary\n\nThis paper adds a separate network at every layer of a residual network that performs classification. They minimize the loss of every classifier using two proposed weighting schemes. They also ensemble this model.\n\n\n2. High level paper\n\nThe organization of this paper is a bit confusing. Two weighing schemes are introduced in Section 3.1, then the ensemble model is described in Section 3.2, then the weighing schemes are justified in Section 4.1.\nOverall this method is essentially an cascade where each cascade classifier is a residual block. Every input is passed through as many stages as possible until the budget is reached. While this model is likely quite useful in industrial settings, I don't think the model itself is wholly original.\nThe authors have done extensive experiments evaluating their method in different settings. I would have liked to see a comparison with at least one other anytime method. I think it is slightly unfair to say that you are comparing with Xie & Tu, 2015 and Huang et al., 2017 just because they use the CONSTANT weighing schemes.\n\n\n3. High level technical\n\nI have a few concerns:\n- Why does AANN+LINEAR nearly match the accuracy of EANN+SIEVE near 3e9 FLOPS in Figure 4b but EANN+LINEAR does not in Figure 4a? Shouldn't EANN+LINEAR be strictly better than AANN+LINEAR?\n- Why do the authors choose these specific weighing schemes? Section 4.1 is devoted to explaining this but it is still unclear to me. They talk about there being correlation between the predictors near the end of the model so they don't want to distribute weight near the final predictors but this general observation doesn't obviously lead to these weighing schemes, they still seem a bit adhoc.\n\nA few other comments:\n- Figure 3b seems to contain strictly less information than Figure 4a, I would remove Figure 3b and draw lines showing the speedup you get for one or two accuracy levels.\n\nQuestions:\n- Section 3.1: \"Such an ideal θ* does not exist in general and often does not exist in practice.\" Why is this the case? 
\n- Section 3.1: \"In particular, spreading weights evenly as in (Lee et al., 2015) keeps all i away from their possible respective minimum\" Why is this true?\n- Section 3.1: \"Since we will evaluate near depth b3L/4e, and it is the center of L/2 low-weight layers, we increase it weight by 1/8.\" I am completely lost here; why do you do this?\n\n\n4. Review summary\n\nUltimately, because the model itself resembles previous cascade models, the selected weightings have little justification, and there isn't a comparison with another anytime method, I think this paper isn't yet ready for acceptance at ICLR.", "This paper aims to endow neural networks with the ability to produce anytime predictions. The authors propose several heuristics to reweight and oscillate the loss to improve anytime performance. In addition, they propose to use a sequence of exponentially deepening anytime neural networks to reduce the performance gap for early classifiers. The proposed approaches are validated on two image classification datasets.\nPros:\n- The paper is well written and easy to follow. \n- It addresses an interesting problem with reasonable approaches. \nCons:\n- The loss reweighting and oscillating schemes appear to be just heuristics. It is not clear what the scientific contributions are. \n- I do not fully agree with the explanation given for the “alternating weights”. If the joint loss leads to zero gradient for some weights, then why would you consider it problematic?\n- There are few baselines compared in the result section. In addition, the proposed method underperforms the MSDNet (Huang et al., 2017) on ILSVRC2012.\n- The EANN is similar to the method used by Adaptive Networks (Bolukbasi et al., 2017), and the baseline “Ensemble of ResNets (varying depth)” in the MSDNet paper. \n- Could you show error bars in Figure 2(a)? Usually an error difference of less than 0.5% on CIFAR-100 is not considered significant. \n- I’m not convinced that AANN really works significantly better than ANN according to the results in Table 1(a). It seems that ANN still outperforms AANN in many cases.\n- I would suggest showing the results in Table 1(b) with a figure." ]
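The weighted anytime objective that these reviews discuss (per-depth auxiliary losses combined with CONSTANT, LINEAR, or oscillating weights) can be sketched as follows; this is a generic PyTorch-style illustration with assumed function names, not the submission's exact weighting schemes:

```python
import torch.nn.functional as F

def anytime_loss(logits_per_head, target, weights):
    """Weighted sum of auxiliary classification losses, one per anytime prediction head."""
    return sum(w * F.cross_entropy(z, target) for w, z in zip(weights, logits_per_head))

def oscillating_weights(num_heads, step, base=0.1):
    """Toy oscillating schedule (illustrative only): the final head always keeps full
    weight so final accuracy is not sacrificed, while extra weight cycles over the
    earlier heads from one training iteration to the next; assumes num_heads >= 2."""
    w = [base] * num_heads
    w[-1] = 1.0
    w[step % (num_heads - 1)] += 1.0
    return w
```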
[ 7, 5, 5 ]
[ 2, 3, 4 ]
[ "iclr_2018_SJa1Nk10b", "iclr_2018_SJa1Nk10b", "iclr_2018_SJa1Nk10b" ]
iclr_2018_B1spAqUp-
Pixel Deconvolutional Networks
Deconvolutional layers have been widely used in a variety of deep models for up-sampling, including encoder-decoder networks for semantic segmentation and deep generative models for unsupervised learning. One of the key limitations of deconvolutional operations is that they result in the so-called checkerboard problem. This is caused by the fact that no direct relationship exists among adjacent pixels on the output feature map. To address this problem, we propose the pixel deconvolutional layer (PixelDCL) to establish direct relationships among adjacent pixels on the up-sampled feature map. Our method is based on a fresh interpretation of the regular deconvolution operation. The resulting PixelDCL can be used to replace any deconvolutional layer in a plug-and-play manner without compromising the fully trainable capabilities of original models. The proposed PixelDCL may result in slight decrease in efficiency, but this can be overcome by an implementation trick. Experimental results on semantic segmentation demonstrate that PixelDCL can consider spatial features such as edges and shapes and yields more accurate segmentation outputs than deconvolutional layers. When used in image generation tasks, our PixelDCL can largely overcome the checkerboard problem suffered by regular deconvolution operations.
rejected-papers
The paper received borderline-negative reviews with scores of 5,5,6. A consistent issue was the weakness of the experiments: (i) lack of comparison to appropriate baselines, (ii) differences between published/reported numbers for DeepLab-ResNet (R3), and (iii) missing comparisons to related work, e.g. the Wojna paper, as raised by R1. The AC did not find the authors' responses to these issues convincing. For (ii), the gap between 73 and 79 is large and the authors' explanation for the difference doesn't seem plausible. For (iii), the response promised comparisons/discussion but these were not added to the draft. Given this, the paper cannot be accepted in its current form. The experiments should be improved before the paper is resubmitted.
train
[ "B1YorpYxz", "B1L5VaYgG", "BkZQtx5lz", "BJ3XoYe-M", "SJnHnYg-z", "BkqscKe-z" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "This paper is well written and easy to follow. The authors propose pixel deconvolutional layers for convolutional neural networks. The motivation of the proposed method, PixelDCL, is to remove the checkerboard effect of deconvolutoinal layers. \nThe method consists of adding direct dependencies among the intermediate feature maps generated by the deconv layer. PixelDCL is applied sequentially, therefore it is slower than the original deconvolutional layer. The authors evaluate the model in two different problems: semantic segmentation (on PASCAL VOC and MSCOCO datasets) and in image generation VAE (with the CelebA dataset). \n\nThe authors justify the proposed method as a way to alleviate the checkerboard effect (while introducing more complexity to the model and making it slower). In the experimental section, however, they do not compare with other approaches to do so For example, the upsampling+conv approach, which has been shown to remove the checkerboard effect while being more efficient than the proposed method (as it does not require any sequential computation). Moreover, the PixelDCL does not seem to bring substantial improvements on DeepLab (a state-of-the-art semantic segmentation algorithm). More comments and further exploration on this results should be done. Why no performance boost? Is it because of the residual connection? Or other component of DeepLab? Is the proposed layer really useful once a powerful model is used?\n\nI also think the experiments on VAE are not conclusive. The authors simply show set of generated images. First, it is difficult to see the different of the image generated using deconv and PixelDCL. Second, a set of 20 qualitative images does not (and cannot) validate any research idea.", "Paper summary:\nThis paper proposes a technique to generalize deconvolution operations used in standard CNN architectures. Traditional deconvolution operation uses independent filter weights to compute output features at adjacent pixels. This work proposes to do sequential prediction of adjacent pixel features (via intermediate feature maps) resulting in more spatially smooth outputs for deconvolution layer. This new layer is referred to as ‘pixel deconvolution layer’ and it is demonstrated on two tasks of semantic segmentation and face generation.\n\n\nPaper Strengths:\n- Despite being simple technique, the proposed pixel deconvolution layer is novel and interesting.\n- Experimental results on two different tasks demonstrating the general use of the proposed deconvolution layer.\n\n\nMajor Weaknesses:\n- The main weakness of this paper lies in its weak experiments. Although authors say that several possibilities exist for the dependencies between intermediate feature maps, there are no systematic ablation studies on what type of connectivities work best for the proposed layer. Authors experimented with two randomly chosen connectivities which is not enough to understand what type of connectivities work best. This is important as this forms the main contribution of the paper.\n- Also, several quantitative results seem incomplete. Why is the DeepLab-ResNet performance so low? A quick look at PascalVOC results indicate that DeepLab-ResNet has IoU of over 79 on this dataset, but the reported numbers in this paper are only around 73 IoU. There is no mention of IoU for base DeepLab-ResNet model and the standard DeepLab+CRF technique. 
And, there are no quantitative results on image generation.\n\n\nMinor Weaknesses:\n- Although the paper is easy to understand, several parts of the paper are poorly written. Several sentences are repeated multiple times across the paper. Some statements need corrections/refinements, such as “mean IoU is a more accuracy evaluation measure”. And, it is better to tone down some statements, such as changing “solving” to “tackling”.\n- The illustration of checkerboard artifacts from the standard deconvolution technique is not clear. For example, the results presented in Figure-4 indicate segmentation mistakes of the network rather than checkerboard artifacts.\n\n\nClarifications:\n- Why do the authors choose to ‘resize’ the images for training semantic segmentation networks, instead of the generally used ‘cropping’ to create batches?\n- I cannot see the ‘red’ in Figure-5. I see the latter feature map more as a ‘pinkish’ color. It is probably due to my color vision. In any case, it is better to use a different color scheme to distinguish them.\n\n\nSuggestions:\n- I strongly advise the authors to do some ablation studies on connectivities to make this a good paper. Also, it would be great if the authors could revise the writing thoroughly to make this a more enjoyable read.\n\n\nReview Summary:\nThe proposed technique, despite being simple, is novel and interesting. But the weak and incomplete experiments make this not yet ready for publication.", "This paper proposes a new approach for feature upsampling called pixel deconvolution, which aims to resolve the checkerboard artifact of conventional deconvolution. By sequentially applying a series of decomposed convolutions, the proposed method explicitly forces the model to consider the relation between pixels and thus effectively improves the deconvolution network, at a somewhat increased computational cost.\n\nOverall, the paper is clearly written, and the main motivation and methods are easy to understand. However, the checkerboard artifact is a well-known problem of deconvolution networks, and has been addressed by several approaches that are simpler than the proposed pixel deconvolution. For example, it is well known that simple bilinear interpolation, optionally followed by convolutions, effectively removes the checkerboard artifact to some extent, and the bilinear additive upsampling proposed in Wojna et al., 2017 also demonstrated its effectiveness as an alternative to deconvolution. Comparisons against these approaches would make the paper stronger. Besides, comparisons/discussions based on the extensive analysis of various deconvolution architectures presented in Wojna et al., 2017 would also be interesting.\n\nWojna et al., The Devil is in the Decoder, In BMVC, 2017\n", "Thank you for your comments! Since our main objective in this paper is to solve the checkerboard problem suffered by deconvolutional layers, the experiments are mainly designed to show the performance improvement compared to the traditional deconvolutional layer. In both segmentation experiments, we use a convolutional layer after the deconvolutional layer as the baseline setting. For the training-from-scratch experiments, we use one deconvolutional layer followed by two convolutional layers, which is the default setting in the U-Net architecture. We replace the deconvolutional layer with our PixelDCL with the same number of parameters. For the fine-tuning experiments, each block is composed of one deconvolutional layer followed by one convolutional layer.
From the results, we can see that the convolutional layer is not powerful enough to remove the checkerboard effect. At the same time, we want to solve this problem by improving the deconvolution operation itself, without adding more layers. This has the added benefit that the proposed method can be made plug-and-play and become a standard layer in common deep learning libraries.\n\nFor the DeepLab model, there is actually no deconvolutional layer involved in the original DeepLab_v2 architecture. The size of the predictions is (1/8)*(1/8) of that of the labels. They employed a simple bilinear up-sampling operation on the predictions to bring them to the same size as the labels. The reason is that for the PASCAL VOC dataset, the shapes of most objects in the labels are very regular, and down-sampling the labels does not hurt the upper bound of mIoU much (according to Long_2015_CVPR, 100->96.4). This brings a significant advantage for bilinear interpolation. For example, if the original label contains a 16*16 square object, the model only needs to predict a 2*2 square correctly before bilinear up-sampling. In contrast, a model whose prediction has the same size as the original label needs to get 64 times more outputs correct. However, in order to compare deconvolution with our pixel deconvolution, we added three blocks to up-sample to the original label size. The results obtained achieve similar performance to the original model, and in this setting, the proposed layer improved the mIoU. We aimed to show that our proposed method is better than the deconvolution operation across different models and datasets, rather than to get the best result for any specific task.\n\nThe deconvolutional layer is sometimes irreplaceable for tasks such as generative models, where bilinear interpolation does not help at all. We didn’t show too many VAE results in the paper due to page limitations. These images are all generated randomly. By looking into the details, there are apparent checkerboard artifacts in images generated by the original VAE model. The results of our model effectively remove them without using more parameters.\n", "Thank you for your comments! We think there may be some misunderstanding by this reviewer. Firstly, the connectivity is not randomly chosen in our experiments. As per the analysis of the deconvolutional layer in Figure 3, a 2D deconvolutional layer with up-sampling factor 2 can be decomposed into four independent convolutional layers. The outputs of these four convolutional layers are periodically shuffled and combined. In the experiments, Figure 6 gives a clearer illustration of how connectivity is built among these four feature maps. Now let’s consider only a small part of the final output, the 2x2 pixels in the upper-left corner. The purple pixel (upper-left) is generated first, depending only on the input feature map. After that, the orange pixel (lower-right) is generated depending on the purple pixel. The green pixels (lower-left and upper-right) are generated depending on the purple and orange pixels. We use this connectivity because it makes all four pixels related to each other in only three steps: upper left -> lower right -> lower left and upper right. So, the connectivity in the experiments is carefully designed with computational efficiency in mind. \n\nFor the DeepLab-ResNet, we used the original training set and tested on the original validation set. On the other hand, the published PascalVOC IoU is obtained by testing on the test set while training on both the training and validation sets.
Meanwhile, DeepLab-ResNet also employs some other engineering tricks for image segmentation, such as multi-scale inference during testing, which is not related to what we aimed to improve. The performance gap is therefore reasonable, and we intended to show that we improve the deconvolution operation rather than a specific segmentation model.\n\nThe images in the VAE experiments are all generated randomly. By looking into the details, there are apparent checkerboard artifacts in the original VAE model. The results of our model effectively remove them without using more parameters. Since the performance of the new layer is apparent from the generated images, we didn’t show quantitative results in the paper.\n", "Although alternative approaches for upsampling have been developed, we believe our work is the first attempt to improve deconvolution itself. We do not think other similar approaches are simpler than ours. Our approach is as simple as the original deconvolutional layer, both conceptually and computationally, as demonstrated by the timing results. We are aware of Wojna et al. 2017, but it was published after our work was completed. We will add comparisons and discussions in a revised version of our paper." ]
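The sequential connectivity described in the response above (upper-left map generated first, then lower-right, then the two remaining positions, followed by a periodic shuffle) can be illustrated with the sketch below; this is a hypothetical PyTorch re-implementation of the idea rather than the authors' code, and the kernel size and channel widths are arbitrary assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SequentialPixelUpsample(nn.Module):
    """2x up-sampling in which the four sub-pixel feature maps are produced sequentially,
    so that later maps depend on earlier ones, instead of the four independent
    convolutions hidden inside a standard stride-2 deconvolution."""

    def __init__(self, channels, k=3):
        super().__init__()
        p = k // 2
        self.conv_a = nn.Conv2d(channels, channels, k, padding=p)      # "purple": from the input only
        self.conv_b = nn.Conv2d(channels, channels, k, padding=p)      # "orange": from the purple map
        self.conv_c = nn.Conv2d(2 * channels, channels, k, padding=p)  # "green": from purple + orange
        self.conv_d = nn.Conv2d(2 * channels, channels, k, padding=p)  # "green": from purple + orange

    def forward(self, x):
        a = self.conv_a(x)                      # upper-left sub-pixels, from the input feature map
        b = self.conv_b(a)                      # lower-right sub-pixels, conditioned on a
        ab = torch.cat([a, b], dim=1)
        c = self.conv_c(ab)                     # upper-right sub-pixels, conditioned on a and b
        d = self.conv_d(ab)                     # lower-left sub-pixels, conditioned on a and b
        interleaved = torch.stack([a, c, d, b], dim=2)         # (N, C, 4, H, W)
        return F.pixel_shuffle(interleaved.flatten(1, 2), 2)   # periodic shuffle -> (N, C, 2H, 2W)
```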
[ 5, 5, 6, -1, -1, -1 ]
[ 5, 4, 4, -1, -1, -1 ]
[ "iclr_2018_B1spAqUp-", "iclr_2018_B1spAqUp-", "iclr_2018_B1spAqUp-", "B1YorpYxz", "B1L5VaYgG", "BkZQtx5lz" ]
iclr_2018_ryzm6BATZ
Image Quality Assessment Techniques Improve Training and Evaluation of Energy-Based Generative Adversarial Networks
We propose a new, multi-component energy function for energy-based Generative Adversarial Networks (GANs) based on methods from the image quality assessment literature. Our approach expands on the Boundary Equilibrium Generative Adversarial Network (BEGAN) by outlining some of the short-comings of the original energy and loss functions. We address these short-comings by incorporating an l1 score, the Gradient Magnitude Similarity score, and a chrominance score into the new energy function. We then provide a set of systematic experiments that explore its hyper-parameters. We show that each of the energy function's components is able to represent a slightly different set of features, which require their own evaluation criteria to assess whether they have been adequately learned. We show that models using the new energy function are able to produce better image representations than the BEGAN model in predicted ways.
rejected-papers
The paper received borderline-negative scores (6,5,5), with R1 and R2 having significant difficulty with the clarity of the paper. Although R3 was marginally positive, they pointed out that the experiments are "extremely weak". The AC looked at the paper and agrees with R3 on this point. Therefore the paper cannot be accepted in its current form. The experiments and clarity need work before resubmission to another venue.
train
[ "Bk8udEEeM", "HJZIu0Kef", "H1NEs7Clz", "By1yEN7bG", "SJfxhQ7WG", "BJq66m7Wz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "Quick summary:\nThis paper proposes an energy based formulation to the BEGAN model and modifies it to include an image quality assessment based term. The model is then trained with CelebA under different parameters settings and results are analyzed.\n\nQuality and significance:\nThis is quite a technical paper, written in a very compressed form and is a bit hard to follow. Mostly it is hard to estimate what is the contribution of the model and how the results differ from baseline models.\n\nClarity:\nI would say this is one of the weak points of the paper - the paper is not well motivated and the results are not clearly presented. \n\nOriginality:\nSeems original.\n\nPros:\n* Interesting energy formulation and variation over BEGAN\n\nCons:\n* Not a clear paper\n* results are only partially motivated and analyzed", "This paper proposed some new energy function in the BEGAN (boundary equilibrium GAN framework), including l_1 score, Gradient magnitude similarity score, and chrominance score, which are motivated and borrowed from the image quality assessment techniques. These energy component in the objective function allows learning of different set of features and determination on whether the features are adequately represented. experiments on the using different hyper-parameters of the energy function, as well as visual inspections on the quality of the learned images, are presented. \n\nIt appears to me that the novelty of the paper is limited, in that the main approach is built on the existing BEGAN framework with certain modifications. For example, the new energy function in equation (4) larges achieves similar goal as the original energy (1) proposed by Zhao et. al (2016), except that the margin loss in (1) is changed to a re-weighted linear loss, where the dynamic weighting scheme of k_t is borrowed from the work of Berthelot et. al (2017). It is not very clear why making such changes in the energy would supposedly make the results better, and no further discussions are provided. On the other hand, the several energy component introduced are simply choices of the similarity measures as motivated from the image quality assessment, and there are probably a lot more in the literature whose application can not be deemed as a significant contribution to either theories or algorithm designs in GAN.\n\nMany results from the experimental section rely on visual evaluations, such as in Figure~4 or 5; from these figures, it is difficult to clearly pick out the winning images. In Figure~5, for a fair evaluation on the performance of model interploations, the same human model should be used for competing methods, instead of applying different human models and different interpolation tasks in different methods. \n ", "Summary: \nThe paper extends the the recently proposed Boundary Equilibrium Generative Adversarial Networks (BEGANs), with the hope of generating images which are more realistic. In particular, the authors propose to change the energy function associated with the auto-encoder, from an L2 norm (a single number) to an energy function with multiple components. Their energy function is inspired by the structured similarity index (SSIM), and the three components they use are the L1 score, the gradient magnitude similarity score, and the chromium score. Using this energy function, the authors hypothesize, that it will force the generator to generate realistic images. They test their hypothesis on a single dataset, namely, the CelebA dataset. 
\n\nReview: \nWhile the idea proposed in the paper is somewhat novel and there is nothing obviously wrong about the proposed approach, I thought the paper is somewhat incremental. As a result I kind of question the impact of this result. My suspicion is reinforced by the fact that the experimental section is extremely weak. In particular the authors test their model on a single relatively straightforward dataset. Any reason why the authors did not try on other datasets involving natural images? As a result I feel that the title and the claims in the paper are somewhat misleading and premature: that the proposed techniques improves the training and evaluation of energy based gans. \n\nOver all the paper is clearly written and easy to understand. \n\nBased on its incremental nature and weak experiments, I'm on the margin with regards to its acceptance. Happy to change my opinion if other reviewers strongly think otherwise with good reason and are convinced about its impact. ", "Thank you for your review and comments.\n\nCould you unpack what you mean by, \"It is not very clear why making such changes in the energy would supposedly make the results better, and no further discussions are provided\"? We explicitly state in section 2.1 that:\n\n\"It is not particularly surprising that these modifications to Equation 2 show improvements. Zhao et al. (2016) devote an appendix section to the correct selection of m and explicitly mention that the “balance between... real and fake samples[s]” (italics theirs) is crucial to the correct selection of m. Unsurprisingly, a dynamically updated parameter that accounts for this balance is likely to be the best instantiation of the authors’ intuitions and visual inspection of the resulting output supports this (see Berthelot et al., 2017).\"\n\nWhat kind of discussion would you have liked to see? If you're looking for a formal analysis, we would suggest reviewing Berthelot et al., (2017) and Arjovsky, Chintala, and Bottou (2017) for their discussions of the advantages of the Wasserstein distance over the alternatives. Section 5 bullet 3 in the latter explicitly addresses the differences between the original EBGAN margin loss and the Wasserstein distance, if you are interested. Sections 3.3 and 3.4 of the former address the equilibrium hyper-parameters of the BEGAN model (e.g., gamma, k_t).\n\nCould you also unpack what you mean by, \"there are probably a lot more [similarity measures] in the [IQA] literature whose application can not be deemed as a significant contribution to either theories or algorithm designs in GAN\"? We assume that you are not saying that the mere existence of other methods is damning to the scientific study of some subset of those methods. Please clarify how our modification of the energy-based formulation of GANs to emphasize a more important role for the energy function (generally assumed to be an l1 or l2 norm across many studies) is not a significant contribution to GAN research?\n\nCould you also unpack what you mean by, \"human model\"? We would like to clarify that the function of Figure 5 is to illustrate how image diversity has not been lost when using the new evaluation. It is not trying to show how one set of images are better than another.", "Thank you for your review and comments.\n\nWe have been working on extending our research to include other datasets. The primary challenge is that the stock BEGAN model does rather poorly on datasets that do not have a lot of regular structure like the CelebA dataset. 
Consequently, we have preliminary results that are suggestive for MNIST and the msceleb dataset, but we've been unable to show any interesting results on Imagenet or the LSUN bedrooms dataset. \n\nOur suspicion is that these are issues with the stock network design/structure. We are currently working with an EBM-based modification of the model from \"Progressive Growing of GANs for Improved Quality, Stability, and Variation\" (this conference), which seems to replicate our results on other datasets. Our research is still very preliminary, though.\n\nWe are curious what the 'correct' number of datasets is for a conference proceedings paper (which is extremely short). The original BEGAN paper only uses one, custom dataset. The EBGAN paper has 4 (if you count MNIST). The WGAN-GP paper has 2 plus some artificial datasets. Consequently, there doesn't seem to be any consensus in the community on this point.", "Thank you for your review and comments.\n\nCould you be more specific about what needs greater clarity?\n\nAll of the models are modifications upon the original BEGAN model except model 1 (which is the original model). All of the modifications are based upon different hyper-parameter sets of equation 8 which are outlined in Table 1. Sections 2.2 and 2.3 motivate the modifications we made." ]
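For context on the "dynamic weighting scheme of k_t" referred to in these reviews and responses, a minimal sketch of the proportional-control update from Berthelot et al. (2017) that the submission builds on is given below; the hyper-parameter values are the commonly quoted BEGAN defaults, not necessarily those used in this paper:

```python
def update_k(k, energy_real, energy_fake, gamma=0.5, lambda_k=1e-3):
    """One BEGAN-style equilibrium step on scalar (float) energies: k balances how much
    of the fake-sample energy is subtracted from the discriminator loss, driving the
    ratio of fake to real energies towards the diversity parameter gamma."""
    k = k + lambda_k * (gamma * energy_real - energy_fake)
    return min(max(k, 0.0), 1.0)  # keep k in [0, 1]

# The per-step losses then read
#   loss_D = energy_real - k * energy_fake
#   loss_G = energy_fake
# where energy_* is the (possibly multi-component) auto-encoder reconstruction energy.
```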
[ 5, 5, 6, -1, -1, -1 ]
[ 3, 3, 3, -1, -1, -1 ]
[ "iclr_2018_ryzm6BATZ", "iclr_2018_ryzm6BATZ", "iclr_2018_ryzm6BATZ", "HJZIu0Kef", "H1NEs7Clz", "Bk8udEEeM" ]
iclr_2018_BJjBnN9a-
Continuous Convolutional Neural Networks for Image Classification
This paper introduces the concept of continuous convolution to neural networks and deep learning applications in general. Rather than directly using discretized information, input data is first projected into a high-dimensional Reproducing Kernel Hilbert Space (RKHS), where it can be modeled as a continuous function using a series of kernel bases. We then proceed to derive a closed-form solution to the continuous convolution operation between two arbitrary functions operating in different RKHS. Within this framework, convolutional filters also take the form of continuous functions, and the training procedure involves learning the RKHS to which each of these filters is projected, alongside their weight parameters. This results in much more expressive filters, that do not require spatial discretization and benefit from properties such as adaptive support and non-stationarity. Experiments on image classification are performed, using classical datasets, with results indicating that the proposed continuous convolutional neural network is able to achieve competitive accuracy rates with far fewer parameters and a faster convergence rate.
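As background for the closed-form continuous convolution this abstract appeals to (a standard Gaussian identity, not reproduced from the paper itself): convolving two Gaussian basis functions yields another Gaussian whose mean and covariance are the sums of the originals,

```latex
\big(\mathcal{N}(\cdot;\,\mu_1,\Sigma_1) \ast \mathcal{N}(\cdot;\,\mu_2,\Sigma_2)\big)(x)
  \;=\; \mathcal{N}\!\big(x;\,\mu_1+\mu_2,\;\Sigma_1+\Sigma_2\big),
```

so the convolution of two functions written as weighted sums of Gaussian bases reduces to a sum of pairwise terms of this form, which is presumably the kind of closed form exploited here.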
rejected-papers
The paper received borderline negative scores: 5,6,4. The authors' response to R1's question about the motivations was "...thus can achieve similar classification results with much smaller network sizes. This translates into smaller memory requirements, faster computational speeds and higher expressivity." If this is really the case, then some experimental comparison to compression methods (e.g. Song Han's PhD work at Stanford) is needed to back this up. R4 raises issues with the experimental evaluation, and the AC agrees with them that the experiments are disappointing. In general R4 makes some good suggestions for improving the paper. The authors' rebuttal also makes the general point that the paper should be accepted because it contains ideas, and that these alone are sufficient: "We strongly believe that with some fine-tuning it could achieve considerably better results, however we also believe that this is not the point in a first submission...". The AC disagrees with this. Ideas are cheap. *Good ideas*, i.e. those that work, as in getting good performance on standard benchmarks, are valuable, however. The reason for having benchmarks is to give some objective way of seeing whether an idea has any merit to it. So while the reviewers and the AC accept that the paper has some interesting ideas, this is not enough to warrant acceptance.
train
[ "rklk7M9ez", "S1_ETHcef", "rknFtmdff", "H1N0Bs5Xz", "Sk_lrgkmz", "SkC64gJ7M", "ByIjVlJ7z", "HkS27lkmz", "Hys4mg1mz", "ByTIk_5pZ" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "public" ]
[ "The paper introduces the notion of continuous convolutional neural networks. \nThe main idea of the paper is to project examples into an RK Hilbert space\nand performs convolution and filtering into that space. Interestingly, the\nfilters defined in the Hilbert space have parameters that are learnable.\n\nWhile the idea may be novel and interesting, its motivation is not clear for\nme. Is it for space? for speed? for expressivity of hypothesis spaces? \nMost data that are available for learning are in discrete forms and hopefully,\nthey have been digitalized according to Shannon theory. This means that they bring\nall necessary information for rebuilding their continuous counterpart. Hence, it is\nnot clear why projecting them back into continuous functions is of interest. \n\nAnother point that is not clear or at least misleading is the so-called Hilbert Maps.\nAs far as I understand, Equation (4) is not an embedding into an Hilbert space but\nis more a proximity space representation [1]. Hence, the learning framework of the\nauthors can be casted more as a learning with similarity function than learning\ninto a RKHS [2]. A proper embedding would have mapped $x$ into a function\nbelonging to $\\mH$. In addition, it seems that all computations are done\ninto a \\ell^2 space instead of in the RKHS (equations 5 and 11). \nLearning good similarity functions is also not novel [3] and Equations\n(6) and (7) corresponds to learning these similarity functions.\nAs far as I remember, there exists also some paper from the nineties that\nlearn the parameters of RBF networks but unfortunately I have not been able to\ngoogle some of them.\n\n\nPart 3 is the most interesting part of the paper, however it would have been\ngreat if the authors provide other kernel functions with closed-form convolution \nformula that may be relevant for learning.\nThe proposed methodology is evaluated on some standard benchmarks in vision. While\nresults are pretty good, it is not clear how the various cluster sets have been obtained\nand what are their influence on the performances (if they are randomly initialized, it \nwould be great to see standard deviation of performances with respect to initializations).\nI would also be great to have intuitions on why a single continuous filter works betters\nthan 20 discrete ones (if this behaviour is consistent accross initialization).\n\nOn the overall, while the idea may be of interested, the paper lacks in motivations\nin connecting to relevant previous works and in providing insights on why it works.\nHowever, performance results seem to be competitive and that's the reader may\nbe eager for insights.\n\n\nminor comments\n---------------\n\n* the paper employs vocabulary that is not common in ML. eg. I am not sure what\noccupancy values, or inducing points are. \n\n* Supposingly that the authors properly consider computation in RKHS, then \\Sigma_i\nshould be definite positive right? how update in (7) is guaranteed to be DP? \nThis constraints may not be necessary if instead they used proximity space representation.\n\n\n\n\n\n[1] https://alex.smola.org/papers/1999/GraHerSchSmo99.pdf\n[2] https://www.cs.cmu.edu/~avrim/Papers/similarity-bbs.pdf\n[3] A. Bellet, A. Habrard and M. Sebban. Similarity Learning for Provably Accurate Sparse Linear Classification. ", "This paper aims to provide a continuous variant of CNN. The main idea is to apply CNN on Hilbert maps of the data. 
The data is mapped to a continuous Hilbert space via a reproducing kernel and a convolution layer is defined using the kernel matrix. A convolutional Hilbert layer algorithm is introduced and evaluated on image classification data sets.\n\nThe paper is well written and provides some new insights on incorporating kernels in CNN.\n\nThe kernel matrix in Eq. 5 is not symmetric and the kernel function in Eq. 3 is not defined over a pair of inputs. In this case, the projections of the data via the kernel are not necessarily in a RKHS. The connection between Hilbert maps and RKHS in that sense is not clear in the paper.\n\nThe size of a kernel matrix depends on the sample size. In large scale situations, working with the kernel matrix can be computational expensive. It is not clear how this issue is addressed in this paper.\n\nIn section 2.2, how \\mu_i and \\sigma_i are computed?\n\nHow the proposed approach can be compared to convolutional kernel networks (NIPS paper) of Mairal et al. (2014)?", "This paper formulates a variant of convolutional neural networks which models both activations and filters as continuous functions composed from kernel bases. A closed-form representation for convolution of such functions is used to compute in a manner than maintains continuous representations, without making discrete approximations as in standard CNNs.\n\nThe proposed continuous convolutional neural networks (CCNNs) project input data into a RKHS with a Gaussian kernel function evaluated at a set of inducing points; the parameters defining the inducing points are optimized via backprop. Filters in convolutional layers are represented in a similar manner, yielding a closed-form expression for convolution between input and filters. Experiments train CCNNs on several standard small-scale image classification datasets: MNIST, CIFAR-10, STL-10, and SVHN.\n\nWhile the idea is interesting and might be a good alternative to standard CNNs, the paper falls short in terms of providing experimental validation that would demonstrate the latter point. It unfortunately only experiments with CCNN architectures with a small number (eg 3) layers. They do well on MNIST, but MNIST performance is hardly informative as many supervised techniques achieve near perfect results. The CIFAR-10, STL-10, and SVHN results are disappointing. CCNNs do not outperform the prior CNN results listed in Table 2,3,4. Moreover, these tables do not even cite more recent higher-performing CNNs. See results table in (*) for CIFAR-10 and SVHN results on recent ResNet and DenseNet CNN designs which far outperform the methods listed in this paper.\n\nThe problem appears to be that CCNNs are not tested in a regime competitive with the state-of-the-art CNNs on the datasets used. Why not? To be competitive, deeper CCNNs would likely need to be trained. I would like to see results for CCNNs with many layers (eg 16+ layers) rather than just 3 layers. Do such CCNNs achieve performance compatible with ResNet/DenseNet on CIFAR or SVHN? Given that CIFAR and SVHN are relatively small datasets, training and testing larger networks on them should not be computationally prohibitive.\n\nIn addition, for such experiments, a clear report of parameters and FLOPs for each network should be included in the results table. This would assist in understanding tradeoffs in the design space.\n\nAdditional questions:\n\nWhat is the receptive field of the CCNNs vs those of the standard CNNs to which they are compared? 
If the CCNNs have effectively larger receptive field, does this create a cost in FLOPs compared to standard CNNs?\n\nFor CCNNs, why does the CCAE initialization appear to be essential to achieving high performance on CIFAR-10 and SVHN? Standard CNNs, trained on supervised image classification tasks do not appear to be dependent on initialization schemes that do unsupervised pre-training. Such dependence for CCNNs appears to be a weakness in comparison.", "Thank you very much for this comment, we have read the paper and incorporated some discussion of these approaches into our own paper, including how they are connected and how our proposed framework differs from previous methods. ", "------------------\n[1] https://alex.smola.org/papers/1999/GraHerSchSmo99.pdf \n[2] https://www.cs.cmu.edu/~avrim/Papers/similarity-bbs.pdf \n[3] A. Bellet, A. Habrard and M. Sebban. Similarity Learning for Provably Accurate Sparse Linear Classification. International Conference on Machine Learning (ICML), 2012.\n[4] S. Vasudevan, F. Ramos, E. Nettleton, and H. Durrant-Whyte. Gaussian process modeling of large-scale terrain. Journal of Field Robotics (JFR), 26(10):812–840, 2010.\n[5] J. Mairal, P. Koniusz, Z. Harchaoui, and C. Schmid. Convolutional kernel networks. In\narXiv:1406.3332, 2014.\n[6] J. Mairal. End-to-end kernel learning with supervised convolutional neural networks. In Advances in Neural Information Processing Systems (NIPS), 2016.\n[7] F. Ramos and L. Ott. Hilbert maps: Scalable continuous occupancy mapping with stochastic gradient descent. In Proceedings of Robotics: Science and Systems (RSS), 2015.\n[8] V. Guizilini and F. Ramos. Large-scale 3d scene reconstruction with Hilbert maps. In Proceedings of the IEEE International Conference on Intelligent Robots and Systems (IROS), 2016.", "----------\n– Q: While the idea may be novel and interesting, its motivation is not clear for me. Is it for space? for speed? for expressivity of hypothesis spaces? Most data that are available for learning are in discrete forms and hopefully, they have been digitalized according to Shannon theory. This means that they bring all necessary information for rebuilding their continuous counterpart. Hence, it is not clear why projecting them back into continuous functions is of interest. \n– A: We have shown in the paper that continuous kernels produce much more descriptive convolutional filters, and thus can achieve similar classification results with much smaller network sizes. This translates into smaller memory requirements, faster computational speeds and higher expressivity, all very attractive properties in Deep Learning applications. Furthermore, projecting information as continuous functions in the RKHS produces an alternative form of data representation, that can be exploited for learning purposes. For example, spatial dependencies are modeled using kernels functions and thus are not constrained to grid representations, producing properties such as adaptive support, non-stationarity and automatic relevance determination. We are not arguing that our approach is better or worse than standard discrete convolution, but rather that it is different and worth exploring by the scientific community. Also, because it introduces a novel data representation for learning purposes, ICLR would be the perfect vehicle for a first submission. \n----------\n– Q: Another point that is not clear or at least misleading is the so-called Hilbert Maps. 
As far as I understand, Equation (4) is not an embedding into an Hilbert space but is more a proximity space representation [1]. Hence, the learning framework of the authors can be casted more as a learning with similarity function than learning into a RKHS [2]. A proper embedding would have mapped $x$ into a function belonging to $\\mH$. In addition, it seems that all computations are done into a \\ell^2 space instead of in the RKHS (equations 5 and 11). Learning good similarity functions is also not novel [3] and Equations (6) and (7) corresponds to learning these similarity functions. As far as I remember, there exists also some paper from the nineties that learn the parameters of RBF networks but unfortunately I have not been able to google some of them.\n– A: Equation (4) was introduced in the original Hilbert Maps paper [7] as one of three possible feature vectors, alongside Random Fourier and Nyström features; and further explored in [8] to include non-stationarity and inducing point placement using clustering. Further theoretical details on how this is an embedding into a Hilbert space were left out for conciseness, since the original papers are cited accordingly. While similarity functions try to approximate input values in a high-dimensional space, for tasks such as clustering, a feature vector is used to approximate popular kernels using dot products (Section 2.2 was rewritten to clarify this property). Equations (5) and (11) use block-matrices K containing, in each row, the feature vector corresponding to each input point, which is then multiplied by the weights to produce a single scalar for each row, which is then added to the bias. This is standard procedure for the Hilbert Maps training and inference process, but rewritten as a neural network layer to allow back-propagation and joint kernel parameter learning during the training process. The parameters that perform this projection into a RKHS (i.e. mean and variance) are learned using the proposed framework, and Equations (6) and (7) provide the gradients for mean and variance values, respectively, to be used during back-propagation.\n----------\n– Q: Part 3 is the most interesting part of the paper, however it would have been great if the authors provide other kernel functions with closed-form convolution formula that may be relevant for learning. \n– A: Although a more detailed study of different kernel functions for continuous convolution would be interesting, it does not add to the core theoretical aspects of the proposed framework, so it was left out to produce a more concise text. We are currently working on a survey of such kernels, describing their advantages and shortcomings for different learning tasks, and will include it as an appendix on the final version of the paper. ", "----------\n– Q: The proposed methodology is evaluated on some standard benchmarks in vision. While results are pretty good, it is not clear how the various cluster sets have been obtained and what are their influence on the performances (if they are randomly initialized, it would be great to see standard deviation of performances with respect to initializations). \n– A: Although not required, in all experiments the cluster set was initialized as a grid, with mean values equally spaced and the same variance value, so that the distance between mean values is equal to two standard deviations (weight values are initialized randomly, using a Gaussian distribution with mean 0 and variance 0.1). 
This was done to guarantee a good coverage of the entire input space even with a small number of clusters. These values were then optimized accordingly (input data using the joint kernel learning methodology from Section 2.3 and feature maps using the classification methodology from Section 3). This was clarified on the paper, to facilitate the reader’s understanding. \n----------\n– Q: I would also be great to have intuitions on why a single continuous filter works betters than 20 discrete ones (if this behaviour is consistent accross initialization). On the overall, while the idea may be of interested, the paper lacks in motivations in connecting to relevant previous works and in providing insights on why it works. However, performance results seem to be competitive and that's the reader may be eager for insights. \n– A: Projecting discrete data into a continuous function in a RKHS provides an alternative method of data representation, which we can exploit to produce more descriptive feature maps. For example, we are not constrained to a fixed-size grid map, but rather have inducing points that are free to move around the input space, and these positions, alongside other kernel parameters (i.e. variance values) are learned during the training process in conjunction with the more traditional weight values. This produces certain degrees of freedom in the learning process that cannot be achieved with standard discrete convolutional kernels, especially when dealing with such narrow and shallow topologies. We provide connections with previous works on Hilbert Maps, and with tangentially similar works on RKHS projection for convolution, however the proposed methodology is novel and still has not been explored in a neural network context, for deep learning purposes. \n----------\n– Q: The paper employs vocabulary that is not common in ML. eg. I am not sure what occupancy values, or inducing points are. \n– A: Occupancy values are simply the probability of a given input point to be occupied or not, varying from 0.0 (not occupied) to 0.5 (unknown) and 1.0 (occupied). They are given by the classifier used as the occupancy model, based on input points projected into the RKHS. The inducing set is used to approximate training data using a smaller subset of points, for computational purposes (the number M of inducing points is much smaller than the number N of training points, M << N). Once optimization is complete, the training data can be discarded and only the inducing set is maintained, which greatly decreases memory requirements. These terms were clarified in the paper, to facilitate the reader’s understanding. \n----------\n– Q: Supposing that the authors properly consider computation in RKHS, then \\Sigma_i should be definite positive right? how update in (7) is guaranteed to be DP? This constraints may not be necessary if instead they used proximity space representation. \n– A: To guarantee positive-definiteness, we are in fact learning a lower triangular matrix V, which is then used to produce \\Sigma_i = U^T . U, a positive-definite matrix. U is assumed to be invertible (i.e. it has no zeros on the main diagonal), which indeed cannot be guaranteed during the optimization process, however that was never the case in any of the experiments. 
We attribute this behavior to the initialization procedure, which places a large variance value for each kernel, so it stabilizes before reaching values close to zero (the noisy nature of stochastic gradient descent also naturally avoids exact values of zero for trainable parameters). This has been clarified in the paper, to facilitate the reader’s understanding. ", "----------\n– Q: The kernel matrix in Eq. 5 is not symmetric and the kernel function in Eq. 3 is not defined over a pair of inputs. In this case, the projections of the data via the kernel are not necessarily in an RKHS. The connection between Hilbert maps and RKHS in that sense is not clear in the paper. \n– A: Equation (3) is defined over pairs of inputs in the sense that it correlates input points with inducing points, according to a covariance matrix that acts as a length-scale. The equation was rewritten for clarity, and a better explanation of this behavior was provided to facilitate the reader’s understanding. The kernel matrix in Equation (5) is not square, so it cannot be symmetric (i.e. it is not a covariance matrix). It is an N x M matrix containing, in each row, the feature vector corresponding to each input point, which is then multiplied by the weights to produce a single scalar for each row, to which the bias is then added. This is the standard procedure for the Hilbert Maps training and inference process, but rewritten as a neural network layer to allow back-propagation and joint kernel parameter learning during the training process.\n----------\n– Q: The size of a kernel matrix depends on the sample size. In large-scale situations, working with the kernel matrix can be computationally expensive. It is not clear how this issue is addressed in this paper. \n– A: The number of inducing points used for RKHS projection is typically much smaller than the number of training points (especially at higher dimensions), which alleviates large-scale issues. Additionally, the proposed framework can be sparsified by considering only a subset of inducing points when calculating the feature vector for each input point. This strategy has been successfully applied in a Gaussian process context [4], and can be easily extended to the proposed framework with minimal modifications. This was not addressed in this paper due to software limitations when dealing with back-propagation through sparse matrices; however, as mentioned, it is planned for future work and a stable code release. \n----------\n– Q: In Section 2.2, how are \\mu_i and \\sigma_i computed? \n– A: In the original Hilbert Maps paper, the authors cluster input data and use each subset of points to calculate statistical mean and variance values. In this paper, these values are obtained using the joint kernel learning methodology proposed in Section 2.3 to produce optimal weight, mean, and variance values from initial guesses. In all experiments, the clusters were initialized as a grid, with mean values equally spaced and the same variance value, so that the distance between mean values is equal to two standard deviations (weight values are initialized randomly, using a Gaussian distribution with mean 0 and variance 0.1). This has been clarified in the paper, to facilitate the reader’s understanding. \n----------\n– Q: How can the proposed approach be compared to the convolutional kernel networks (NIPS paper) of Mairal et al. 
(2014)?\n– A: To the best of our knowledge, the works of [5,6] are the most similar to the proposed methodology, in the sense that both apply RKHS projections using kernels to produce convolutional results in a a multi-layer neural network. However, there are key differences in how this is achieved, most notably because Convolutional Kernel Networks (CKN) still rely on discrete image patches, that are projected individually into the RKHS via the kernel function, and its parameters are the same as in standard discrete convolution (number of layers, number of filters, shape of filters and size of feature maps), while the others (\\beta_k and \\sigma_k) are automatically chosen. On the other hand, the proposed methodology first projects the entire input data into the RKHS via the kernel functions, and then performs convolution directly in this projected continuous function, without ever touching the original dataset again. Additionally, the proposed methodology also learns extra kernel parameters (i.e. mean and variance) on top of the standard discrete convolution parameters. This analysis has been added to the paper, for a better understanding of the differences between these two techniques. ", "----------\n– Q: Experiments train CCNNs on several standard small-scale image classification datasets: MNIST, CIFAR-10, STL-10, and SVHN. While the idea is interesting and might be a good alternative to standard CNNs, the paper falls short in terms of providing experimental validation that would demonstrate the latter point. It unfortunately only experiments with CCNN architectures with a small number (eg 3) layers. CCNNs do not outperform the prior CNN results listed in Table 2,3,4. Moreover, these tables do not even cite more recent higher-performing CNNs. The problem appears to be that CCNNs are not tested in a regime competitive with the state-of-the-art CNNs on the datasets used. Why not? \n– A: We agree that experimental results are not on par with the latest achievements in these datasets, however we would like to point out that the CCNN topologies used in this paper are much simpler than standard CNN state-of-the-art counterparts, containing only a fraction of the number of trainable parameters, and do not include many of the regularization techniques and optimization tricks commonly used to avoid these shortcomings. This was a choice, so we can analyze this novel technique by itself in a more pure state, without relying on quick fixes that are already available in the literature and can be easily incorporated regardless of which convolutional layer (continuous or discrete) is utilized. Additionally, the proposed framework consistently outperforms Convolutional Kernel Networks [6], which is currently the most well-known deep learning approach that relies on kernel functions and RKHS projections. Stable code will be released with the paper, and we will encourage and work alongside interested parties in order to test the proposed framework under different conditions, but we believe a first submission should focus more on the theoretical aspects and less on fine-tuning for optimal performance. And, as a conference on learning representations, ICLR would be the perfect vehicle to introduce a novel methodology for data modeling in deep learning tasks.\n----------\n– Q: What is the receptive field of the CCNNs vs those of the standard CNNs to which they are compared? If the CCNNs have effectively larger receptive field, does this create a cost in FLOPs compared to standard CNNs? 
\n– A: The proposed framework does not have a fixed receptive field, but rather a fixed number of inducing points that compose each feature map. The location (and variance) of these inducing points is optimized during training, so they can be further or nearer the center of the feature map as needed, in order to minimize the cost function. Therefore, a CCNN can have a larger receptive field in comparison to a CNN without necessarily increasing FLOPs. The number of inducing points for the proposed classification topology is described in the experiments section, and vary for each layer of the neural network (25-16-9). If converted to receptive field sizes, these are within the standard sizes for CNNs (5x5, 4x4 and 3x3).\n----------\n– Q: For CCNNs, why does the CCAE initialization appear to be essential to achieving high performance on CIFAR-10 and SVHN? Standard CNNs, trained on supervised image classification tasks do not appear to be dependent on initialization schemes that do unsupervised pre-training. Such dependence for CCNNs appears to be a weakness in comparison.\n– A: The convolutional filters in a CCNN are more expressive than in a standard CNN, and therefore have more degrees of freedom, which creates more stable suboptimal solutions during the optimization process. The CCAE initialization provides better starting points for these convolutional filters, so they can converge to more optimal solutions. We agree that this is a weakness, however it is worth mentioning that the CCNN topologies used in experiments are much simpler than standard CNN state-of-the-art counterparts, and do not include many of the regularization techniques and optimization tricks commonly used to avoid these shortcomings. This was a choice, so we can analyze this novel technique by itself in a more pure state, without relying on quick fixes that are already available in the literature and can be easily incorporated to mask otherwise interesting behaviors (such as this one). ", "Very interesting approach to continuously relax convolutional filters. There's been substantial literature on infinite dimensional neural networks, such as your proposal, since the early 90s, that you might want to cite in your work. This paper (https://openreview.net/forum?id=H1pri9vTZ) outlines all of these approaches and gives some theoretical justification for your relaxation and others like it.\n\n\nBest of luck!" ]
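To make the feature-vector discussion above (Equations (4) and (5) of the reviewed paper) easier to follow, here is a minimal sketch of a Hilbert-Maps-style squared-exponential feature vector over inducing points, followed by the linear readout described in the responses. The kernel form, function names, and initialization values below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def hilbert_features(X, means, covs):
    """Squared-exponential features over M inducing points (assumed form of Eq. (4)).

    X:     (N, D) input points
    means: (M, D) learned inducing-point locations
    covs:  (M, D, D) learned positive-definite covariances (length-scales)
    Returns an (N, M) matrix with one feature vector per row, as in Eq. (5).
    """
    N, M = X.shape[0], means.shape[0]
    K = np.empty((N, M))
    for j in range(M):
        diff = X - means[j]                        # (N, D) offsets to inducing point j
        P = np.linalg.inv(covs[j])                 # precision of the j-th kernel
        K[:, j] = np.exp(-0.5 * np.sum((diff @ P) * diff, axis=1))
    return K

def occupancy_logits(X, means, covs, w, b):
    """Each row of K is dotted with the weights and the bias is added (Eqs. (5)/(11))."""
    return hilbert_features(X, means, covs) @ w + b

# Toy usage: 100 2-D points, a 3x3 grid of inducing points spaced two standard deviations apart.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
g = np.linspace(-1.0, 1.0, 3)
means = np.stack(np.meshgrid(g, g), axis=-1).reshape(-1, 2)
covs = np.tile((0.25 * np.eye(2))[None], (len(means), 1, 1))   # std 0.5 = half the grid spacing
w, b = rng.normal(scale=0.1, size=len(means)), 0.0
occupancy = 1.0 / (1.0 + np.exp(-occupancy_logits(X, means, covs, w, b)))   # values in (0, 1)
```

Each row of K plays the role of one row of the block matrix in Equation (5), and the grid initialization with two-standard-deviation spacing mirrors the procedure described in the responses above.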
[ 5, 6, 4, -1, -1, -1, -1, -1, -1, -1 ]
[ 3, 2, 4, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_BJjBnN9a-", "iclr_2018_BJjBnN9a-", "iclr_2018_BJjBnN9a-", "ByTIk_5pZ", "iclr_2018_BJjBnN9a-", "rklk7M9ez", "rklk7M9ez", "S1_ETHcef", "rknFtmdff", "iclr_2018_BJjBnN9a-" ]
iclr_2018_rkQu4Wb0Z
DNN Representations as Codewords: Manipulating Statistical Properties via Penalty Regularization
The performance of a Deep Neural Network (DNN) heavily depends on the characteristics of its hidden layer representations. Unlike the codewords of channel coding, however, the representations of learning cannot be directly designed or controlled. Therefore, we develop a family of penalty regularizers where each one aims to affect one of a representation's statistical properties, such as sparsity, variance, or covariance. The regularizers are extended to perform class-wise regularization, and the extension is found to provide an outstanding shaping capability. A variety of statistical properties are investigated for 10 different regularization strategies, including dropout and batch normalization, and several interesting findings are reported. Using the family of regularizers, performance improvements are confirmed for the MNIST, CIFAR-100, and CIFAR-10 classification problems. More importantly, our results suggest that understanding how to manipulate the statistical properties of representations can be an important step toward understanding DNNs, and that the role and effect of DNN regularizers need to be reconsidered.
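The penalty family described in the abstract can be pictured as activation-level terms added to the task loss. The sketch below gives plausible variance and covariance penalties on a hidden layer, plus a class-wise wrapper; the exact definitions, signs, and weighting used in the paper may differ.

```python
import torch

def variance_penalty(h):
    """Per-unit activation variance of a hidden layer h with shape (batch, units)."""
    return h.var(dim=0, unbiased=False).mean()

def covariance_penalty(h):
    """DeCov-style penalty: squared off-diagonal entries of the activation covariance."""
    hc = h - h.mean(dim=0, keepdim=True)
    cov = hc.T @ hc / h.shape[0]
    off_diag = cov - torch.diag(torch.diag(cov))
    return 0.5 * (off_diag ** 2).sum()

def class_wise(penalty, h, y):
    """Apply a penalty within each class separately and average (cw-VR / cw-CR style)."""
    vals = [penalty(h[y == c]) for c in torch.unique(y) if (y == c).sum() > 1]
    return torch.stack(vals).mean() if vals else h.new_zeros(())

# Typical wiring: total_loss = task_loss + lam * class_wise(variance_penalty, hidden, labels)
```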
rejected-papers
The paper received scores of 5, 5, 5, with the reviewers agreeing the paper was marginally below the acceptance threshold. The main issue, raised by both R2 and R3, was that the connection between representation learning in deep nets and coding theory was not fully justified or made. With no reviewer advocating acceptance, it is not possible to accept the paper, unfortunately.
test
[ "B1H8H5YxG", "HkSWIWqez", "BkuLn46ez", "r18LCpbQG", "H1uR66W7G", "SkTKap-Qf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "This paper presents a set of regularizers which aims for manipulating the statistical properties like sparsity, variance and covariance. While some of the proposed regularizers are applied to weights, most are applied to hidden representations of neural networks. Class-wise regularizations are also investigated for the purpose of fine-grained control of statistics within each class. Experiments over MNIST, CIFAR10 and CIFAR100 demonstrate the usefulness of this technique.\n\nThe following related work also studied the regularizations on hidden representations which are motivated from clustering perspective and share some similarities with the proposed one. It would be great to discuss the relationship.\n\nLiao, R., Schwing, A., Zemel, R. and Urtasun, R., 2016. Learning deep parsimonious representations. NIPS.\n\nPros:\n(1) The paper is clearly written.\n\n(2) The visualizations of hidden activations are very helpful in understanding the effect of different regularizers.\n\n(3) The proposed regularizations are simple and computationally efficient.\n\nCons:\n(1) The novelty of the paper is limited as most of the proposed regularizers are more or less straightforward modifications over DeCov.\n\n(2) When we manipulate the statistics of representations we aim for something, like improving generalization, interpretability. But as pointed out by authors, improvement of generalization performance is not the main focus. I also do not find significant improvement from all experiments. Then the question is what is the main benefit of manipulating various statistics? \n\nI have an additional question as below:\nIn measuring the ratio of dead units, I notice authors using the criterion of “not activated on all classes”. However, do you check this criterion over the whole epoch or just some mini-batches?\n\nOverall, I think the paper is technically sound. But the novelty and significance are a bit unsatisfactory. I would like to hear authors’ feedback on the issues I raised.\n", "1. Summary\nThe authors of the paper compare the learning of representations in DNNs with Shannons channel coding theory, which deals with reliably sending information through channels. In channel coding theory the statistical properties of the coding of the information can be designed to fit the task at hand. With DNNs the representations cannot be designed in the same way. But the representations, learned by DNNs, can be affected indirectly by applying regularization. Regularizers can be designed to affect statistical properties of the representations, such as sparsity, variance, or covariance. The paper extends the regularizers to perform per-class regularization. This makes sense, because, for example, forcing the variance of a representation to go towards zero is undesirable as it would state that the unit always has the same output no matter the input. On the other hand having zero variance for a class is desirable as it means that the unit has a consistent activation for all samples of the same class. The paper compares different regularization techniques regarding their error performance. They find that applying representation regularization outperforms classical approaches such as L1 and L2 weight regularization. They also find, that performing representation regularization on the last layer achieves the best performance. Class-wise methods generally outperform methods that apply regularization on all classes.\n\n2. 
Remarks\nShannons channel coding theory was used by the authors to derive regularizers, that manipulate certain statistical properties of representations learned by DNNs. In the reviewers opinion, there is no theoretical connection between DNNs and channel theory. For one, DNNs are no channels in the sense that they transmit information. DNNs are rather pipes that transform information from one domain to another, where representations are learned as an intermediate model as the information is being transformed. Noise introduced in the process is not due to a faulty channel but due to the quality of the learned representations themselves. The paper falls short in explaining how DNNs and Shannons channel coding theory fit together theoretically and how they used it to derive the proposed regularizers. Despite the theoretical gap between the two was not properly bridged by the authors, channel coding theory is still a good metaphor for what they were trying to achieve.\nThe authors recognize that there is similar research being done independently by Belharbi et al. (2017). The similarities and differences between the proposed work and Belharbi et al. should be discussed in more detail.\nThe authors conclude that it is unclear which statistical properties of representations are generally helpful when being strengthened. It would be nice if they had derived at least a set of rules of thumb. Especially because none of the regularizers described in the paper only target one specific statistical property but multiple. One good example that was provided, is that L1-rep consistently failed to train on CIFAR-100, because too much sparsity can hurt performance, when having many different classes (100 in this case). These kinds of conclusions will make it easier to transfer the presented theory into practice.\n\n3. Conclusion\nThe comparison between DNNs and Shannons channel coding theory stands on shaky ground. The proposed regularizes are rather simple, but perform well in the experiments. The effect of each regularizer on the statistical properties of the representation and the relations to previous work (especially Belharbi et al. (2017)) should be discussed in more detail. ", "This is a well-written paper which starts on a good premise: DNNs learn representations; good representations are good for prediction; they can be seen as codes of the input information, so let us look at coding/communication/information theory for inspiration. This is fine and recent work from N Tishby's group develop some intriguing observations from this. But this paper doesn't follow through the information/communication story in any persuasive way. All that is derived is that it may be a good idea to penalise large variations in the representation -- within-class variations, in particular. The paper does a good job of setting up and comparing empirical performance of various regularizers (penalties on weights and penalties on hidden unit representations) and compares results against a baseline. Error rates (on MNIST, for example) are very small (baseline 3% versus the best in this paper 2.43%), but, if I am right, these are results quoted on a single test set. The uncertainties are over different runs of the algorithm and not over different partitions of the data into training and test sets. 
I find this worrying -- is there a case (in these datasets that have been around for so long and so widely tested), there is a commmunity-wide hill climbing on the test set -- reporting results that just happen to be better than a previous attempt on the specific test set? Is it not time to pool all the data and do cross validation (by training and testing on different partitions) so that we can evaluate the uncertainty in these results more accurarately?", "Thanks for your feedback. As you can see from the above responses to the other two reviewers, we fully agree with the feedbacks from the three reviewers.\n\nThank you for your comment on the visualizations. Actually, we were originally investigating mutual information related problems but have noticed a few unexpected shapes while performing visualizations. The observation has lead us to the idea of manipulating statistical properties of representations as summarized in this work. When this work is viewed as an effort to introduce a few new regularizers, indeed our new regularizers are just extensions of the existing ideas (well, but not trivial at all...). Rather, our intention was to investigate how statistical properties of representations are generally related to the learning. Our contributions were not clear in the original writing, and therefore we have added a paragraph at the end of conclusion to summarize the contributions. For your convenience, they are repeated below. \n\n*** The contributions of this work can be summarized as follow. First, a complete set of very simple regularizers for controlling sparsity, variance, and covariance of representations was presented. Among them, VR, cw-VR, and cw-CR have been designed and used for the first time and they work very well. The visualizations clearly show that the new regularizers are effective for manipulating statistical properties of representations in new ways. Secondly, by analyzing statistical properties in a quantitative way, we have shown that none of the popular regualrizers works in a distinct way. Even the well-known dropout does not control co-adaptation(covariance) only. In fact, sparsity and class-wise variance are affected together by dropout, and therefore it is difficult to claim if indeed reduction in co-adaptation is why dropout works well. Thirdly, we have provided partial results on which statistical properties can be helpful or harmful for different learning tasks (tasks with more labels, with more complexity, etc.). This part needs to be further investigated to see if general rules can be derived. ***\n\nWe regret that we have submitted a bit premature result, and we will work on a better version of this study. In any case, we truly appreciate your helpful and thoughtful comments.\n\n>> In measuring the ratio of dead units, I notice authors using the criterion of “not activated on all classes”. However, do you check this criterion over the whole epoch or just some mini-batches?\n==> The dead units were checked by using the entire test data set. The equations for deriving them can be found in Appendix C, and the writing in there was revised to improve its readability. \n", "Thanks for your feedback. We fully agree that DNN is different from channel coding problems, and a few reasons can be found in the original writing of the introduction section. Furthermore, we agree that our paper lacks enough analysis that can connect between coding theories and DNN (please see the above responses to AnonReviewer2). 
Nonetheless, our work provides a few contributions including 1. a complete set of very simple regularizers for controlling sparsity, variance, and covariance of representations (VR, cw-CR, and cw-CR have been designed and used for the first time and they work very well; as pointed out by AnonReviewer1, the visualizations show that they are very effective for manipulating statistical properties of representations), 2. observation/analysis for showing that none of the popular regualrizers works in a distinct way (even well-known dropout does not control co-adaptation(covariance) only; in fact, sparsity and class-wise variance are affected together by dropout, and therefore it is impossible to tell if indeed reduction in co-adaptation is why dropout works well), and 3. partial results on which statistical properties can be helpful or harmful for different learning tasks (tasks with more labels, with more complexity, etc.). We agree with your feedback that the contribution #3 should have been further investigated and better summarized. We are currently working on the issue. Overall, we agree with your assessment, and have made an update at the end of conclusion section to clarify the contributions. We will try to have a better version written in the coming few months. :) We truly appreciate your careful review. \n\nRegarding Belharbi et al. (2017), indeed the paper has a few overlappings with our work. The work focuses on reducing variation among same-class representations using a regularizer, and it also investigates layer dependency on applying the regularizer. The findings on layer dependency is consistent with our findings. From the technical perspective, our work is different because our regularizers are much simpler to implement (no sample pair-wise calculation) and because we designed and investigated a set of regularizers instead of a single regularizer. Our focus was to investigate statistical properties of representations and their effects on learning. We have updated our writing to introduce Belharbi et al. (2017) in a proper way, and we have also included Liao et al. (2016) that was pointed out by AnonReviewer1.\n", "Thanks for your feedback. The paper was motivated by well-known facts from coding theorems, and thus our main interest was to investigate the statistical properties of representations. The writing, however, was completed in a haste, and we agree that we failed to identify a strong connection to the coding theorem in a persuasive way. Unfortunately, this became clear to us only after reading feedbacks from the three reviewers. We actually had several different options for summarizing our findings, and now it looks like the current writing was not a good choice and needs a major revision. Such a revision might not be adequate for what is allowed during this review process, and therefore we have made only minor revisions for now.\n\n>> but, if I am right, these are results quoted on a single test set.\nFor MNIST, 5 tests were performed. For the others, single test was performed. We paid less attention to the number of tests, because our main point was not on performance improvement but on how a variety of regularizers (including a few that we have designed for the first time) behave in similar/different ways in terms of statistical properties while achieving a comparable or superior performance.\n" ]
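The dead-unit bookkeeping mentioned in the responses above (a unit counted as dead if it is not activated on any class, checked over the entire test set) can be computed along the following lines; this is one reading of the criterion, and the exact procedure in the paper's Appendix C may differ.

```python
import numpy as np

def dead_unit_ratio(activations, labels, threshold=0.0):
    """activations: (num_test_examples, num_units) post-activation outputs collected
    over the entire test set; labels: (num_test_examples,) class labels.
    A unit counts as dead if it is never activated above the threshold for any class."""
    classes = np.unique(labels)
    active_per_class = np.stack(
        [(activations[labels == c] > threshold).any(axis=0) for c in classes]
    )                                    # (num_classes, num_units)
    alive = active_per_class.any(axis=0)
    return 1.0 - alive.mean()
```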
[ 5, 5, 5, -1, -1, -1 ]
[ 5, 3, 4, -1, -1, -1 ]
[ "iclr_2018_rkQu4Wb0Z", "iclr_2018_rkQu4Wb0Z", "iclr_2018_rkQu4Wb0Z", "B1H8H5YxG", "HkSWIWqez", "BkuLn46ez" ]
iclr_2018_HkMhoDITb
Reinforcement Learning via Replica Stacking of Quantum Measurements for the Training of Quantum Boltzmann Machines
Recent theoretical and experimental results suggest the possibility of using current and near-future quantum hardware in challenging sampling tasks. In this paper, we introduce free-energy-based reinforcement learning (FERL) as an application of quantum hardware. We propose a method for processing a quantum annealer’s measured qubit spin configurations in approximating the free energy of a quantum Boltzmann machine (QBM). We then apply this method to perform reinforcement learning on the grid-world problem using the D-Wave 2000Q quantum annealer. The experimental results show that our technique is a promising method for harnessing the power of quantum sampling in reinforcement learning tasks.
rejected-papers
All three reviewers agreed that the paper was interesting, giving a demonstration of what quantum computers could achieve. However, they also felt that the topic was outside the main interests of the conference and better suited to other venues, e.g. a quantum computation workshop. The AC agrees with them. Thus, unfortunately, the paper cannot be accepted.
train
[ "BysYuqXxz", "rkW8HjOlz", "SJ22faFez", "BJsN2Ia7G", "rylNA8a7z", "r1Hv6IamG", "H12QoLpQG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "Summary: The paper demonstrates the use of a quantum annealing machine to solve a free-energy based reinforcement learning problem. Experimental results are demonstrated on a toy gridworld task, where if I understand correctly it does better than a DQN and a method based on RBM-free-energy approximation (Sallans and Hinton, 2004)\n\nClarity: The paper is very hard to read. It seems to be targeted towards a physics/quantum hardware crowd rather than a machine learning audience. I think most readers, even those very familiar with probabilistic models and RL, would find reading the paper difficult due to jargon/terminology and poorly explained concepts. The paper would need a major rewrite to be of interest to the ML community.\n\nRelevance: RL, probabilistic models, and function approximators are all relevant topics. However, the focus of the paper seems to be on parts (like hardware aspects) that are not particularly relevant to the ML community. I have a hard time imagining follow-up work on this, given that the experiments are run on a toy task and require specialized hardware (so they would be extremely difficult to reproduce/improve upon).\n\nSoundness: I can't judge the technical soundness as it is mostly outside my expertise. I wonder if the main algorithm the work is based on (Crawford et al, 2016) has been peer-reviewed (citation appears to be a preprint, and couldn't find a conference/journal version).\n\nSignificance: It's good to know that the the quantum annealing machine can be used for RL. However, the method of choice (free-energy based Q-function approximation) seems a bit exotic, and the experimental results are extremely underwhelming (5x3 gridworld).", "The paper is easy to read for a physicist, but I am not sure how useful it would be for ICLR... it is not clear for me it there is an interest for quantum problems in this conference. This is something I will let to the Area Chair to deceede. Other than this, the paper is interesting, certainly correct, and provides a nice perspective on the future of learning with quantum computers. I like the quantum \"boltzmann machine\" problems. \n\nI feel, however, but it might be a bit far from the main interest of the conference.\n\nComments:\n\n* What the authors called \"Free energy-based reinforcement learning\" seems to me just the minimization / maximiation of the free energy. This is simply maximum likelihood applied to the free energy and I think that calling it \"reinforcement learning\" is not only wrong, but also is very confusing, given this is usually reserved to an entirely different learning process.\n\n* While i liked the introduction of the quantum Boltzmann machine, I would be happy to learn what they can do? Are these useful, for instance, to study correlated fermions/bosons? The paper does not explain why one should be concerns with these devices.\n\n* The fact that the simulation on a classical computer agrees with the one on a quantum computer is promising, but I would say that this shows that, so far, there is not yet a clear advantage in using a quantum computer. This might change, but in the mean time, what is the benefits for the ICLR community?\n", "There is no scientific consensus on whether quantum annealers such as the D-Wave 2000Q that use the transverse-field Ising models yield any gains over classical methods (c.f. https://arxiv.org/abs/1703.00622). 
However, it is an exciting research area and this paper is an interesting demonstration of the feasibility of using quantum annealers for reinforcement learning. \n\nThis paper builds on Crawford et al. (2016), an unpublished preprint, who develop a quantum Boltzmann machine reinforcement learning algorithm (QBM-RL). A QBM consists of adding a transverse field term to the RBM Hamiltonian (negative log likelihood), but the benefits of this for unsupervised tasks are unclear (c.f. https://arxiv.org/abs/1601.02036, another unpublished preprint). QBM-RL consists of using a QBM to model the state-action variables: it is an undirected graphical model whose visible nodes are clamped to observed state-action pairs. The hidden nodes model dependencies between states and actions, and the weights of the model are updated to maximize the free energy or Q function (value of the state-action pair).\n\nThe authors extend QBM-RL to work with quantum annealers such as the D-Wave 2000Q, which has a specific bipartite graph structure and requires special consideration because it can only yield samples of hidden variables in a fixed basis. To overcome this, the authors develop a Suzuki-Trotter expansion and call it 'replica stacking', where a classical Hamiltonian in one dimension higher is used to approximate the quantum Hamiltonian. This enables the use of quantum annealers. The authors compare their method to standard baselines in a grid world environment.\n\nOverall, I do not want to criticize the work. It is an interesting proof of concept. But given the high price of quantum annealers, limited applicability of the technique, and unclear benefits of the authors' method, I do not think it is relevant to this specific conference. It may be better suited to a workshop specific to quantum machine learning methods. \n=======================================\n+ please add an algorithm box for your method. It deviates significantly from QBM-RL. For example, something like: (1) init weights of boltzmann machine randomly (2) sample c_eff ~ C from the pool of configurations sampled from the transverse-field Ising model using a quantum annealer with chimera graph (3) using the samples, calculate effective classical hamiltonian used to approximate the quantum system (4) use the weight update rules derived from Bellman equations (spell out the rules). \n\n+ moving the details of sampling into the appendix would help; they are not important for understanding the main ingredients of your method\n\nThere are so many moving parts in your system, and someone without a physics background will struggle to understand it. Clarifying the algorithm in terms familiar to machine learning researchers will go a long way toward helping people understand your method. \n\n+ the benefits of your method is unclear - it looks like the method works, but doesn't outperform the others. this is fine, but it is better to be straightforward about this and bill it as a 'proof of concept' \n\n+ perhaps consider rebranding the paper as something like 'RL using replica stacking for sampling from quantum boltzmann machines with quantum annealers'. Elucidating why replica stacking is a crucial contribution of your work would be helpful, and could be of broad interest in the machine learning community. Right now it is too dense to be useful for the average person without a physics background: what difficulties are intrinsic to a quantum Hamiltonian? What is the intuition behind the Suzuki-Trotter decomposition you develop? 
What is the 'quantum' Boltzmann machine in machine learning terms (hidden-hidden connections in an undirected graphical model!)? What is replica-stacking in graphical model terms (this would be a great ML contribution in its own right!)? Really spelling these things out in detail (or in the appendix) would help\n==========================================\n1) eq 14 is malformed\n\n2) references are not well-formatted\n\n3) need factor of 1/2 to avoid double counting in sums over nearest neighbors (please be precise)", "In response to your very helpful comments and suggestions, we provide a list of responses:\n\n- Crawford et al (2016) is peer reviewed, published in Quantum Information & Computation QIC Vol.18 No1&2, we have updated this reference in the bibliography to reflect this publication. This paper was also accepted and presented as a talk at the `Theory of Quantum Computation 2017’, and accepted and presented as a poster at `Adiabatic Quantum Computation 2017’.\n\n- On our toy model, our method outperforms the classical approaches investigated in this paper: the RMB-based method and the deep Q networks. It is worth clarifying that in Fig 6. SA Bipartite, SA Chimera, SQA Bipartite and SQA Chimera labels indicate `simulators’ of the behaviour of the quantum device and are not efficient as classical methods. The RBM curve in this figure is the only classical method. Fig 4 demonstrates the performance of the deep Q network which is another classical method considered for comparison. We agree that our experiments do not prove superiority of the proposed algorithm, as these benchmarks are on a small grid-world problem used to demonstrate a proof of concept. We have added addition comments into the body of the paper to emphasize this point.\n\n=======================================\n\n- We have added two algorithms, one for the FERL-QBM method, another explaining the replica stacking algorithm in detail.\n\nThank you for all the useful suggestions! We hope to have been able to address all of them by revisions to the paper, including the title change you advised. We have also edited our paper to reflect the technical formatting issues in formulas and references. With respect to the missing 1/2 factor to avoid double counting in sums over nearest neighbours, we have clarified that the summations run on choices of pairs to avoid possible confusions. \n", "\"Clarity: The paper is very hard to read. It seems to be targeted towards a physics/quantum hardware crowd rather than a machine learning audience. I think most readers, even those very familiar with probabilistic models and RL, would find reading the paper difficult due to jargon/terminology and poorly explained concepts. The paper would need a major rewrite to be of interest to the ML community.\"\n\n- We definitely want to make our work accessible to the ML community so we find your guidance extremely helpful. We have added more details about the algorithm. We added further explanations on classical and quantum Boltzmann machines, the similarities and differences between the two, and how the quantum BMs generalize their classical counterparts. We have used and referred to the terminology of “Reinforcement Learning with Factored States and Actions” by Brian Sallans and Geoffrey E. Hinton as much as possible for further clarity.\n\n\"Relevance: RL, probabilistic models, and function approximators are all relevant topics. 
However, the focus of the paper seems to be on parts (like hardware aspects) that are not particularly relevant to the ML community. I have a hard time imagining follow-up work on this, given that the experiments are run on a toy task and require specialized hardware (so they would be extremely difficult to reproduce/improve upon).\"\n\n- We hope that our work serves a small part in setting the ground foundations of new trends in applicability of quantum and exotic hardware in solving real-world machine learning problems. We foresee that a collaboration between the physics and ML community can be crucial to achieving this goal. The quantum computing community is working diligently on making the prototype hardwares accessible through cloud services and to provide user-friendly packages and APIs that hide the cumbersome details and encourage engagement of researchers from other disciplines. \n\n\"Soundness: I can't judge the technical soundness as it is mostly outside my expertise. I wonder if the main algorithm the work is based on (Crawford et al, 2016) has been peer-reviewed (citation appears to be a preprint, and couldn't find a conference/journal version).\"\n\n- Crawford et al (2016) is peer reviewed, being accepted for publication in Quantum Information & Computation (early 2018), presented as a talk at Theory of Quantum Computation (2017), and presented as a poster at Adiabatic Quantum Computation (2017).\n\n\"Significance: It's good to know that the the quantum annealing machine can be used for RL. However, the method of choice (free-energy based Q-function approximation) seems a bit exotic, and the experimental results are extremely underwhelming (5x3 gridworld).\"\n\n- Free-energy based Q-function approximation is an exotic choice for classical computations but a natural choice in the context of Quantum Annealers. We agree that the experimental results are underwhelming due to limitations to the existing prototype hardware and the computationally expensive classical simulations. Moving to larger and more useful benchmarks is definitely the most important future direction to this work. \n\nThank you again for your helpful review, we hope that we have been able to address your concerns.", "Although we work in a tangential field to that of machine learning we are excited to bring the progress in quantum machine learning to the attention of the ML experts in the ICLR community. The potential collaboration the can result from such exposition would be invaluable to investigating applicability of quantum computing in real-world computational tasks.\n\n\"What the authors called \"Free energy-based reinforcement learning\" seems to me just the minimization / maximiation of the free energy. This is simply maximum likelihood applied to the free energy and I think that calling it \"reinforcement learning\" is not only wrong, but also is very confusing, given this is usually reserved to an entirely different learning process. \"\n\n- We in fact solve Bellman’s optimality equation rather than minimization/maximization of the free energy. The temporal difference is what is minimized rather than maximum likelihood. The roll of free-energy is to act as a function approximation for the Q-function. Please also refer to “Reinforcement Learning with Factored States and Actions” by Brian Sallans and Geoffrey E. Hinton. We would love to hear your feedback if this clarifies what free-energy based reinforcement learning is meant here. 
Do you mean that the core idea of replica stacking can be applied not only to Reinforcement Learning but to other machine learning paradigms?\n\n\"While i liked the introduction of the quantum Boltzmann machine, I would be happy to learn what they can do? Are these useful, for instance, to study correlated fermions/bosons? The paper does not explain why one should be concerns with these devices.\"\n\n- The quantum Boltzmann machines are generalization of classical Boltzmann machines, so, in theory, QBMs can replace BMs in all the tasks in which BMs are used. However, the details of how to obtain the samples and other statistics required for the specific task need to be investigated since preparing and reading classical (digital) data from quantum systems is not straightforward: any measurement of a quantum system results collapse of the wavefunction which consequently \"stops\" the quantum algorithm. In this paper we work out such details for our specific task (i.e. sampling from the transverse-field Ising model). \n\n\" The fact that the simulation on a classical computer agrees with the one on a quantum computer is promising, but I would say that this shows that, so far, there is not yet a clear advantage in using a quantum computer. This might change, but in the mean time, what is the benefits for the ICLR community?\"\n\n- You’re definitely right that the agreement between the classical simulation and quantum experiment is a verifying step for the proposal here. However, the classical simulations are extremely slow and that is where the advantage in using a quantum computer may lie. Furthermore, even using Monte-Carlo methods only special quantum systems can be simulated classically (this is known as the sign-problem; See for instance arXiv:cond-mat/0408370.) For specific systems without sign problem, Monte-Carlo methods could be viable simulation methods if implemented on specialized hardware (e.g. ASICS). This can potentially result \"quantum-inpsired\" algorithms that run on classical hardware but take advantage of the extended representational power of the quantum model.\n\nThank you again for taking the time to review our work, your comments have given us a chance to clarify the intent.", "We have added more details about our replica stacking algorithm. We have added further explanations on classical and quantum Boltzmann machines, the similarities and differences between the two, and how the quantum Boltzmann machines generalize their classical counterparts. We have used and referred to the terminology of “Reinforcement Learning with Factored States and Actions” by Brian Sallans and Geoffrey E. Hinton as much as possible for further clarity." ]
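As a rough illustration of the free-energy-based RL loop discussed above, the sketch below uses a classical RBM in the spirit of Sallans and Hinton (2004): the negative free energy of the clamped state-action pair serves as the Q-function, and its parameters are updated with a SARSA-style temporal-difference rule. The quantum ingredients of the paper (replica stacking, annealer sampling of the effective classical Hamiltonian) are not reproduced here, and the sizes and hyperparameters are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, n_hidden = 15, 5, 16             # e.g. a small grid world; sizes are illustrative
W = rng.normal(scale=0.1, size=(n_states + n_actions, n_hidden))
b_v = np.zeros(n_states + n_actions)                   # visible biases (state and action units)
b_h = np.zeros(n_hidden)                               # hidden biases

def onehot(s, a):
    v = np.zeros(n_states + n_actions)
    v[s] = 1.0
    v[n_states + a] = 1.0
    return v

def q_value(v):
    """Q(s, a) := -F(s, a) for a clamped binary-hidden RBM; the softplus term
    marginalizes the hidden units exactly."""
    x = b_h + v @ W
    return v @ b_v + np.sum(np.logaddexp(0.0, x))

def sarsa_update(s, a, r, s2, a2, lr=0.01, gamma=0.99):
    """Temporal-difference update of the RBM parameters: grad_W Q = v_i * sigmoid(x_k)."""
    global W, b_v, b_h
    v, v2 = onehot(s, a), onehot(s2, a2)
    td = r + gamma * q_value(v2) - q_value(v)
    p_h = 1.0 / (1.0 + np.exp(-(b_h + v @ W)))
    W += lr * td * np.outer(v, p_h)
    b_v += lr * td * v
    b_h += lr * td * p_h
```

Action selection can then be, for example, epsilon-greedy over q_value(onehot(s, a)) evaluated for each candidate action.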
[ 4, 6, 4, -1, -1, -1, -1 ]
[ 3, 4, 3, -1, -1, -1, -1 ]
[ "iclr_2018_HkMhoDITb", "iclr_2018_HkMhoDITb", "iclr_2018_HkMhoDITb", "SJ22faFez", "BysYuqXxz", "rkW8HjOlz", "iclr_2018_HkMhoDITb" ]
iclr_2018_SksY3deAW
Learning Deep ResNet Blocks Sequentially using Boosting Theory
We prove a multiclass boosting theory for the ResNet architectures which simultaneously creates a new technique for multiclass boosting and provides a new algorithm for ResNet-style architectures. Our proposed training algorithm, BoostResNet, is particularly suitable in non-differentiable architectures. Our method only requires the relatively inexpensive sequential training of T "shallow ResNets". We prove that the training error decays exponentially with the depth T if the weak module classifiers that we train perform slightly better than some weak baseline. In other words, we propose a weak learning condition and prove a boosting theory for ResNet under the weak learning condition. A generalization error bound based on margin theory is proved and suggests that ResNet could be resistant to overfitting using a network with l_1 norm bounded weights.
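A simplified picture of the sequential training described in the abstract: each residual block is trained greedily on top of the frozen earlier blocks, with a small classification head, so every stage only back-propagates through a shallow network. This omits the boosting-specific parts of BoostResNet (the telescoping weak-learner weights, the example distribution, and the weak-learning check) and is only a sketch of the block-wise schedule; names and hyperparameters are assumptions.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, dim, hidden=128):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))

    def forward(self, x):
        return x + self.f(x)                       # identity skip: x_{t+1} = x_t + f_t(x_t)

def train_sequentially(blocks, head, loader, epochs_per_block=1):
    """Train one residual block (plus a shared linear head) at a time while all earlier
    blocks stay frozen, so each stage is a 'shallow ResNet' training problem."""
    frozen = []
    for block in blocks:
        opt = torch.optim.Adam(list(block.parameters()) + list(head.parameters()), lr=1e-3)
        for _ in range(epochs_per_block):
            for x, y in loader:
                with torch.no_grad():              # earlier blocks act as a fixed feature map
                    for fb in frozen:
                        x = fb(x)
                loss = nn.functional.cross_entropy(head(block(x)), y)
                opt.zero_grad()
                loss.backward()
                opt.step()
        frozen.append(block.eval())
    return frozen, head

# Hypothetical wiring: blocks = [ResBlock(64) for _ in range(10)]; head = nn.Linear(64, 10)
```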
rejected-papers
All three reviewers felt that the paper was just below the acceptance threshold, with scores of 5, 4, 5. R1 felt there were problems in the proofs, but the authors' rebuttal satisfactorily addressed this. R3 had an extended discussion with the authors, but did not revise their score from its initial value (5). R4 had concerns about the experimental evaluation that were not fully addressed in the rebuttal. With no reviewer advocating acceptance, the paper will unfortunately have to be rejected.
train
[ "BkUBAzFlG", "SJMDsFilM", "HJQdsudQM", "S1wa3wp7f", "HJTLF8nmf", "BJChJEi7M", "SyHTUGwfM", "SyUUmkDGG", "By4KyasWf", "rJOTfTjZf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author" ]
[ "Disclaimer: I reviewed this paper for NIPS as well and many of comments made by reviewers at that time still apply to this version of the paper as well, although presentation has overall improved.\n\nThe paper presents a boosting-style algorithm for training deep residual networks. Convergence analysis for training error is presented and analysis of generalization ability is also provided. Paper concludes with some experimental results.\n\nThe main contribution of this work is interpretation of ResNet as a telescoping sum of differences between the intermediate layers and treating these differences as weak learners that are then boosted. This indeed appears to an interesting insight about ResNet training.\n\nOn the other hand, one of the main objections during NIPS reviews was the relation of this work to work of Cortes et al. on Adanet. In particular, generalization bounds presented in this work are results taken from that paper (which authors admit). What is less clear is the distinction between the algorithmic approaches which makes it hard to judge the novelty of this work. There is a paragraph at the end of section 2 but it seems rather vague.\n\nOne other objection during NIPS reviews was experimental setup explanation of which is omitted from the current version. In particular, same learning rate and mini-batch size was used both for boosting and backprop algorithms which seems strange since boosting is supposed to train much smaller classifiers.\n\nAnother concern is practicality of the proposed method which seems to require maintaining explicit distribution over all examples which would not be practical for modern datasets where NNs are typically applied.\n", "Summary:\nThis paper considers a learning method for the ResNet using the boosting framework. More precisely, the authors view the structure of the ResNet as a (weighted) sum of base networks (weak hypotheses) and apply the boosting framework. The merit of this approach is to decompose the learning of complex networks to that of small to large networks in a moderate way and it uses less computational costs. The experimental results are good. The authors also show training and generalization error bounds for the proposed approach.\n\nComments: \nThe idea of the paper is natural and interesting. Experimental results are somewhat impressive. However, I am afraid that theoretical results in the paper contain several mistakes and does not hold. The details are below.\n\nI think the proof of Theorem 4.2 is wrong. More precisely, there are several possibly wrong arguments as follows:\n- In the proof, \\alpha_t+1 is chosen so as to minimize an upper bound of Z_t, while the actual algorithm is chosen to minimize Z_t. The minimizer of Z_t and that of an upper bound are different in general. So, the obtained upper bound does not hold for the training error of the actual algorithm. \n- It is not a mistake, but, there is no explanation why the equality between (27) and (28) holds. Please add an explanation. Indeed, equation (21) matters. \n\nAlso, the statement of Theorem 4.2 looks somewhat cheating: The statement seems to say that it holds for any iteration T and the training error decays exponentially w.r.t. T. However, the parameter T is determined by the parameter gamma, so it is some particular iteration, which might be small and the bound could be large. \n\nThe generalization error bound Corollary 4.3 seems to be wrong, too. More precisely, Lemma 2 of Cortes et al. is OK, but the application of Lemma 2 is not. 
In particular, the proof does not take into account the function \\sigma. In other words, the proof considers the Rademacher complexity R_m(\\calF_t) of the class \\calF_t, but, actually, I think it should consider R_m(\\sigma(\\calF_t)), where the class \\sigma(\\calF_t) consists of the composition of the functions \\sigma and f_t in \\calF_t. Talagrand’s lemma (see, e.g., Mohri et al.’s book: Foundations of Machine Learning) can be used to analyze the complexity of the composite class. But the resulting bound would depend on the Lipschitzness of \\sigma in an exponential way. \n\nThe explanation of the generalization ability is not sufficient. While the latter weak hypotheses are complex enough and would have large edges, the complexity of the function class of weak hypotheses grows exponentially w.r.t. the iteration T, which should be mentioned. \n\nIn summary, the paper contains nice ideas and the experimental results are promising, but it has non-negligible mistakes in the theoretical parts, which degrade the contribution of the paper.\n\nMinor Comments:\n- In Algorithm 1, \\gamma_t is not defined when the while-loop starts. So, the condition of the while-loop cannot be checked.\n\n \n", "This paper formulates the deep ResNet as a boosting algorithm. Based on this formulation, the authors prove that the generalization error bound decays exponentially with respect to the number of residual blocks. Further, a greedy block-wise training procedure is proposed to optimize ResNet-like neural networks. The authors claim that this algorithm is more efficient than the standard end-to-end backpropagation (e2eBP) algorithm in terms of time and memory consumption.\nOverall, the paper is well organized and easy to follow. I find using the boosting theory to analyze the ResNet architecture quite interesting. My concerns are mainly on the proposed BoostResNet algorithm.\n1.\tI don’t quite understand why the sequential training procedure is more time efficient than e2eBP. It is true that BoostResNet trains each block quite efficiently. However, there are T blocks that need to be trained sequentially. In comparison, e2eBP updates *all* the blocks at each training iteration.\n2.\tThe claim that BoostResNet is memory efficient may not hold in practice. I agree that the GPU memory consumption is much lower than in e2eBP. However, this only holds *under the assumption that the intermediate outputs of a previous block are stored to disk*. Unfortunately, this assumption is not practical for real problems: the intermediate outputs usually require much more space than the original datasets. What makes things worse, the widely used data augmentation techniques (horizontal flip, shift, etc.) would further increase the space requirement by hundreds or even thousands of times.\n3.\tThe results in Figure 2 seem quite surprising to me, as the ResNet architecture is supposed to be quite robust when the network goes deeper. Have you tried the convolutional ResNet structure used in their original paper?\n4.\tIn Figure 3, how did you measure the number of gradient updates? In the original ResNet paper, the number of iterations required to train a model is 164 (epochs) * 50000 (training samples) / 128 (batch size) = 6.4x10^5, which is far less than what is shown in this figure.\n5.\tIn Figure 3, it seems that the algorithms have not fully converged, and on CIFAR-10 e2eBP outperforms BoostResNet eventually. Is there any explanation? ", "Thank you for following up. 
We appreciate your time.\n\nWe would like to clarify the difference between boosting features and boosting labels. These two seem similar intuitively, however, there is no existing work that proves a boosting theory (guaranteed 0 training error) by boosting features. Moreover, the special structure that a ResNet has entails more complicated analysis: telescaping-sum boosting, which has never been introduced before in the existing literature. \n\nOne of our main contributions, as the reviewer said, is to interpret ResNet using a “new” boosting theory. The weak learners used here are different from the traditional weak learners. In that sense, we have introduced a general new boosting framework that could be used by many applications other than ResNet. \n\nAgain, we appreciate your evaluation and feedback. Thank you! ", "1. First of all, I think that discussion that you have in this paragraph is very helpful and would improve the paper a lot. Currently, both sec 1.2, sec. 3 and sec. 4, are rather vague on this.\n\nNow that we have clarified more or less clarified \"bushy\", I have a question about distinction of boosting \"features\" and boosting \"labels/classifiers\". It sounds to me that these are almost the same since these labels/classifiers can be thought of as features too in boosting.\n\nSo does that mean that main contribution of this work is to propose a new way of designing features/weak learners in boosting with particular emphasize on resnets perhaps? In my view giving a more precise formulation of these concepts can strengthen the paper quite a bit.\n\n2, 3 Thanks for clarifications.", "Thank you for your review. We would like to clarify the issues you pointed out as follows.\n\n1. The efficiency of BoostResNet is proved theoretically and justified empirically. Theoretically, the number of gradient updates required by BoostResNet is much smaller than e2eBP as discussed in Section 4.3. In practice, our experiments show significant improvement of computational efficiency as shown in figures 2 and 3. \n\n2. During training, we don't consider a sample and an augmented sample as different. At each batch of training, samples are randomly augmented (cropped and horizontally flipped) before backpropagation - we don't precompute all possible augmentations and store it. This is consistent with standard training, where all possible augmentations are not stored on disk before training.\n\n3. This is because the original ResNet paper uses convolutional modules, which enforce sparsity and have a regularizing effect. Figure two, as mentioned, is entirely fully-connected, as we said in the caption \"on multilayer perceptron residual network\". We compare BoostResNet with e2eBP on both residual convolutional networks and residual multilayer perceptron networks. \n\n4. We are measuring the number of gradient updates, not the number of training iterations. The number of gradient updates increases by one every time any parameter in the network is updated. \n\n5. We assure the readers that the results are fully converged. It is true that e2eBP eventually beats BoostResNet, however training using BoostResNet with small number of iterations and then fine tuning using e2eBP will be much more efficient than e2eBP alone. Because BoostResNet converges to some relatively good solution very fast. Due to the limited space, we move all the detailed discussions to Appendix H2. \n\nAgain, we thank the reviewer for the review. 
We would like to emphasize that the contribution of the paper is multiple folds as discussed in section 1.1. In particular, we hope to provide some training methods that have potential for training non-differentiable architectures (for example, tensor decomposition has been used to successfully train one layer neural network. Therefore we could potentially use BoostResNet where the training of one layer neural network is replaced by tensor decomposition). ", "Thank you again for your feedback and questions. We would like to clarify some issues as follows. \n\n1. The theory between the two is different. We emphasize that traditional boosting doesn't work in the Resnet setting. Traditional boosting ensembles \"estimated score functions\" (or even estimated labels) from weak learners. Our boosting ensembles \"features\" (representation from lower level layers). There is no boosting theory for ensembling features. \n\nWe are able to boost features by developing this new \"telescoping-sum boosting\" framework, one of our main contributions. We come up with the new weak learning condition for the telescoping-sum boosting framework. The algorithm is also very different from Adanet. These are explained in details in section 3 and 4 (the 8 page limit makes it hard to add another section to compare with Adanet, given that Adanet is so different from our algorithm). We rewrote the two sections (section 3 and 4) to make it clearer after receiving your comments from the nips submission. \n\nBy \"Bushy\" we mean the following: In Adanet, features (representations) from each lower layer have to be fed into a classifier (in other words, be transferred to score function in the label space). This is because Adanet uses traditional boosting, which ensembles score functions or labels. Therefore, the top classifier in Adanet has to be connected to all lower layers, making the structure bushy. Therefore Adanet chooses their own structure during learning, and their boosting theory does not necessarily work for a Resnet structure.\n\nOur contribution does not limit to explaining Resnet in the Boosting framework, we have also developed a new boosting framework for many other relevant tasks.\n\n2. Hyperparameters were found via random search, selected for highest accuracy on a validation set. We will add this to our paper. Thank you for your suggestion. \n\n3. This is done through a cost function, explained in the paper (equation (61)), and it is inexpensive to update according to our experiments. \n", "Thank you for you clarifications.\n\n1. I was referring to section 1.2 (not 2). I find there just the following sentences: \"Therefore, to obtain low training error guarantee, AdaNet maps the feature vectors (hidden layer representations) to a classifier space and boosts the weak classifiers. Our BoostResNet, instead, boosts representations (feature vectors) over multiple channels, and therefore produces a less “bushy” architecture.\"\nGiven connections between two papers, I think comparison with AdaNet deserves at least its own subsection where some of the notions in this paragraph are much clearer define.\nFor instance, what is the difference between mapping features to classifier space and boosting representations? Isn't mapping something to classifier space just another feature representation? And if there is any difference, why one is better than other?\nWhat does \"bushy\" mean? Why or when \"less\" bushy is better?\nOverall, is Adanet a special case of BoostResNet? Is it the other way around? 
are they uncomparable? Right now it is clear to me that theory for the two is the same and both of them boost NNs but the difference between the two need to be highlighted more.\n\n2. Appendix H does not say that you experimented with various hparams. It just states the ones you used. If you experimented with different ones then please explain how you selected these particular ones.\n\n3. I meant that to updated distribution one needs to compute partition function. This seems to be an expensive step since it basically requires a pass over data.", "Thank you for your detailed comments. Our proofs are correct---see below for specifics:\n\n1. Proof of Theorem 4.2. Our proof is correct after double checking. The \\alpha_{t+1} is chosen to minimize Z_t in the algorithm. Therefore Z_t, achieved by choosing \\alpha_{t+1} that minimizes Z_t, will be smaller than any other Z_t. In other words, Z_t | {\\alpha_t+1 = arg min {Z_t} } is less than or equal to Z_t | {\\alpha_t+1 = anything other than arg min{Z_t} }. Therefore the upper bounds on Z_t in equation (31) and equation (78) hold. \n\n2. Equation (21) is exactly the reason why the equality holds. We will add one line of explanation in the revised version. \nThe statement of Theorem 4.2 is (unfortunately) split across pages, so please make sure to read the entire theorem. Theorem 4.2 clearly states that the statement is true only when weak learning condition is satisfied, not for all T. The T required to achieve some error rate is certainly dependent on gamma as is specified in the complete theorem statement. \n\n3. If Lemma 2 of Cortes et al. is right, then our proof will be right as well. Because the hypothesis class (equation (2) in Cortes et al. ) in Cortes et al. is the same as the hypothesis class (equation (39) in our paper ) in our paper. The results in Cortes et al. have to do with the relu function which is 1-Lipschitz activation function. \n\n4. The generalization bound is stated explicitly in corollary 4.3. We suggest that small l_1 norm of weights help in terms of generalization. \n", "Thank you very much for your review for NIPS and ICLR. We did major modifications to the NIPS version of the paper to clarify the misunderstandings from the NIPS reviewers, which we didn't have a chance to address during NIPS rebuttal as we couldn't see all reviews due to the technical problems. We had rerun all experiments and had done hyperparameter optimization for each algorithm in the submitted ICLR version. \n\n1. To respond to your concern “What is less clear is the distinction between the algorithmic approaches which makes it hard to judge the novelty of this work. There is a paragraph at the end of section 2 but it seems rather vague.”\n\n- Section 2 does not talk about the algorithm. It only serves as an introduction of preliminaries to prepare the readers for resnet and boosting. Our algorithm is different from Cortes et. al as discussed in details in section 1.2. Please take a look at section 1.2 (at the bottom of page 2). \n\n2. To respond to your concern “One other objection during NIPS reviews was experimental setup explanation of which is omitted from the current version. In particular, same learning rate and mini-batch size was used both for boosting and backprop algorithms which seems strange since boosting is supposed to train much smaller classifiers.”\n\n- Our parameters are specified in the appendix and were optimized for performance. We experimented with various learning parameters for both the e2e resnet and the boostresnet. 
In BoostResNet, we found that the most important hyperparameters were those that govern when the algorithm stops training the current module and begins training its successor. We also found that a standard ResNet, to its credit, is quite robust to hyperparameters, namely the learning rate and learning rate decay, provided that we used an optimization procedure that automatically modulated these values (as mentioned, we used Adam). Changing these hyperparameters had a negligible effect on e2e model accuracy. \n\n3. To respond to your concern “Another concern is practicality of the proposed method which seems to require maintaining explicit distribution over all examples which would not be practical for modern datasets where NNs are typically applied.”\n\n- We don’t understand this comment---the additional memory requirement is just the number of classes * the number of examples, which would be about 100MB on ImageNet 20K. Modern machines typically have a factor of 100 more RAM.\n\nWe hope the reviewer could kindly reconsider the score and decision after reading our clarifications. \n" ]
[ 5, 4, 5, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, 3, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_SksY3deAW", "iclr_2018_SksY3deAW", "iclr_2018_SksY3deAW", "HJTLF8nmf", "SyHTUGwfM", "HJQdsudQM", "SyUUmkDGG", "rJOTfTjZf", "SJMDsFilM", "BkUBAzFlG" ]
iclr_2018_BJgPCveAW
Characterizing Sparse Connectivity Patterns in Neural Networks
We propose a novel way of reducing the number of parameters in the storage-hungry fully connected layers of a neural network by using pre-defined sparsity, where the majority of connections are absent prior to starting training. Our results indicate that convolutional neural networks can operate without any loss of accuracy at less than 0.5% classification layer connection density, or less than 5% overall network connection density. We also investigate the effects of pre-defining the sparsity of networks with only fully connected layers. Based on our sparsifying technique, we introduce the `scatter' metric to characterize the quality of a particular connection pattern. As proof of concept, we show results on CIFAR, MNIST and a new dataset on classifying Morse code symbols, which highlights some interesting trends and limits of sparse connection patterns.
rejected-papers
The paper received weak scores: 4,4,5. R2 complained about clarity. R3's point about the lack of fully connected layers in current SOA deepnets is very valid, and the authors' response is far from convincing. Unfortunately, the major revision provided by the authors was not commented on by the reviewers, and many of the major shortcomings of the work still remain. Overall, the paper is below the acceptance threshold, so it cannot be accepted.
train
[ "S1951kYef", "Bk-lFRWWz", "HyYy75NMG", "HyBhKWsXf", "HJhs-1o7G", "Syblgksmz", "r1wbCAcmM", "r12EC0cQG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author" ]
[ "This paper examines sparse connection patterns in upper layers of convolutional image classification networks. Networks with very few connections in the upper layers are experimentally determined to perform almost as well as those with full connection masks. Heuristics for distributing connections among windows/groups and a measure called \"scatter\" are introduced to construct the connectivity masks, and evaluated experimentally on CIFAR-10 and -100, MNIST and Morse code symbols.\n\nWhile it seems clear in general that many of the connections are not needed and can be made sparse (Figures 1 and 2), I found many parts of this paper fairly confusing, both in how it achieves its objectives, as well as much of the notation and method descriptions. I've described many of the points I was confused by in more detailed comments below.\n\n\nDetailed comments and questions:\n\n\nThe distribution of connections in \"windows\" are first described to correspond to a sort of semi-random spatial downsampling, to get different views distributed over the full image. But in the upper layers, the spatial extent can be very small compared to the image size, sometimes even 1x1 depending on the network downsampling structure. So are do the \"windows\" correspond to spatial windows, and if so, how? Or are they different (maybe arbitrary) groupings over the feature maps?\n\nAlso a bit confusing is the notation \"conv2\", \"conv3\", etc. These names usually indicate the name of a single layer within the network (conv2 for the second convolutional layer or series of layers in the second spatial size after downsampling, for example). But here it seems just to indicate the number of \"CL\" layers: 2. And p.1 says that the \"CL\" layers are those often referred to as \"FC\" layers, not \"conv\" (though they may be convolutionally applied with spatial 1x1 kernels).\n\nThe heuristic for spacing connections in windows across the spatial extent of an image makes intuitive sense, but I'm not convinced this will work well in all situations, and may even be sub-optimal for the examined datasets. For example, to distinguish MNIST 1 vs 7 vs 9, it is most important to see the top-left: whether it is empty, has a horizontal line, or a loop. So some regions are more important than others, and the top half may be more important than an equally spaced global view. So the description of how to space connections between windows makes some intuitive sense, but I'm unclear on whether other more general connections might be even better, including some that might not be as easily analyzed with the \"scatter\" metric described.\n\nAnother broader question I have is in the distinction between lower and upper layers (those referred to as \"feature extracting\" and \"classification\" in this paper). It's not clear to me that there is a crisply defined difference here (though some layers may tend to do more of one or the other function, such as we might interpret). So it seems that expanding the investigation to include all layers, or at least more layers, would be good: It might be that more of the \"classification\" function is pushed down to lower layers, as the upper layers are reduced in size. How would they respond to similar reductions?\n\nI'm also unsure why on p.6 MNIST uses 2d windows, while CIFAR uses 3d --- The paper mentions the extra dimension is for features, but MNIST would have a features dimension as well at this stage, I think? 
I'm also unsure whether the windows are over spatial extent only, or over features.", "The authors propose reducing the number of parameters learned by a deep network by setting up sparse connection weights in classification layers. Numerical experiments show that such sparse networks can have similar performance to fully connected ones. They introduce a concept of “scatter” that correlates with network performance. Although I found the results useful and potentially promising, I did not find much insight in this paper.\nIt was not clear to me why scatter (the way it is defined in the paper) would be a useful performance proxy anywhere but the first classification layer. Once the signals from different windows are intermixed, how do you even define the windows? \nMinor\nSecond line of Section 2.1: “lesser” -> less or fewer\n", "The paper seems to claims that\n1) certain ConvNet architectures, particularly AlexNet and VGG, have too many parameters,\n2) the sensible solution is leave the trunk of the ConvNet unchanged, and to randomly sparsify the top-most weight matrices.\nI have two problems with these claims:\n1) Modern ConvNet architectures (Inception, ResNeXt, SqueezeNet, BottleNeck-DenseNets and ShuffleNets) don't have large fully connected layers.\n2) The authors reject the technique of 'Deep compression' as being impractical. I suspect it is actually much easier to use in practice as you don't have to a-priori know the correct level of sparsity for every level of the network.\n\np3. What does 'normalized' mean? Batch-norm?\np3. Are you using an L2 weight penalty? If not, your fully-connected baseline may be unnecessarily overfitting the training data.\np3. Table 1. Where do the choice of CL Junction densities come from? Did you do a grid search to find the optimal level of sparsity at each level?\np7-8. I had trouble following the left/right & front/back notation.\np8. Figure 7. How did you decide which data points to include in the plots?", "Terminology changed - CL refers to connected layers, of which fully connected layer (FCL) is a special case. Our proposed technique of pre-defining a layer to be sparse by not having most of the connections will lead to a sparsely connected layer (SCL), which is also a special case of CL.\n\nSeveral portions of Section 2 have been changed to indicate that we used L2 regularization, batch normalization and grid search to decide the optimum junction densities for different networks.\n\nThe MNIST CL only simulations are done more extensively using several networks having varying number of hidden layers. Fig 2c has been bolstered as a result and new insights offered in Section 2.2.2, which indicate that large sparse networks perform better than small dense networks.\n\nSection 3 has been clarified to indicate the nature of windows in different junctions. In particular, left windows in a junction correspond to the dimensionality of the output of the left layer, while right windows in a junction correspond to the dimensionality of the input of the right layer. New subfigures (5a and 5b) have been added to illustrate this.\n\nThe dimensionality of the left- and right- window adjacency matrices has been corrected and Equation 1 fixed.\n\nInsights on scatter are offered in Section 3.2\n\nA new appendix section 5.3 has been added to explain possible reasons behind SCLs converging faster than FCLs, as shown in Fig 1.\n\nOther minor changes have been made which are not listed here due to space constraints. 
Some of these are in response to reviewers' comments, such as changing the terminology 'conv2' to 'conv+2CLs'. More details can be found in the comments below. Other changes are done to tighten the language and make the paper fit in 8 pages.", "Thank you for reviewing the paper and making insightful comments. We have addressed them and revised the paper to the best of our ability, and hope that your concerns are sufficiently addressed.\n\n\nComment: Modern ConvNet architectures (Inception, ResNeXt, SqueezeNet, BottleNeck-DenseNets and ShuffleNets) don't have large fully connected layers.\n\nResponse: We agree that some recent CNN architectures have attempted to reduce the number of FC layers. However, ResNeXt, DenseNet and ShuffleNet all have 1 final softmax FC layer, which account for approximately 3%, 28-48% and 12-30% of the overall parameters as per our calculations (the ranges indicate different DenseNet and ShuffleNet architectures). For Inception, FC parameters account for 74% in auxiliary classifier 0, 34% in auxiliary classifier 1, and 11% in the main classifier. Although these numbers are less than other architectures, we believe there are still significant savings to be achieved by reducing the density of these FC layers as per our other experiments given in the paper.\nNote that we assume the typical scenario where the outputs of the final convolutional layer are flattened before getting fully connected to the softmax classifier.\n\nSqueezeNet does not mention the use of any FC layer. Our ongoing work, as mentioned in the conclusion section of our submission, is exploring techniques to sparsify CNNs. Some methods already exist, such as depthwise convolutions (XCeption, Shufflenet) and grouping convolutions (AlexNet, ResNeXt, ShuffleNet). Note that using these methods to reduce conv params means that the fraction of FC params goes up, which further justifies our methods to sparsify FC layers.\n\n\nComment: The authors reject the technique of 'Deep compression' as being impractical. I suspect it is actually much easier to use in practice as you don't have to a-priori know the correct level of sparsity for every level of the network.\n\nResponse: One of our goals, as mentioned in the paper, is hardware acceleration of neural networks. In particular, some of the works we have cited such as Dey et al. (2017a;b) have leveraged pre-defined sparsity to simplify the memory and computational footprint of neural network hardware architectures capable of on-chip training and inference. Deep Compression uses post-training sparse methods such as pruning and quantization, which are unsuited for on-chip training. This is because the entire (non-sparse) architecture needs to be used for training, and then additional computation done to reduce parameters. This is why we propose starting off with a sparse architecture.\n\n\nQuestion: p3. What does 'normalized' mean? Batch-norm?\n\nResponse: Yes, we are referring to batch-normalization. We have modified the paper to clarify this.\n\n\nQuestion: p3. Are you using an L2 weight penalty? If not, your fully-connected baseline may be unnecessarily overfitting the training data.\n\nResponse: Yes we experimented with different values for L2 weight penalty coefficient and picked the optimum values. The paper has been revised to indicate this.\n\nQuestion: p3. Table 1. Where do the choice of CL Junction densities come from? Did you do a grid search to find the optimal level of sparsity at each level?\n\nResponse: Yes, we did a grid search. 
To simplify the search, we focused more on architectures with higher junction densities in the later (closer to output) layers. This is in accordance with our findings in Section 2.4. The paper has been revised to indicate this.\n\nComment: p7-8. I had trouble following the left/right & front/back notation.\n\nResponse: Layers closer to the input are ‘left’ and those closer to the output are ‘right’. Left to right indicates forward. Right to left indicates backward. We have modified the paper to explicitly mention this. For example, S_1f refers to the scatter value when going forward in junction 1, i.e. windows are formed in the input layer to the left, and data flows from them to neurons in the hidden layer to the right.\n\n\nQuestion: p8. Figure 7. How did you decide which data points to include in the plots?\n\nResponse: As mentioned in Section 3.2, we tried random and planned connection patterns. Several random connection patterns led to similar values for scatter, so we included only 1 of them. For the planned points, we distributed the connections in such a way so that certain junctions had perfect window-to-neuron connectivity, i.e. some values in the scatter vector would be 1. As mentioned, this invariably led to some other values being very low. The points included in the plots serve to highlight how all the scatter vector values are important, i.e. how a single low value can lead to bad performance.", "Thank you for reviewing the paper and making insightful comments. We have addressed them and revised the paper to the best of our ability, and hope that your concerns are sufficiently addressed.\n\n\nComment: Although I found the results useful and potentially promising, I did not find much insight in this paper. It was not clear to me why scatter (the way it is defined in the paper) would be a useful performance proxy anywhere but the first classification layer.\n \nResponse:\na) Let me explain by giving an example of a network with 3 CLs, connected as shown in this figure: https://drive.google.com/file/d/1tTGtdeyAwPvzbQ2YWeTQicDzm1RPn38q/view?usp=sharing\nIf we compute all the scatter vector values, S_f and S_b will be good because every output neuron is connected to every input neuron, i.e. the input-to-output connectivity is good. But this is not a good network because 2 of the 3 hidden neurons are being wasted and can be removed. The problem with this network is captured by the other scatter values S_1f, S_1b, S_2f and S_2b, which will be poor. This is why all the values in the scatter vector need to be considered, since some low values may lead to performance degradation, as shown in Fig. 7.\nThis is a toy example used for demonstration, but we simulated a larger example using a similar approach and obtained inferior performance. We hope this serves to explain why intermediate hidden layer connectivity is important.\n\nb) It has been shown in the literature that non-linearity is required in neural networks to improve their approximation capabilities, particularly for problems which are not linearly separable. Such non-linearity is captured by ReLU activations in the hidden layers. If we just take the scatter values involving the 1st CL, or just the scatter values of the input-output equivalent junction, we ignore the importance of non-linearity effect introduced by the hidden layers. As shown in Fig. 
7a), a network where the only low scatter value is S_1b = ⅛ performs equally badly as a network where the only low scatter value is S_2f = ⅛, even though the latter has good connectivity in the 1st CL. \n \n\nQuestion: Once the signals from different windows are intermixed, how do you even define the windows?\n \nAnswer: As shown in Fig. 6 (previously fig. 5), the windows in the hidden layers are groups of adjacent neurons. We follow this approach based on the assumption that we need good mixing overall, i.e. both individual junctions 1 and 2, need to be mixed, as well as the equivalent junction 1:2. This assumption is justified by the reasoning from the response to the previous comment. Thus, the entire scatter vector is important. This insight on scatter, along with a few others, have been included in Section 3.2 of the revised paper.\n \n\nComment: Minor Second line of Section 2.1: “lesser” -> less or fewer\n \nResponse: Thank you for pointing this out. We changed the word to ‘fewer’.", "Thank you for reviewing the paper and making insightful comments. We have addressed them and revised the paper to the best of our ability, and hope that your concerns are sufficiently addressed.\n\n\nQuestion: The distribution of connections in \"windows\" are first described to correspond to a sort of semi-random spatial downsampling, to get different views distributed over the full image. But in the upper layers, the spatial extent can be very small compared to the image size, sometimes even 1x1 depending on the network downsampling structure. So are do the \"windows\" correspond to spatial windows, and if so, how? Or are they different (maybe arbitrary) groupings over the feature maps?\n \nAnswer: As we have clarified in Section 3, windows in the left (right) layer of a junction will correspond to the dimensionality of the output (input) of that layer. For example, the input layer in an MNIST CL only network would have 2D windows, each of which might correspond to a fraction of the image, as shown in Fig. 5(a). When inputs to a CL have an additional dimension for features, such as in CIFAR or the MNIST conv network, each window is a cuboid capturing fractions of both spatial extent and features, as shown in Fig. 5(b).\nFor the spatial windows, nearby pixels have correlated information, so we hypothesize that each right neuron needs only 1 connection from each such spatial window. For different feature maps, the extent of correlation is unknown. So in their case, the grouping is arbitrary.\n \n\nQuestion: Also a bit confusing is the notation \"conv2\", \"conv3\", etc. These names usually indicate the name of a single layer within the network (conv2 for the second convolutional layer or series of layers in the second spatial size after downsampling, for example). But here it seems just to indicate the number of \"CL\" layers: 2. And p.1 says that the \"CL\" layers are those often referred to as \"FC\" layers, not \"conv\" (though they may be convolutionally applied with spatial 1x1 kernels).\n \nAnswer: We have made 2 changes to clear up the notation:\na) Layers which are conventionally fully connected, i.e. those which we aim to make sparse, are now being called connected layers (CLs). 
Fully connected layers (FCLs) and sparsely connected layers (SCLs) that we have proposed are both special cases of CLs.\nb) The notation ‘conv2’ has been changed to ‘conv+2CLs’, and similarly for ‘conv3’\n \n\nQuestion: The heuristic for spacing connections in windows across the spatial extent of an image makes intuitive sense, but I'm not convinced this will work well in all situations, and may even be sub-optimal for the examined datasets. For example, to distinguish MNIST 1 vs 7 vs 9, it is most important to see the top-left: whether it is empty, has a horizontal line, or a loop. So some regions are more important than others, and the top half may be more important than an equally spaced global view. So the description of how to space connections between windows makes some intuitive sense, but I'm unclear on whether other more general connections might be even better, including some that might not be as easily analyzed with the \"scatter\" metric described.\n \nAnswer: The main value of scatter lies in it being an indicator, i.e. if a network has high scatter, it will definitely perform well, and if there are multiple low values in the scatter bar vector, performance will generally be poor. But the metric has its limitations, such as uncertainty regarding exact bounds which guarantee a certain level of network performance. The predictive power of scatter is largely influenced by the chosen windows. We are currently working on improvements, such as using a priori dataset knowledge on how to choose windows and decide correlation between different spatial sections of an image and its features.", "Continued from the previous comment...\n\n\nQuestion: Another broader question I have is in the distinction between lower and upper layers (those referred to as \"feature extracting\" and \"classification\" in this paper). It's not clear to me that there is a crisply defined difference here (though some layers may tend to do more of one or the other function, such as we might interpret). So it seems that expanding the investigation to include all layers, or at least more layers, would be good: It might be that more of the \"classification\" function is pushed down to lower layers, as the upper layers are reduced in size. How would they respond to similar reductions?\n \nAnswer: This is a very good point - the exact function of different layers is not so clearly demarcated in very deep networks. As mentioned in the paper conclusion, the next step is to extend sparsity methodologies to convolutional layers. But note that conv layers are already sparse by definition (since a neuron in a layer connects to only a few in another layer). Hence we believe that the scope for significantly reducing parameters without adversely affecting performance is far greater in fully connected layers.\n \n\nQuestion: I'm also unsure why on p.6 MNIST uses 2d windows, while CIFAR uses 3d --- The paper mentions the extra dimension is for features, but MNIST would have a features dimension as well at this stage, I think? I'm also unsure whether the windows are over spatial extent only, or over features.\n \nAnswer: As clarified in the answer to your first question and in Section 3 and Figure 5 in the paper, the dimension of windows in a left (right) layer of a junction is the same as the dimension of its output (input). So for example, the input layer for an MNIST CL only network will have 2D windows, while the 1st CL in an MNIST convolutional network will have 3D windows." ]
[ 4, 5, 4, -1, -1, -1, -1, -1 ]
[ 3, 3, 3, -1, -1, -1, -1, -1 ]
[ "iclr_2018_BJgPCveAW", "iclr_2018_BJgPCveAW", "iclr_2018_BJgPCveAW", "iclr_2018_BJgPCveAW", "HyYy75NMG", "Bk-lFRWWz", "S1951kYef", "r1wbCAcmM" ]
iclr_2018_S1fHmlbCW
Neural Networks for irregularly observed continuous-time Stochastic Processes
Designing neural networks for continuous-time stochastic processes is challenging, especially when observations are made irregularly. In this article, we analyze neural networks from a frame theoretic perspective to identify the sufficient conditions that enable smoothly recoverable representations of signals in L^2(R). Moreover, we show that, under certain assumptions, these properties hold even when signals are irregularly observed. As we converge to the family of (convolutional) neural networks that satisfy these conditions, we show that we can optimize our convolution filters while constraining them so that they effectively compute a Discrete Wavelet Transform. Such a neural network can efficiently divide the time-axis of a signal into orthogonal sub-spaces of different temporal scale and localization. We evaluate the resulting neural network on an assortment of synthetic and real-world tasks: parsimonious auto-encoding, video classification, and financial forecasting.
rejected-papers
The scores were not favorable: 5,5,2. R2 felt the motivation of the paper was inadequate. R3 raised numerous technical points, some of which were addressed in the rebuttal, but not all. R3 continues to have issues with some of the results. The AC agrees with R3's concerns and feels that the paper cannot be accepted in its current form.
train
[ "r1wssyDlf", "ByeAk4weM", "HJZM5e9eM", "ryPpT3xEM", "S1RntVn7z", "BkO8q42mG", "SJvEcE2mG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "The authors proved that convolutional neural networks with Leaky ReLU activation function are nonlinear frames, and similar results hold for non-uniformly sampled time-series as well. My main concern on this part is that theory is too rough and its link to the later part of the paper is weak. Although frames are stable representations, the ones with lower bound much smaller than the upper bound are close to unstable. That's why in classical applications of frames in signal and image processing tight frames are vastly preferred. Furthermore, the authors did not explicitly state the reliance of the lower frame bound on the parameter alpha in Leaky ReLU. It seems to me that the representation gets more unstable as alpha decreases, and the lower bound will be zero when ReLU is used. \n\nIn Section 2, the authors used CMF conditions to constraint filters which leads to a much more stable representation than in the previous section. The idea is very similar to previous work on data-driven tight frame (Cai et al. Applied & Computational Harmonic Analysis, vol. 37, no. 1, p. 89-105, 2014) and AdaFrame (Tai and E, arXiv:1507.04835). Furthermore, there has been multiple work on introducing tight-frame-like constraints to filters of convolutional neural networks (see for example Huang et al., arXiv:1710.02338). All of these work is not mentioned by the authors. Although the CMF constraints used by the authors seem new, the overall novelty is still weak in my opinion.\n\nThe experimental results are convincing, and the proposed architecture with wavelet-transform LSTM outperform the baseline model using standard LSTM. However, I am not familiar with the state-of-the-art models on the data sets used in the paper. Therefore, I am not certain whether the proposed method achieves state-of-the-art or not. ", "Pros:\n- combination of wavelets & CNN\n\nCons:\n- lack of motivation\n\nI am not sure to understand the motivation of good reconstruction/homeomorphism w.r.t. the numerical setting or combination with a CNN. (except for the first experiment) ; I give some comments section per section\n\nSection 1:\nDefinition 1.1: why is it squared?\nDefinition 1.3: with this definition, \"the framing constants\" are not unique, so it should be \"some framing constants\"\nThere is a critical assumption to have an inverse, which is its stability. In particular, the ratio B/A is the quantity of interest. Indeed, a Gaussian filtering is Bi-Lipschitz-Invertible with this definition, yet, however it is quite hard to obtain the inverse which is not stable (and this is the reason why regularization is required in this inverse problem) Consequently, the assumption that CNNs are full rank does not really help in this setting(you can check the singular values). The conditioning is the good quantity to consider.\n\nThe Proposition 1.4 is trivial to prove, however I do not understand the following: \n\"With such vanishing gradients, it is possible to find series of inputsequences that diverge in l2(Z) while their outputs through the RNN are a Cauchy sequence\" \n\nHow would you prove it or do you have some numerical experiments to do so?\n\nSection 2:\nThe figure of the Theorem 2 is not really clear and could be improved. Furthermore, the 1x1 convolutions which follows the conjugate mirror filtering are not necessarily unitary.. 
This would require some additional constraints.\n\nSubsection2.3: \nThe modulus is missing in the first sentence (on the fourier transform)\n\nSection 3:\nI find great the first experiment (which seems to indicate this particular problem is well conditioned). Nevertheless, the second experiment claims to improve the accuracy of the task while reducing the parameters, however it would be great to understand why there is this improvement. Similarly the last problem is better handled by the haar basis, is it because it permits the NN to learns to denoise or is it a conditioning issue? My guess is that it is because this basis sparsify the input signal, but it would require some additional experiments, in particular to understand how the NN uses it.", "Summary\n\nThis article considers neural networks over time-series, defined as a succession of convolutions and fully-connected layers with Leaky ReLU activations. The authors provide relatively general conditions for transformations described by such networks to admit a Lipschitz-continuous inverse. They extend these results to the case where the first layer is a convolution with irregular sampling. Finally, they show that the first convolutional filters can be chosen so as to represent a discrete wavelet transform, and provide some numerical experiments.\n\n\nMain remarks\n\nWhile the introduction seemed promising, and I enjoyed the writing style, I was disappointed with this article.\n\n(1) There are many mistakes in the mathematical statements. First, in Theorem 1.1, I do not think that phi_L \\circ ... \\circ phi_1 \\circ F is a non-linear frame, because I do not see why it should be of the form of Definition 1.2 (what would be the functions psi_n?). For the same reason, I also do not understand Theorem 1.2. In Proof 1.4, the line of equalities after « Also with the Plancherel formula » is, in my opinion, not true, because the L^2 norm of a product of functions is not the product of the L^2 norms of the functions. It also seems to me that Theorem 1.3, from [Benedetto, 1992], is incorrect: it is not the limit of t_n/n that must be larger than 2R, but the limit of N_n/n (with N_n the number of t_i's that belong to the interval [-n;n]), and there must probably be a compatibility condition between (t_n)_n and R_1, not only between (t_n)_n and R. In Proposition 1.6, I think that the equality should be a strict inequality. Additionally, I do not say that Proof 2.1 is not true, but the fact that the undersampling by a factor 2 does not prevent the operator from being a frame should be justified.\n\n(2) The authors do not justify, in the introduction, why admitting a continuous inverse should be a crucial criterion of quality for the representation described by a neural network. Additionally, the existence of this continous inverse relies on the fact that the non-linearity that is used is a Leaky ReLU, which looks a bit like \"cheating\" to me, because the Lipschitz constant of the inverse of a Leaky ReLU, although finite, is large, so it seems to me that cascading several layers with Leaky ReLUs could encode a transformation with strictly positive, but still very poor frame bounds.\n\n(3) I also do not understand why having \"orthogonal outputs\", as in Section 2, is really desirable; I think that it should be better justified. 
Also, there are probably other ways to achieve orthogonality than using wavelets in the first layer, so the fact that wavelets achieve orthogonality does not really justify why using wavelets in the first layer is a good choice, compared to other filters.\n\n(4) I had understood in the introduction that the authors would explain how to define a (good) deep representation for data of the form (x_n)_{n\\in\\N}, where each x_n would be the value of a time series at instant t_n, with the t_n non-uniformly spaced. But all the representations considered in the article seem to be applicable to functions in L^2(\\R) only (like in Theorem 1.4 and Theorem 2.2), and not to sequences (x_n)_{n\\in\\N}. There is something that I did not get here.\n\n\nMinor remarks\n\n- Fourth paragraph, third line: \"this generalization frames\"?\n- Last paragraph before \"Contributions & Organization\": \"that that\".\n- Paragraph about notations: it seems to me that what is defined as l^2(R) is denoted as l^2(Z) after the introduction.\n- Last line of this paragraph: R^d_1 should be R^{d_1}, and R^d_2 R^{d_2}.\n- I think \"smooth\" could be replaced by \"continuous\" (smoothness implies a notion of differentiability).\n- Paragraph before Proposition 1.1: \\sqrt{s} is not defined, and \"is supported\" should be \"are supported\".\n- Theorem 1.1: the f_k should be phi_k.\n- Definition 1.4: \"piece-linear\" -> \"piecewise linear\"?\n- Lemma 1.2 and Proof 1.4: there are indices missing to \\tilde h and \\tilde g.\n- Proof 1.4: \"and finally\" -> \"And finally\".\n- Proof 1.5: I do not understand the grammatical structure of the second sentence.\n- Proposition 1.4: the definition of a RNN is the same as definition 1.2 (except for the frame bounds); I do not see why such transformations should model RNNs.\n- Paragraph before Proposition 1.5: \"in,formation\".\n- Proposition 1.6: it should be said on which space the frame is injective.\n- On page 8, \"Lipschitz\" is erroneously written (twice).\n- Proposition 1.7: \"ProjW,l\"?\n- Definition 2.1: in the \"nested\" property, I think that the inclusion should be the other way around.\n- Before Theorem 2.1, the sentence \"Such Riesz basis is proven\" is unclear to me.\n- Theorem 2.1: \"filters convolution filters\".\n- I think the architecture described in Theorem 2.2 could be clarified; I am not exactly sure where all the arrows start from.\n- First line of Subsection 2.3: \". is always\" -> \"is always\".\n- First paragraph of Subsection 3.2: \"the the\".\n- Paragraph 3.2: could the previous algorithms developed for this dataset be described in slightly more detail? I also do not understand the meaning of \"must solely leverage the temporal structure\".\n- I think that the section about numerical experiments could be slightly rewritten, so that the architecture used in each experiment is clearer. In Paragraph 3.2 in particular, I did not get why the architecture presented in Figure 6 has far fewer parameters than the one in Figure 5; it would help if the authors clearly precised how many parameters each layer contains.\n- Conclusion: \"we can to\" -> \"we can\".\n- Definition 4.1: p_v(s) -> p_v(t).", "Dear authors,\n\nThank you for your reply, and thank you for correcting various typos. I however do not agree with all your corrections.\n\nTheorem 1.1:\nI agree that the composed operator is from L2(R) to l2(Z). 
However, not all operators from L2(R) to l2(Z) are of the form f -> (psi_n(\\scal{f}{S_n}))_n, for some family of elements S_n in L2(R) and some family of operators psi_n, as in Definition 1.2.\n\nProof 1.4:\nThe product of L2 norms of functions defined over [0;1] is not an upper bound to the norm of the product, and the inequality ||h* x||_2 <= ||h||_2 ||x||_2 is not true for all h and x (an easy counter-example is h=x=\\delta_0+\\delta_1).\n\nProposition 1.6:\nI still do not get it. if R = 1/2 mu/(1-int \\phi), then, from Propositon 1.5, N_t/t goes to 2R when t goes to infinity. Then n/t_n also goes to 2R when n goes to infinity, and since R1 > R, the limit of n/t_n is not larger than 2R1 as required by Theorem 1.3.\n\nAbout the motivations, I still think that they should be developed. Why is it important for a neural network to admit a continuous inverse? And about orthogonal outputs, I am not sure that getting rid of redundancy is necessary, or desirable; I think it should be discussed.", "We thank the reviewer for the careful examination of the paper and apologize for the many typos that have hampered the reading.\n\nMain remarks:\n\n(1)There are mistakes in the mathematical statements.\nWhile we agree that there were several key typos, we think our claims are correct.\n\nA) Theorem 1.1:\nF is a function from L2(R) to l2(Z) (which defines the psi_n) while phi_1 … phi_L are functions from l2(Z) to l2(Z). Therefore the overall composed operator is from L2(R) to l2(Z).\n\nB) Proof 1.4 presents a typographic error but remains true. Instead of an equality there is an inequality that stems from the fact that the L2 norm is algebraic and therefore the product of the norms is an upper bound to the norm of the product.\n\nC) Indeed, t_n / n is a typo. The ratio of interest is n/t_n and the constant is R_1.\n\nD) There is equality here because we defined R_1 > R and had the condition in theorem 1.3 depend on R.\n\nE) As explained in proof 2.1, the discrete wavelet transform preserve framing thanks to a careful down sampling scheme through a mirror filter bank. The critical conditions (in particular the mirroring filters) from the Mallat-Meyer theorem (which can be found in Mallat 2008) on the filters guarantee that framing is preserved.\n\n(2) The article extends the study of homeomorphic properties from the linear to the non-linear case. In particular we show that the theory of frames which was mainly defined for linear operators can be employed to better understand non-linear functions.\nThe leaky relus we employ have a leakiness factor of 0.1 and therefore their lipschitz constant is only 10.\n\n(3) Having orthogonal outputs is a sufficient condition to ensure that there are no linear level redundancies in the representation. Although other orthogonal transforms such as Fourier or Hadamard transforms could be employed, we rely on the general orthogonality of Wavelet basis.\n\n(4) As explained in the introduction of the article the input we consider is a function of L2(R) (continuous time object) and we study the impact of sequential sampling (observation is in l2(Z)) in the setting of non-linear operators.\nThe very focus of the article is indeed the fact that most time series exist as continuous time objects (temperature, latent sentiment, location) but are only observed as discrete sequences. 
The article examines the consequences of discrete sampling on the representation of a continuous time latent process.\n\nMinor remarks\nTypographic errors have been corrected.\n\n- I think \"smooth\" could be replaced by \"continuous\" (smoothness implies a notion of differentiability).\nR: Here we ask for Lipschitz continuity which is stronger than continuity.\n\n- Paragraph before Proposition 1.1: \\sqrt{s} is not defined, and \"is supported\" should be \"are supported\".\nR: The typographic errors have been corrected, sqrt(s) is now sqrt(delta t).\n\n- Proof 1.5: I do not understand the grammatical structure of the second sentence.\nR: “are also bi-Lipschitz” has been added at the end of the sentence.\n\n- Proposition 1.4: the definition of a RNN is the same as definition 1.2 (except for the frame bounds); I do not see why such transformations should model RNNs.\nR: An index has been added which was missing and now makes the difference between RNNs and non-linear frames explicit.\n\n- Proposition 1.6: it should be said on which space the frame is injective.\nR: We made the condition on the support of the Fourier transform of the functions of interest explicit.\n\n- Proposition 1.7: \"ProjW,l\"?\nR: The typographic error has been corrected. The W is now in index as intended while the scale l is presented as an exponent.\n\n- Definition 2.1: in the \"nested\" property, I think that the inclusion should be the other way around.\nR: The typographic error has been corrected.\n\n- Before Theorem 2.1, the sentence \"Such Riesz basis is proven\" is unclear to me.\nR: The typographic error has been corrected, “to exist” was missing.\n\n- Paragraph 3.2: could the previous algorithms developed for this dataset be described in slightly more detail? I also do not understand the meaning of \"must solely leverage the temporal structure\".\nR: In order to clarify our sentence, we added “In particular, the raw video feed is not available.” We also added “A thorough description of the baselines we employ is available in~\\cite{abu2016youtube}.”\n\n- I think that the section about numerical experiments could be slightly rewritten, so that the architecture used in each experiment is clearer. In Paragraph 3.2 in particular, I did not get why the architecture presented in Figure 6 has far fewer parameters than the one in Figure 5; it would help if the authors clearly precised how many parameters each layer contains.\nR: We added: “which results in a decrease of the total number of parameters in the recurrent layers by a factor of d”.\n\nAgain we thank the reviewer for helping us improve the paper.", "We thank the reviewer for the careful examination of the paper and apologize for the many typos that have hampered the reading.\nWe we will now respond to the comments and remarks point by point.\n\nThe reviewer’s remark on the role of alpha in Leaky ReLUs is quite accurate indeed. Although it is clear that a lower alpha means that our framing bounds will be less well conditioned, it is difficult to control the effect of the leaky part of the ReLU layer on conditioning when multiple such layers are employed. In particular, depending on the input the layers may or may not be in the leaky region of their support.\n\nAs highlighted by the references given by the reviewer the tightness of a frame is of key importance when it comes to controlling the stability of the representation. 
In the present paper we only give properties on the smoothness of the representation because we explicitly consider a setting in which observations are observed irregularly and randomly through a Hawkes process. Our concern is therefore slightly different in that we attempt at tackling some properties of representations of randomly observed stochastic processes as observations go through a pipeline of non-linear operators.\n\nAgain we thank the reviewer for helping us improve the paper.", "We thank the reviewer for the careful examination of the paper and apologize for the many typos that have hampered the reading.\nWe we will now respond to the comments and remarks point by point:\n\nSection1:\nDefinition 1.1: framing conditions are generally expressed in terms of energy hence the square.\n\nDefinition 1.3: framing constant are indeed not unique in our definition and the issue of lack of uniqueness has now been corrected. The conditioning number is indeed the core quantity of interest in the traditional setting of linear frames. As we delve into the non-linear setting we establish less stringent conditions that only attempt at guaranteeing homeomorphic properties.\nProposition 1.4: the remark was a reference to the issues highlighted in the literature on RNNs concerning vanishing gradients. We now refer the reader to Bengio et al. 1993 in order to give the background of the remark.\n\nSection 2:\nWe added the following remark at the end of 2.2: “A key point here is that the 1x1 convolutions operate in depth and not along the axis of time which preserves the properties of the Discrete Wavelet Transform.”\n\nSubsection 2.3:\nWe thank the reviewer for having noticed the typographic error, it has been corrected.\n\nSection 3:\nThe remarks of the reviewer about the need to better delineate the effects underlying the improvements we noticed highlight a key shortcoming of the experiments we have conducted. Our experimental setup is designed to provide evidence of gains in our representation. However, as with many evaluations of highly nonlinear deep neural networks it is difficult to resolve the precise gains due to individual changes.\n\nAgain we thank the reviewer for helping us improve the paper.\n" ]
[ 5, 5, 2, -1, -1, -1, -1 ]
[ 5, 4, 3, -1, -1, -1, -1 ]
[ "iclr_2018_S1fHmlbCW", "iclr_2018_S1fHmlbCW", "iclr_2018_S1fHmlbCW", "S1RntVn7z", "HJZM5e9eM", "r1wssyDlf", "ByeAk4weM" ]
iclr_2018_SkrHeXbCW
Learning Representations for Faster Similarity Search
In high dimensions, the performance of nearest neighbor algorithms depends crucially on structure in the data. While traditional nearest neighbor datasets consisted mostly of hand-crafted feature vectors, an increasing number of datasets comes from representations learned with neural networks. We study the interaction between nearest neighbor algorithms and neural networks in more detail. We find that the network architecture can significantly influence the efficacy of nearest neighbor algorithms even when the classification accuracy is unchanged. Based on our experiments, we propose a number of training modifications that lead to significantly better datasets for nearest neighbor algorithms. Our modifications lead to learned representations that can accelerate nearest neighbor queries by 5x.
rejected-papers
The paper received three good-quality reviews, which were in agreement that the paper was below the acceptance threshold. The authors are encouraged to follow the suggestions from the reviews to revise the paper and resubmit it to another venue.
val
[ "ry39-uBxG", "BkcUX-5eG", "rJIi7bcgM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "\nThe context is indexing images with descriptor vectors obtained from a DNN. This paper studies the impact of changing the classification part on top of the DNN on the ability to index the descriptors with a LSH or a kd-tree algorithm. The modifications include: applying or not a ReLU and batch normalization (and in which order) and normalizing the rows of the last FC layer before softmax. \n\n+ : preparing good features for indexing has not been studied AFAIK, and empirical studies that pay attention to details deserve to be published, see eg. the series \"all about vlad\" and \"Three things everyone should know to improve object retrieval\" by Arandjelović, Zisserman \n\n+/- : the paper considers two indexing methods (kd-tree and LSH) but basically it evaluates how well the features cluster together in descriptor space according to their class. Therefore it should be applicable to more SOTA techniques like product quantization and variants.\n\n- : there is no end-to-end evaluation, except figure 7 that is not referenced in the text, that that has a weird evaluation protocol (\"the probability of finding the correct nearest neighbor conditioned on the model being correct\")\n\n- : the evaluation context is supervised hashing, but the evaluation is flawed: when the same classes are used for evaluation as for training, there is a trivial encoding that consists in encoding the classifier output (see \"How should we evaluate supervised hashing?\" Sablayrolles et al). \n\n- : no comparison with the SOTA, both for the experimental setup and actual results. There are plenty of works that do feature extraction + indexing, see \"Deep Image Retrieval: Learning global representations for image search\", Gordo et al ECCV'16, \"Neural codes for image retrieval\", Babenko et al ECCV'14, \"Large-Scale Image Retrieval with Attentive Deep Local Features\", Noh et al ICCV'17, etc. \n\n\nDetails: \n\nexample x comes from a domain X typically R^p --> X = R^p but is it an image or an embedding? does the training backprop on the embedding?\n\ntop equation of p4 -> if you don't use it don't introduce it\n\nsection 3.2 is a bit disappointing. After 3.1, the natural / \"ML-correct\" way of handing this would be to design a loss that enforces the desirable properties, but this is just a set of tricks, albeit carefully justified\n\nSection 4 could use some clarification. Is the backprop applied to the embedding (\"we convert each frame to its VGG features\")? \"d taken between 256 and 1024\": which one? \n\nMore importantly, the whole section looks like a parametric evaluation with an intermediate objective (mean angle). \n\n\nOverall I think the paper does not have a significant enough contribution or impressive enough results to be published. \n\n\n", "This paper investigates learning representations for the problem of nearest neighbor (NN) search by exploring various deep learning architectural choices. The crux of the paper is the connection between NN and the angles between the closest neighbors -- the higher this angle, more data points need to be explored for finding the nearest one, and thus more computational expense. Thus, the paper proposes to learn a network that tries to reduce the angles between the inputs and the corresponding class vectors in a supervised framework using softmax cross-entropy loss. 
Three architectural choices are investigated: (i) controlling the norm of output layers of the CNN (using batch norm essentially), (ii) removing ReLU so that the outputs are well-distributed in both positive and negative orthants, and (iii) normalizing the class vectors. Experiments are given on multiMNIST and Sports 1M and show improvements. \n\nPros:\n1) The paper explores different architectural choices for the deep network in some depth and shows extensive results.\n2) The results clearly demonstrate the advantage of the various choices and are useful.\n3) The theoretical connections between data angles and query times are quite interesting.\n\nCons:\n1) Unclear Problem Statement. \nI find the problem statement a bit vague. Standard NN search finds a data point in the database closest to a query under some distance metric. While the current paper uses the cosine similarity as the distance, the deep framework is trained on class vectors using cross-entropy loss. I do not think class labels are usually assumed to be given in the standard definition of NN, and it is not clear to me how the proposed setup can accommodate NN without class labels. Thus, as such, I see this paper as perhaps proposing a classification problem and not an NN problem per se. \n\n2) Lacks Focus\nThe paper lacks good organization in my opinion. Things that are perhaps technically important are moved to the Appendix. For example, I find the theoretical part of the paper (e.g., Theorem 1) quite elegant and perhaps the main innovation in this paper. However, that is moved completely to the Appendix, so it cannot really be considered a contribution. It is also not clear if those theoretical results are novel. \n\n3) Disconnect/Unclear Assumptions\nThere seems to be some disconnect between the LSH and deep learning architectures explored in Sections 2 and 3, respectively. Are the assumptions used in the theoretical results for LSH also assumed in the deep networks? For example, as far as I know, standard LSH work assumes the projection hyperplanes are randomly chosen, and the theoretical results are based on such assumptions. It is not clear how a softmax output of a CNN, which is trained in a supervised way, follows such assumptions. It would be important if the paper could clarify such assumptions to make sure the sections are congruent. \n\n4) No Related Work\nThere have been several efforts to adapt deep frameworks to KNN. The paper ignores all such works. Thus, it is not clear how significant the proposed contribution is. There are also no comparisons whatsoever to competitive prior works.\n\n5) Novelty\nThe main contribution of this paper is basically a set of experiments looking into architectural choices. However, the results of this study do not provide any surprises. It appears that batch normalization is essential for good performance, while using ReLU is not, when one wants to use all directions for effective data encoding. As such, the novelty and the contributions of this paper are minor.\n\nOverall, while I find there are some interesting theoretical bits in this paper, it lacks focus, the experiments do not offer any surprises, and there are no comparisons with prior literature. Thus, I do not think this paper is ready to be accepted in its present form.\n", "The authors are trying to improve the efficiency of similarity search on representations learned by a deep network, but it is somewhat unclear where the proposed solution will be applied and how. 
The idea of modifying the network learning process to obtain representations that allow for faster similarity search definitely has a lot of value. I believe that the manuscript needs some re-writing so that the problem(s) are better motivated and the paper is easier to follow.\n\nSpecific comments:\n- Page 2, Sec 2.1, 2.2: The theoretical/empirical analysis in Section 2 has actually been properly formalized in the papers by Sanjiv Kumar et al. [a] and Kaushik Sinha et al. [b] on relative contrast and related quantities. It would be good to discuss the proposed quantity in reference to these existing quantities. The idea presented here appears too simplistic relative to the existing ones.\n- Page 2, Sec 2.1: Usually in NNS, the data is not \"well-spread\" and has an underlying intrinsic structure. And being able to capture this intrinsic structure is what makes NNS more efficient. So how valid is this \"well-spread\"-ness assumption in the setting that is being considered? Is this common in the \"learned representations\" setup?\n- Page 4, After Eq 2: I think the properties 1,2 are only true if you are using softmax to predict the label and care about predictive 0-1 accuracy. Is that the only place the proposed solution is applicable, or am I misunderstanding something?\n- Figures 2, 4 and 7 don't seem to be referenced anywhere.\n- Is the application of NNS in performing the softmax evaluation? This needs to be made clearer.\n- If the main advantage of the proposed solution is the improvement of training/testing time by solving angular NNS (instead of MIPS) during the softmax phase, a baseline using an existing MIPS solution [c] needs to be considered to properly evaluate the utility of the proposed solution.\n\n[a] He, Junfeng, Sanjiv Kumar, and Shih-fu Chang. \"On the Difficulty of Nearest Neighbor Search.\" Proceedings of the 29th International Conference on Machine Learning (ICML-12). 2012.\n\n[b] Dasgupta, Sanjoy, and Kaushik Sinha. \"Randomized partition trees for exact nearest neighbor search.\" Conference on Learning Theory. 2013.\n\n[c] Neyshabur, Behnam, and Nathan Srebro. \"On Symmetric and Asymmetric LSHs for Inner Product Search.\" Proceedings of the 32nd International Conference on Machine Learning (ICML-15). 2015." ]
[ 4, 4, 4 ]
[ 5, 5, 4 ]
[ "iclr_2018_SkrHeXbCW", "iclr_2018_SkrHeXbCW", "iclr_2018_SkrHeXbCW" ]
iclr_2018_B1mAkPxCZ
VOCABULARY-INFORMED VISUAL FEATURE AUGMENTATION FOR ONE-SHOT LEARNING
A natural solution for one-shot learning is to augment the training data to handle the data deficiency problem. However, directly augmenting in the image domain may not necessarily generate training data that sufficiently explores the intra-class space for one-shot classification. Inspired by recent vocabulary-informed learning, we propose to generate synthetic training data under the guidance of the semantic word space. Essentially, we train an auto-encoder as a bridge to enable the transformation between the image feature space and the semantic space. Besides directly augmenting image features, we transform the image features to the semantic space using the encoder and perform the data augmentation there. The decoder then synthesizes the image features for the augmented instances from the semantic space. Experiments on three datasets show that our data augmentation method effectively improves the performance of one-shot classification. An extensive study shows that data augmented from the semantic space are complementary to those from the image space, and thus boost the classification accuracy dramatically. Source code and datasets will be made available.
rejected-papers
Two reviewers recommended rejection, and one was on the edge. There was no rebuttal to address the concerns and questions posed by the reviewers.
train
[ "Sk5zOVceG", "SyBcch5lM", "H1Wo7H2Zf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a feature augmentation method for one-shot learning. The proposed approach is very interesting. However, the method needs to be further clarified and the experiments need to be improved. \n\nDetails:\n1. The citation format used in the paper is not appropriate, which makes the paper, especially the related work section, very inconvenient to read. \n\n2. The approach:\n(1) Based on the discussion in the related work section and the approach section, it seems the proposed approach proposes to augment each instance in the visual feature space by adding more features, as shown by [x_i; x_i^A] in 2.3. However, under one-shot learning, won’t this make each class still have only one instance for training? \n\n(2) Moreover, the augmenting features x_i^A (regardless A=F, G, or H), are in the same space as the original features x_i. Hence x_i^A is rather an augmenting instance than additional features. What makes feature augmentation better than instance augmentation? \n\n(3) It is not clear how will the vocabulary-information be exploited? In particular, how to ensure the semantic space u to be same as the vocabulary semantic space? How to generate the neighborhood in Neigh(\\hat{u}_i) on page 5? \n\n3. In the experiments: \n(1) The authors didn’t compare the proposed method with existing state-of-the-art one-shot learning approaches, which makes the results not very convincing. \n\n(2) The results are reported for different numbers of augmented instances. Clarification is needed. \n", "This paper proposes a (new) semantic way for data augmentation problem, specifically targeted for one-shot learning setting, i.e. synthesizing training samples based on semantic similarity with a given sample . Specifically, the authors propose to learn an autoencoder model, where the encoder translates image data into the lower dimensional subspace of semantic representation (word-to-vec representation of image classes), and the decoder translates semantic representation back to the original input space. For one-shot learning, in addition to a given input image, the following data augmentation is proposed: a) perturbed input image (Gaussian noise added to input image features); b) perturbed decoded image; c) perturbed decoded neighbour image, where neighbourhood is searched in the semantic space. \nThe idea is nice and simple, however the current framework has several weaknesses:\n1. The whole pipeline has three (neural network) components: a) input image features are extracted from VGG net pre-trained on auxiliary data; 2) auto-encoder that is trained on data for one-shot learning; 3) final classifier for one-shot learning is learned on augmented image space with two (if I am not mistaken) fully connected layers. This three networks need to be clearly described; ideally combined into one end-to-end training pipeline.\n2. The empirical performance is very poor. If you look into literature for zero shot learning, work by Z. Akata in CVPR 2015, CVPR2016, the performance on AwA and on CUB-bird goes way above 50%, where in the current paper it is 30.57% and 8.21% at most (for the most recent survey on zero shot learning papers using attribute embeddings, please, refer to Zero-Shot Learning - The Good, the Bad and the Ugly by Xian et al, CVPR 2017). It is important to understand, why there is such a big drop in performance in one-shot learning comparing to zero-shot learning? 
One possible explanation is as follows: in the zero-shot learning, one has access to large training data to learn the semantic embedding (training classes). In contrary, in the proposed approach, the auto-encoder model (with 10 hidden layers) is learned using 50 training samples in AwA, and 200 images of birds (or am I missing something?). I am not sure, how can the auto-encoder model not overfit completely to the training data instances. Perhaps, one could try to explore the zero-shot learning setting, where there is a split between train and test classes: training the autoencoder model using large training dataset, and adapting the weights using single data points from test classes in one-shot learning setting. \nOverall, I like the idea, so I am leaning towards accepting the paper, but the empirical evaluations are not convincing. \n\n \n\n \n\n ", "Summary:\nThis paper proposes a data augmentation method for one-shot learning of image classes. This is the problem where given just one labeled image of a class, the aim is to correctly identify other images as belonging to that class as well. \nThe idea presented in this paper is that instead of performing data augmentation in the image space, it may be useful to perform data augmentation in a latent space whose features are more discriminative for classification. One candidate for this is the image feature space learned by a deep network. However they advocate that a better candidate is what they refer to as \"semantic space\" formed by embedding the (word) labels of the images according to pre-trained language models like word2vec. The reasoning here is that the image feature space may not be semantically organized so that we are not guaranteed that a small perturbation of an image vector will yield image vectors that correspond to semantically similar images (belonging to the same class). On the other hand, in this semantic space, by construction, we are guaranteed that similar concepts lie near by each other. Thus this space may constitute a better candidate for performing data augmentation by small perturbations or by nearest neighbour search around the given vector since 1) the augmented data is more likely to correspond to features of similar images as the original provided image and 2) it is more likely to thoroughly capture the intra-class variability in the augmented data.\nThe authors propose to first embed each image into a feature space, and then feed this learned representation into a auto-encoder that handles the projection to and from the semantic space with its encoder and decoder, respectively. Specifically, they propose to perform the augmentation on the semantic space representation, obtained from the encoder of this autoencoder. This involves producing some additional data points, either by adding noise to the projected semantic vector, or by choosing a number of that vector's nearest neighbours. The decoder then maps these new data points into feature space, obtaining in this way the image feature representations that, along with the feature representation of the original (real) image will form the batch that will be used to train the one-shot classifier.\nThey conduct experiments in 3 datasets where they experiment with augmentation in the image feature space by random noise, as well as the two aforementioned types of augmentation in the semantic space. 
They claim that these augmentation types provide orthogonal benefits and can be combined to yield superior results.\n\nOverall I think this paper addresses an important problem in an interesting way, but there is a number of ways in which it can be improved, detailed in the comments below. \n\nComments:\n-- Since the authors are using a pre-trained VGG for to embed each image, I'm wondering to what extent they are actually doing one-shot learning here. In other words, the test set of a dataset that is used for evaluation might contain some classes that were also present in the training set that VGG was originally trained on. It would be useful to clarify whether this is happening. Can the VGG be instead trained from scratch in an end-to-end way in this model?\n\n-- A number of things were unclear to me with respect to the details of the training process: the feature extractor (VGG) is pre-trained. Is this finetuned during training? If so, is this done jointly with the training of the auto-encoder? Further, is the auto-encoder trained separately or jointly with the training of the one-shot learning classifier? \n\n-- While the authors have convinced me that data augmentation indeed significantly improves the performance in the domains considered (based on the results in Table 1 and Figure 5a), I am not convinced that augmentation in the proposed manner leads to a greater improvement than just augmenting in the image feature domain. In particular, in Table 2, where the different types of augmentation are compared against each other, we observe similar results between augmenting only in the image feature space versus augmenting only in the semantic feature space (ie we observe that \"FeatG\" performs similarly as \"SemG\" and as \"SemN\"). When combining multiple types of augmentation the results are better, but I'm wondering if this is because more augmented data is used overall. Specifically, the authors say that for each image they produce 5 additional \"virtual\" data points, but when multiple methods are combined, does this mean 5 from each method? Or 5 overall? If it's the former, the increased performance may merely be attributed to using more data. It is important to clarify this point.\n\n-- Comparison with existing work: There has been a lot of work recently on one-shot and few-shot learning that would be interesting to compare against. In particular, mini-ImageNet is a commonly-used benchmark for this task that this approach can be applied to for comparison with recent methods that do not use data augmentation. Some examples are:\n- Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks. (Finn et al.)\n- Prototypical Networks for Few-shot Learning (Snell et al.)\n- Matching Networks for One-shot Learning (Vinyals et al.)\n- Few-Shot Learning Through an Information Retrieval Lens (Triantafillou et al.)\n\n-- A suggestion: As future work I would be very interested to see if this method can be incorporated into common few-shot learning models to on-the-fly generate additional training examples from the \"support set\" of each episode that these approaches use for training." ]
[ 4, 6, 5 ]
[ 3, 4, 4 ]
[ "iclr_2018_B1mAkPxCZ", "iclr_2018_B1mAkPxCZ", "iclr_2018_B1mAkPxCZ" ]
iclr_2018_HymYLebCb
Network Signatures from Image Representation of Adjacency Matrices: Deep/Transfer Learning for Subgraph Classification
We propose a novel subgraph image representation for classification of network fragments with the target being their parent networks. The graph image representation is based on 2D image embeddings of adjacency matrices. We use this image representation in two modes. First, as the input to a machine learning algorithm. Second, as the input to a pure transfer learner. Our conclusions from multiple datasets are that 1. deep learning using structured image features performs best compared to graph kernel and classical feature-based methods; and 2. pure transfer learning works effectively with minimal interference from the user and is robust to small data.
rejected-papers
The main idea of the paper is to transform graph classification into image classification (via an image representation of adjacency matrices). Two reviewers are positive, while one is negative. The concerns are about novelty (as mentioned by R2), while the last reviewer thinks the method is too simple and unprincipled (here the AC agrees with the authors that simple is not necessarily bad). Overall, none of the reviewers champions this paper. Due to many excellent submissions, this paper unfortunately cannot be accepted in its present form.
train
[ "SJBw7As1f", "r1jbeZ9xM", "ry9izvnlf", "HJVVl5q7M", "HJgG5RbXM", "SyLJxCbXG", "HyhR2pZmf", "rJyCr_Pef", "ry6_ZhLgM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "public", "public", "public", "public", "official_reviewer" ]
[ "The paper proposes to use 2-d image representation techniques as a means of learning representations of graphs via their adjacency matrices. The adjacency matrix (or a subgraph of it) is first re-ordered to produce some canonical ordering which can then be fed into an image representation method. This can then be fed into a classifier.\n\nThis is a little too unprincipled for my taste. In particular the paper uses a Caffe reference model on top of the adjacency matrix, rather than learning a method specifically for graphs. Perhaps this is due to a lack of available graph training data, but it doesn't seem to make a lot of sense.\n\nMaybe I missed or overlooked some detail, but I didn't spot exactly what the classification task was. I think the goal is to identify which of the graphs a subgraph belongs to? I'm not sure how relevant this graph classification task is. \n\nThe method does prove that the Caffe reference model maintains some information that can be used for classification, but this doesn't really suggest a generalizable method that we could confidently use for a variety of tasks. It's surprising that it works at all, but ultimately doesn't reveal a big scientific finding that could be re-used.", "The paper proposed a subgraph image representation and validate it in image classification and transfer learning problems. The image presentation is a minor extension based on a method of producing permutation-invariant adjacency matrix. The experimental results supports the claim.\n\nIt is very positive that the figures are very helpful for delivering the information.\n\nThe work seems to be a little bit incremental. The proposed image representation is mainly based on a previous work of permutation-invariant adjacency matrix. A novelty of this work seems to be transforming a graph into an image. By the proposed representation, the authors are able to apply image classification methods (supervised or unsupervised) to subgraph classification. \n\nIt will be better if the authors could provide more details in the methodology or framework section.\n\nThe experiments on 9 networks support the claims that the image embedding approaches with their image representation of the subgraph outperform the graph kernel and classical features based methods. It seem to be promising when using transfer learning.\n\nThe last two process figures in 1.1 can be improved. No caption or figure number is provided.\n\nIt will be better to make the notations easy to understand and avoid any notation in a sentence without explanation nearby.\nFor example:\n\"the test example is correctly classified if and only if its ground truth matches C.\"(P5)\n\"We carry out this exercise 4 times and set n to 8, 16, 32 and 64 respectively.\"(P6)\n\nSome minor issues:\n\"Zhu et al.(2011) discuss heterogeneous transfer learning where in they use...\"(P3)\n\"Each label vector (a tuple of label, label-probability pairs).\" (incomplete sentence?P5)", "This paper views graph classification as image classification, and shows that the CNN model adapted from image net can be effectively adapted to the graph classification. The idea is interesting and the result looks promising, but I do not understand the intuition behind the success of analogizing graph with images.\n\nFundamentally, a convolutional filter stands for a operation within a small neighborhood on the image. However, it is unclear how it means for the graph representation. Is the neighborhood predefined? Are the graph nodes pre-ordered? 
\n\nI am also curious with the effect of pre-trained model from ImageNet. Since the graph presentation does not use color channels, pre-trained model is used different from what it was designed to. I would imagine the benefit of using ImageNet is just to bring a random, high-dimensional embedding. In addition, I wonder whether it will help to fine-tune the model on the graph classification data. Could this submission show some fine-tune experiments?", "The latest submission has the following updates:\n\n1. Added captions to Figures 3 and 4 in Section 1\n2. Fixed typos as pointed out by reviewer3 in sections 2.3.1, 3.1 and 4\n3. Reworded a few phrases in Sections 4 and A\n\n", "Thank you for your review.\n\n1. We have moved the extra details about datasets and classifiers from the methodology section to the Appendix in the interest of space. Is there any specific detail the reviewer expects to be included in this section?\n\n2. We have fixed the other issues you have raised.", "Thank you for your comments. We address the main comments separately.\n\n1. \"It is surprising that the method works and it is unprincipled.\" The whole purpose of input representation is to find the right input representation such that a SIMPLE model with access to little data can perform good test prediction. It is not surprising that the method works since the encoding of the graph into the image is LOSSLESS. It is surprising that the graph image-feature works with simple models, that is the whole point of the paper. We do not understand in what sense the feature is unprincipled. The feature suitably reorders the vertices in the graph so that structural information information of the graph can be represented by spatial organization within the image. In the same vein, one could say that convolutional features which represent image information at different scales are unprincipled.\n\n2. \"but this doesn't really suggest a generalizable method.\" We showed results of blindly applying the feature to 9 different networks drawn from various application domains ranging from social networks to physical networks like road networks. The performance of this feature consistently outperforms all other methods, which suggests that the method is generalizable.\n\n3. \"I didn't spot exactly what the classification task was\" We tried to make this clear in Figure 1: Given a small subgraph, identify what type of parent graph it came from.\n\n4. \"I'm not sure how relevant this graph classification task is.\" Graph and subgraph classification is a large domain with a large amount of research ranging from standard learning methods based on classical graph features to kernel methods and so on. The specific task considered in this paper is merely a formalization of this task in which the class of a graph is determined by its parent graph.\n\nThe intuition behind this work: Deep learning models are highly capable of classifying structured real world images. We leverage this strength to classify subgraphs using the structured image embeddings we obtain that are a lossless representation of graphs. We also show that even when using a model (Caffe) that was trained in a completely different domain (real world images - ImageNet), the structured representation is powerful enough to provide more than meaningful results. You mention \"... the paper uses a Caffe reference model on top of the adjacency matrix, rather than learning a method specifically for graphs ...\" in your review. 
We think that you are conflating the two very different approaches we are proposing in this paper.\n\n5. \"In particular the paper uses a Caffe reference model on top of the adjacency matrix, rather than learning a method specifically for graphs.\" We believe the reviewer missed this part of the paper. We demonstrated the valuused the image feature in TWO ways:\n\n (a) (The main way) To train a classifier to classify graphs from scratch in a standard machine learning framework. Here we used several different learning models, including deep networks, kernels and standard models like regression based on classical graph features. Performance of our image feature is impressive (our opinion) compared to all other traditional features, and deep networks performed best.\n\n (b) (The secondary way) In a pure transfer learning setting, we simply used the Caffe classifier trained on image-Net with NO further training except for applying a k-NN on the Caffe output. This mode of classification shows that the image can be treated as a traditional image since Caffe is trained on traditional images, and further, the performance is impressive, which means that this transfer setting can be used when there is limited training data in the original graph domain.\n\nWe hope that we have addressed the concerns of the reviewer.", "Thank you for your review.\nWe emphasize that the main focus of the paper is to present the power of the image feature created by lossless \"embedding\" of the adjacency matrix as an image. We use this image feature in two ways:\n1. To train a classifier from scratch in a standard machine learning framework. Here we used several different models, including deep networks. Performance of this feature is impressive compared to other traditional features, and deep networks performed best.\n2. In a pure transfer learning setting we simply used the Caffe classifier trained on image-Net with NO further training except for applying a k-NN on the Caffe output. This mode of classification shows that the image can be treated as a traditional image and can be used when there is limited training data.\n", "Thanks for the comment. Please see the responses below.\n\n1. what is the physical meaning of CNN filters respond to the graph representation?\n- I'm not sure I understood your question correctly. I'm assuming you meant image embeddings by \"graph representation\". From CNN's perspective, the structured image embeddings are like any other images. The fact that these structured image embeddings were obtained from adjacency matrices has no effect on CNN. \n\n2. for images from ImageNet, each pixel is represented by 3 color channels (RGB). Will the adjacent matrices representation use such channels?\n\n- Caffe (trained on ImageNet) takes in black & white/grayscale images as input as well. The structured image embeddings of the adjacency matrices do not have to be modified in any way.\n\n3. if we shuffle the order of graph nodes, the rows/columns in adjacent matrix will exchange. Will the image based classification result be the same?\n\n- Yes. The shuffling of the order of the nodes in the adjacency matrices does not affect classification. This is because we apply a structuring process on the matrices before we obtain the image embeddings. This ensures that no matter the arrangement of the nodes, the image embedding produces the same structure for a given adjacency matrix. It's permutation invariant. 
See Section 2.1 for details.\n", "This paper views graph classification as image classification, and shows that the CNN model adapted from image net can be effectively adapted to the graph classification. The idea is interesting and the result looks promising, but I have difficulty to understand the intuition behind the success of analogizing graph with images. More specifically, I wonder\n\n1. what is the physical meaning of CNN filters respond to the graph representation?\n\n2. for images from ImageNet, each pixel is represented by 3 color channels (RGB). Will the adjacent matrices representation use such channels?\n\n3. if we shuffle the order of graph nodes, the rows/columns in adjacent matrix will exchange. Will the image based classification result be the same?" ]
[ 3, 6, 6, -1, -1, -1, -1, -1, -1 ]
[ 3, 3, 3, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_HymYLebCb", "iclr_2018_HymYLebCb", "iclr_2018_HymYLebCb", "iclr_2018_HymYLebCb", "r1jbeZ9xM", "SJBw7As1f", "ry9izvnlf", "ry6_ZhLgM", "iclr_2018_HymYLebCb" ]
iclr_2018_Skvd-myR-
Learning Non-Metric Visual Similarity for Image Retrieval
Measuring visual (dis)similarity between two or more instances within a data distribution is a fundamental task in many applications, especially in image retrieval. Theoretically, non-metric distances are able to generate a more complex and accurate similarity model than metric distances, provided that the non-linear data distribution is precisely captured by the similarity model. In this work, we analyze a simple approach for deep learning networks to be used as an approximation of non-metric similarity functions and we study how these models generalize across different image retrieval datasets.
rejected-papers
Two reviewers recommended rejection, while the last reviewer voted for acceptance. The authors provided a rebuttal, including the end-to-end experiment (although the AC agrees with the authors that this experiment is not crucial to the paper). The AC read the paper and the reviews. While there are clearly interesting aspects of this work, it somewhat falls short in terms of technical contribution. Perhaps better writing would alleviate this issue: for example, explaining the visual features is somewhat of a distraction from the main point and could be moved to the end. The three-stage training is somewhat ad hoc (or less elegant). Since there are many excellent papers submitted to ICLR this year, this paper unfortunately did not make it above the bar.
train
[ "By32fJqlG", "SkT3Sw9lG", "SySI56ubG", "BJ5wfNKfG", "H1WLMsUzM", "Sksd0LB-G", "HyG22UHbG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "The authors of this work propose learning a similarity measure for visual similarity and obtain, by doing that, an improvement in the very well-known datasets of Oxford and Paris for image retrieval. The work takes high-level image representations generated with an existing architecture (R-MAC), and train on top a neural network of two fully connected layers. \n\nThe training of such network is performed in three stages: firstly approximating the cosine similarity with a large amount of random feature vectors, secondly using image pairs from the same class, and finally using the hard examples.\n\n\nPROS\n\nP1. Results indicate the benefit of this approach in terms of similarity estimation and, overall, the paper present results that extend the state of the art in well-known datasets. \n\nP2. The authors make a very nice effort in motivation the paper, relating it with the state of the art and funding their proposal on studies regarding human visual perception. The whole text is very well written and clear to follow.\n\nCONS\n\nC1. As already observed by the authors, training a similarity function without considering images from the target dataset is actually harmful. In this sense, the simple cosine similarity does not present this drawback in terms of lack of generalization. This observation is not new, but relevant in the field of image retrieval, where in many applications the object of interest for a query is actually not present in the training dataset.\n\nC2. The main drawback of this approach is in terms of computation. Feed-forwarding the two samples through the trained neural network is far more expensive that computing the simple cosine similarity, which is computed very quickly with a GPU as a matrix multiplication. The authors already point at this in Section 4.3.\n\nC3. I am somehow surprised that the authors did not explore also training the network that would extract the high-level representations, that is, a complete end-to-end approach. While I would expect to have the weights frozen in the first phase of training to miimic the cosine similarity, why not freeing the rest of layers when dealing with pairs of images ?\n\nC4. There are a couple of recent papers that include results of the state of the art which are closer and sometimes better than the ones presented in this work. I do not think they reduce at all the contribution of this work, but they should be cited and maybe included in the tables:\n\nA. Gordo, J. Almazan, J. Revaud, and D. Larlus. End-to-end learning of deep visual representations for image retrieval.\nInternational Journal of Computer Vision, 124(2):237–254, 2017.\n\nAlbert Jimenez, Jose M. Alvarez, and Xavier Giro-i-Nieto. “Class-Weighted Convolutional Features for Visual Instance Search.” In Proceedings of the 28th British Machine Vision Conference (BMVC). 2017.\n", "(1) The motivation\nThe paper argues that it is more suitable to use non-metric distances instead of metric distances. However, the distance function used in this work is cosine similarity between two l2 normalized features. It is known that in such a situation, cosine similarity is equivalent to Euclidean distance. The motivation should be further explained.\n\n(2) In Eq. (5), I am not sure why not directly set y_ij = 1 if two images come from the same category, and set to 0 otherwise. It is weird to see the annotation is related to the input features considering that we already have the groundtruth labels.\n\n(3) The whole pipeline is not trained in an end-to-end manner. 
It requires some other features as the input (RMAC used in this work), and three-stage training. It is interesting to see some more experiments where image pixels are the input.\n\n(4) The algorithm is not comparable to the state-of-the-art. Some representative papers have reported much better performances on the datasets used in this paper. It is suggested to refer to some recent papers in top conferences.", "This paper presents a simple image retrieval method. Paper claims it is a deep learning method, however it is not an end-to-end network. The main issue of the paper is lack of technical contributions.\n\nPaper assumes that image retrieval task can be reformulated at a supervised similarity learning task. That is fine, however image retrieval is traditionally an unsupervised task. \n\nEven after using supervised method and deep learning technique, still this method is not able to obtain better results than hand crafted methods. Why is that? See - paper from CVPR2012 - Arandjelović, Relja, and Andrew Zisserman. \"Three things everyone should know to improve object retrieval.\" Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on. IEEE, 2012.\n\nPaper make use of external signal to obtain y_{i,j}. It is not clear to me how does this generalize to large datasets?\n\nIf features are L2 normalized, why you need to normalize the features again in equation 5?\n\nIn equation 5, why not simply use a max margin deep similarity metric learning method with slack variables to generalizability?\n\nThe performance of entire network really rely on the accuracy of y_{i,j} and it is not clear the obtained performance is simply due to this supervision.\n\nPaper does not argue well why we need this supervision.\n\nTechnically, there is nothing new here.\n", "We would like to thank the reviewers for their feedback. Considering all their suggestions, we have tried our best to improve our work and we have uploaded a new version of the paper.\n \nWe would like to emphasized that the main idea of our work is to use a non-metric similarity function based on neural networks instead of a standard metric for image retrieval (cosines). We argue that the visual human perception is not properly explained by linear metrics and thus, non-metric visual similarity may obtain better performance than the standard cosine similarity in image retrieval systems. We think this is something researchers have not looked at, to the best of our knowledge.\n\nWe propose a simple approach based on a three stage training to learn a non-metric visual similarity and we perform exhaustive experiments to evaluate its performance. Experiments show that using a visual similarity function based on neural networks instead of cosine similarity is actually beneficial and can improve results in standard image retrieval datasets considerably. The paper is not intended to be an end-to-end network as we wanted to study the benefits of using a different similarity function than standard cosine similarity. We thought that the results of the paper would not have been clear about the benefits of our method by training an end-to-end approach, but since all the three reviewers agree that it is an interesting experiment, we have indeed included it in this revision. \n\nIn brief, the new version of the paper includes the following improvements:\n\n1. We trained a simple linear metric method using the same data and training protocol as our proposed method. 
Results, included in Table 2, show that a linear system based on affine transformations can not fit the visual similarity between images as well as a nonlinear MLP. This suggests that the benefits in our system are due to the non-metric nature of the architecture and not to the training configuration, as suggested by Reviewer 1. \n\n2. After the concerns of Reviewer 2 about the training dataset, we performed further empirical evaluation in this issue, which is now included in appendix A. Results show that the similarity function is able to generalize well even when a small subset of the target domain is used.\n\n3. As all the three reviewers agree that an end-to-end experiment is very interesting, we included an end-to-end approach in appendix B. Unsurprisingly, the results show that an end-to-end approach is even more beneficial, as fine-tuning the entire architecture allows us to fit better to a particular dataset. However, we would like to emphasize that the key message of the paper is that fine-tuning the final similarity computation, instead on relying on cosines as researchers have been doing so far, may be a worthwhile step that can push accuracy results higher, irrespective of the feature vector computation. \n\n4. In relation to the missing citations, we included the following missing work in Section 2 (Related Work) and Section 5.3 (Comparison with the state of the art):\n- A. Gordo, J. Almazan, J. Revaud, and D. Larlus. End-to-end learning of deep visual representations for image retrieval. IJCV 2017.\n- A. Jimenez, J. M. Alvarez, and X. Giro-i-Nieto. “Class-Weighted Convolutional Features for Visual Instance Search.” BMVC 2017.\n- H. Noh, A. Araujo, J. Sim, T. Weyand, and B. Han. Large-Scale Image Retrieval With Attentive Deep Local Features. ICCV 2017.\n\nThank you for reading our response and please, consider re-reading the new version of the paper and updating your reviews, if appropriate.", "Thank you for your review. We really appreciate criticism in our work in order to keep improving it. Below, we'd like to address the points you raise. \n\nAs already clarified in the other author responses, the paper is certainly not an end-to-end network for a very specific reason. We wanted to study the benefits of using a non-metric distance function trained with neural networks. This is something researchers have not looked at, to the best of our knowledge. Our results indicate that by casting the similarity computation as a trainable network there are benefits to be gained, irrespective of what feature extraction you use. Here we have used RMAC as a basis and shown how there are extra percentage points of accuracy to be gained by training a similarity computation instead of just using cosines. We believe the narrative of the paper gets a bit confused by including an end-to-end training but since all three reviewers mentioned it, we have indeed included the experiment in appendix B. As expected, doing end-to-end training offers even further improvement to the accuracy results but the story of the paper remains the same: “Whatever your image feature vector, consider fine-tuning your cosine similarity computation\"\n\nConcerning the rest of your points. The assumption that image retrieval can be reformulated as a supervised task is not new and it is broadly assumed in some of the state-of-the art methods, such as:\n\nF. Radenovic, G. Tolias and O. Chum. CNN image retrieval learns from BoW: Unsupervised fine-tuning with hard examples. ECCV 2016.\n\nA. Gordo, J. Almazan, J. 
Revaud, and D. Larlus. End-to-end learning of deep visual representations for image retrieval. IJCV 2017.\n\nWe are really confused when you say that our method is not able to obtain better results than hand-crafted methods. As it can be seen in Table 3, our method outperforms almost all the methods based in compact representations in image retrieval literature. We would appreciate if you can please clarify which methods are you referring to. As for query expansion and image re-ranking, these are add-ons that can still be applied on top of our similarity network in the same way as they are applied in top of cosine similarity. Such methods are not competitors as they might be applied altogether to push image retrieval performance.\n\nWith respect to equation (5) and the cosine similarity computation, features are normalized as it is the standard procedure. We envisaged our method to be used with any kind of features, whether previously L2 normalized or not. \n\nFinally, experiments in larger datasets (Oxford 105k and Paris 106k) are conducted in Table 3. ", "Thank you for the review. We apologize if the first draft was unclear in certain aspects. Below, we'd like to clarify and address all of your points.\n\n(1) The motivation of our work can be summarized as in the following paragraph of the paper (page 3):\n\n\"Note that g does not have to be a metric in order to be a similarity function and thus, it is not required to satisfy the rigid constraints of metric axioms, i.e. non-negativity, identity of indiscernibles, symmetry and triangle inequality. Some non-metric similarity works such as Tan et al. (2006) suggest that these restrictions are not compatible with human perception. As an example, they showed that although a centaur might be visually similar to both a person and a horse, the person and the horse are not similar to each other. A possible explanation for this phenomenon is that when comparing two images, human beings may pay more attention to similarities and thus, similar portions of the images may be more discriminative than dissimilar parts. To overcome the issues associated with applying strong rigid constraints to visual similarity, we propose to learn the non-metric similarity function g using a neural network approach.\"\n\nThe basic idea behind these lines is that the human perception of visual similarity might not correspond to what a linear metric, such as cosine similarity or Euclidean distance, represents. Thus, our work proposes to learn a non-metric similarity function with a convolutional neural network. In our work, we do not use the cosine similarity between two l2 normalized features as you stated, but the non-metric similarity function trained in our model. Then, this non-metric similarity function is applied to any pair of visual features *instead* of a standard metric (such as cosine similarity) to rank images by score in image retrieval problems. We argue and show in our experimentation that by using our non-metric similarity function we can push performance in standard image retrieval datasets.\n\n\n(2) In contrast to classification methods where labels are discrete (in this case, 1 if images are similar or 0 otherwise), a similarity function for image retrieval ranking should produce a continuous set of scores. 
This is for obvious reasons: even within the same class, a pair of images might be more similar than another pair of images, and the similarity function should reflect this behavior in the output scores.\n\n\n(3) We also believe that a end-to-end training is the next step after the results of this work. It is for sure a very interesting experiment to perform. However, the scope of this paper was to study the benefits of using a non-metric distance function trained with neural networks. In order to isolate the contribution of the similarity computation part and perform a fair comparison between distance functions, we used standardized features (R-MAC) as image descriptors. By training the whole pipeline in an end-to-end way we would have never found which part of the improvement was because of the feature extraction fine-tunning and which part of the improvement was due to the non-metric similarity computation.\n\n\n(4) We apologize for any missing citation and we are willing to include any missed work in an updated version of the paper. So far, we are aware of the following papers [1], [2] and [3]. However, we do not consider that these papers reduce the contribution of our work, as the assumption that using a similarity network is beneficil in image retrieval is still validated through our extensive evaluation.\n\n[1] A. Gordo, J. Almazan, J. Revaud, and D. Larlus. End-to-end learning of deep visual representations for image retrieval. IJCV 2017.\n[2] A. Jimenez, J. M. Alvarez, and X. Giro-i-Nieto. “Class-Weighted Convolutional Features for Visual Instance Search.” BMVC 2017.\n[3] H. Noh, A. Araujo, J. Sim, T. Weyand, and B. Han. Large-Scale Image Retrieval With Attentive Deep Local Features. ICCV 2017.", "Thank you very much for your useful suggestions. We'd like to address your comments for further improvement of our work.\n\nC1. We agree and we actually observe this phenomenon in our experiments. However, we found that our similarity network generalizes well even when few samples of the target dataset are given (for example, in the Oxford dataset, with only 100 samples from the target dataset our similarity network outperforms cosine similarity). Further details on this topic are going to be included as an appendix in the next update of the paper.\n\nC2. As already stated in the paper, standard metrics are relatively fast and computationally cheap. However, we believe that computation might not be necessarily a problem with the current computational power of GPUs. Moreover, speeding up the network similarity computation and linking it to some approximate nearest neighbour shceme is one of the things we are currently looking at.\n\nC3. We also believe that a end-to-end training is the next step after the results of this work. It is for sure a very interesting experiment to perform. However, the scope of this paper was to study the benefits of using a non-metric distance function trained with neural networks. In order to isolate the contribution of the similarity computation part and perform a fair comparison between distance functions, we used standardized features (R-MAC) as image descriptors. By training the whole pipeline in an end-to-end way we would have never found which part of the improvement was because of the feature extraction fine-tunning and which part of the improvement was due to the non-metric similarity computation.\n\nC4. We apologize for any missing citation. 
Thank you for pointing us to a couple of missing works, which for sure are going to be included in an updated version of the paper." ]
[ 7, 4, 3, -1, -1, -1, -1 ]
[ 5, 4, 5, -1, -1, -1, -1 ]
[ "iclr_2018_Skvd-myR-", "iclr_2018_Skvd-myR-", "iclr_2018_Skvd-myR-", "iclr_2018_Skvd-myR-", "SySI56ubG", "SkT3Sw9lG", "By32fJqlG" ]
iclr_2018_SJw03ceRW
GENERATIVE LOW-SHOT NETWORK EXPANSION
Conventional deep learning classifiers are static in the sense that they are trained on a predefined set of classes and learning to classify a novel class typically requires re-training. In this work, we address the problem of Low-shot network-expansion learning. We introduce a learning framework which enables expanding a pre-trained (base) deep network to classify novel classes when the number of examples for the novel classes is particularly small. We present a simple yet powerful distillation method where the base network is augmented with additional weights to classify the novel classes, while keeping the weights of the base network unchanged. We term this learning hard distillation, since we preserve the response of the network on the old classes to be equal in both the base and the expanded network. We show that since only a small number of weights needs to be trained, the hard distillation excels for low-shot training scenarios. Furthermore, hard distillation avoids detriment to classification performance on the base classes. Finally, we show that low-shot network expansion can be done with a very small memory footprint by using a compact generative model of the base classes training data with only a negligible degradation relative to learning with the full training set.
rejected-papers
Two reviewers recommended rejection, and one was slightly more positive. The main concern is that the experiments are not convincing (i.e., the number of base and added classes is very small). Furthermore, while the paper introduces several interesting ideas, the AC agrees with the second reviewer that each of these could be explored in more detail. This work seems preliminary. The authors are encouraged to resubmit to a future conference.
train
[ "r1R0L9Def", "r1W-h8Flz", "BJK7re9ez", "B1Fj2sjmf", "BJWd2ismz", "SyTJhosXM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "On few-shot learning problem, this paper presents a simple yet powerful distillation method where the base network is augmented with additional weights to classify the novel classes, while keeping the weights of the base network unchanged. Thus the so-called hard distillation is proposed. This paper is well-written and well organized. The good points are as follows,\n\n1. The paper proposes a well-performance method for the important low-shot learning problem based on the transform learning.\n2. The Gen-LSNE maintains a small memory footprint using a generative model for base examples and requires a few more parameters to avoid overfitting and take less time to train.\n3. This paper builds up a benchmark for low-shot network expansion.\n\nThere are some problems,\n1. There still is drop in accuracy on the base classes after adding new classes, and the accuracy may still drop as adding more classes due to the fixed parameters corresponding to the base classes. This is slightly undesired.\n2. Grammatical mistake: page 3, line 5(“a additional layers”)\n", "The goal of this paper is to study generalisation to novel classes. This paper stipulates some interesting ideas, using an idea of expansion layers (using a form of hard distillation, where the weights of known classes are fixed), a GMM to model the already learned classes (to reduce storage), and a form of gradient dropout (updating just a subset of the weights using a dropout mask). All of these assume a fixed representation, trained on the base classifier, then only the final classification layer is adjusted for the novel examples. \n\nThe major drawback is that none of these ideas are fully explored. Given fixed representation, for example the influence of forgetting on base classes, the number of components used in the GMM, the influence of the low-shot, the dropout rate, etc etc. The second major drawback is that the experimental setting seems very unrealistic: 5 base classes and 2 novel classes. \n\nTo conclude: the ideas in this paper are very interesting, but difficult to gather insights given the focus of the experiments.\n\nMinor remarks\n- Sect 4.1 \"The randomly ... 5 novel classes\" is not a correct sentence.\n- The extended version of NCM (4.2.1), here uses as prototype-kNN (Hastie 2001) has also been explored in the paper of NCM, using k-means per class to extract prototypes.\n- Given fixed representations, plenty of work has focused on few-shot (linear) learning, this work should be compared to these. ", "The paper proposes a method for adapting a pre-trained network, trained on a fixed number of\nclasses, to incorporate novel classes for doing classification, especially when the novel classes\nonly have a few training examples available. They propose to do a `hard' distillation, i.e. they\nintroduce new nodes and parameters to the network to add the new classes, but only fine-tune the new\nnetworks without modifying the original parameters. This ensures that, in the new expanded and\nfine-tuned network, the class confusions will only be between the old and new classes and not\nbetween the old classes, thus avoiding catastrophic forgetting. In addition they use GMMs trained on\nthe old classes during the fine-tuning process, thus avoiding saving all the original training data.\nThey show experiments on public benchmarks with three different scenarios, i.e. 
base and novel\nclasses from different domains, base and novel classes from the same domain and novel classes have\nsimilarities among themselves, and base and novel classes from the same domain and each novel class\nhas similarities with at least one of the base classes. \n \n- The paper is generally well written and it is clear what is being done \n- The idea is simple and novel; to the best of my knowledge it has not been tested before\n- The method is compared with Nearest Class Means (NCM) and Prototype-kNN with soft distillation\n (iCARL; where all weights are fine-tuned). The proposed method performs better in low-shot\n settings and comparably when a large number of training examples of the novel classes are available\n- My main criticism is the limited dataset size on which the method is validated. The ILSVRC12\n subset contains 5 base and 5 novel classes and the UT-Zappos50K subset also has 10 classes. The\n idea is simple and novel, which is good, but the validation is limited and far from any realistic\n use. Having only O(10) classes is not convincing, especially when the datasets used do have a large\n number of classes. I agree that this may take some involved manual effort to\n curate subsets for the settings proposed, but it is necessary to be convincing." ]
Due to the already dense experimental section we did not add experiments done on various dropout ratios, though we agree that this is an interesting question.\n \n\n“The second major drawback is that the experimental setting seems very unrealistic: 5 base classes and 2 novel classes:” \nPlease see answer above (to first reviewer) regarding the design of benchmark.\n", "We would like to thank the reviewers for their commentary. We ask to draw the reviewer's attention to the following:\n\nIn this work, we focus on a robotic unit in real-life scenarios. It is often desired to be able to adapt to a single or two new classes available with only very few samples. We choose to establish the proposed benchmark for the task of Low-Shot Network expansion in a manner that will reflect this task. That is, we wanted the number of novel classes to be relatively small so the effect of Network adaptation to small quantities of new data can be studied. \n\nWe defined the class (base + novel) average accuracy to be our performance metric. \nWe did not want the base classes accuracy to have overwhelmly more weight than the novel classes in the average accuracy metric, hence we defined the number of base classes to be in a range similar to the number of novel classes. We’ve composed a dataset with a common cardinality (similar to CIFAR10 is O(10), MNIST is O(10) and SVHN is O(10)). While O(10) is considerably smaller than imagenet 1000 classes classification task. It is still a common classification task, which is also common in robotic unit applications. \n\nIn order to avoid bias in the constructed dataset, we’ve performed numerous random experiment, and reported their average result: \n \nThe test on imagenet partitions was done by randomly selecting 50 classes, and then randomly partitioning to 5 groups of 10 , which are then further randomly partitioned to 5 base and 5 novel classes. The result of every experiment done with a given number of novel samples is averaged on 25 = 5X5 trails. That is Figure 2 is the result of 125 tests = 25 averaging X 5 #novel samples.\n\nFigure 5a Is the result of 25 (base,novel group avg) X 8 #novel samples = 200 trails.\n\nIn this unbiased test case, we addressed 3 typical scenarios: generic classes from imagenet, Domain specific with similar novel classes, Domain specific with similar class in base. We’ve further explored the effect and gain of Gen-LSNE compared to other methods in each of these scenarios. We concentrated our effort to analyze and explore the qualities of the proposed method in scenarios that are common in robotic unit real-life scenarios, and to the best of our knowledge the designed benchmark realistically reflect those.\n" ]
[ 6, 4, 4, -1, -1, -1 ]
[ 4, 3, 4, -1, -1, -1 ]
[ "iclr_2018_SJw03ceRW", "iclr_2018_SJw03ceRW", "iclr_2018_SJw03ceRW", "r1R0L9Def", "r1W-h8Flz", "BJK7re9ez" ]
iclr_2018_BJluxbWC-
Unseen Class Discovery in Open-world Classification
This paper concerns open-world classification, where the classifier not only needs to classify test examples into seen classes that have appeared in training but also reject examples from unseen or novel classes that have not appeared in training. Specifically, this paper focuses on discovering the hidden unseen classes of the rejected examples. Clearly, without prior knowledge this is difficult. However, we do have the data from the seen training classes, which can tell us what kind of similarity/difference is expected for examples from the same class or from different classes. It is reasonable to assume that this knowledge can be transferred to the rejected examples and used to discover the hidden unseen classes in them. This paper aims to solve this problem. It first proposes a joint open classification model with a sub-model for classifying whether a pair of examples belongs to the same or different classes. This sub-model can serve as a distance function for clustering to discover the hidden classes of the rejected examples. Experimental results show that the proposed model is highly promising.
rejected-papers
Three reviewers recommended rejection and there was no rebuttal to overturn their recommendation.
train
[ "HkWZFMVxf", "ry2G0fvxM", "HkFsIQclG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The main goal of this paper is to cluster images from classes unseen during training.\nThis is an interesting extension of the open-world paradigm, where at test time, the classifier has to identify images beloning to the C seen classes during training, but also identify (reject) images which were previously unseen. These rejected images could be clustered to identify the number of unseen classes; either for revealing the underlying structure of the unseen classes, or to reduce annotation costs.\n\nIn order to do so, an extensive framework is proposed, consisting of 3 ConvNet architectures, followed by a hierarchical clustering approach. The 3 ConvNets all have a different goal:\n1. an Open Classification Network (per class sigmoid, trained 1vsRest, with thresholds for rejection)\n2. Pairwise Classification Network, (binary sigmoid, trained on pairs of images of same/different classes)\n3. Auto encoder network\n\nThese network are jointly trained, and the joint-loss is simply the addition of a cross-entropy loss (from OCN), the binary cross-entropy loss (from PCN) and a pixel wise loss (from AE). \nRemarks:\n- it is unclear if the ConvNet weights of the first layers are shared). \n- it is unclear how joint training might help, given that the objectives do not influence each other\n- Eq 1: \n *label \"y_i\" has two different semantics (L_ocn it is the class label, while in L_pcn it is the label of an image pair being from the same class or not)\n * s_j is undefined\n * relation between the p(y_i = 1) (in PCN) and g(x_p,x_q) in Eq 2 could be made more explicit, PCN depends on two images, according to Eq 1, it seems just a sum over single images.\n- It is unclear why the Auto Encoder network is added, and what its function is.\n- It is unclear wether OCN requires/uses unseen class examples during training.\n- Last paragraph of 3.1 \"The 1-vs-rest ... rejected\", I don't see why you need 1vsRest classifiers for this, a multi-class (softmax) output can also be thresholded to reject an test image from the known classes and to assign it to the unknown class.\n\n\nExperimental evaluation\nThe experimental evaluation uses 2 datasets, MNIST and EMNIST, both are very specific for character recognition. It is a pity that not also more general image classification has been considered (CIFAR100, ImageNet, Places365, etc), that would provide insights to the more general behaviour of the proposed ideas.\n\nMy major concern is that the clustering task is not extensively explored. Just a single setting (with a single random sampling of seen/unseen classes) has been evaluated. This is -in part- due to the nature of the chosen datasets, in a 10 class dataset it is difficult to show the influence of the number of unseen classes. So, I'd really urge the authors to extend this evaluation. Will the method discover more classes when 100 unknown classes are used? What kind of clusters are discovered? Are the types of classes in the seen/unseen classes important, I'd expect at least multiple runs of the current experiments on (E)MNIST. \n\nFurther, I miss some baselines and ablation study. Questions which I'd like to seen answered: how good is the OCN representation when used for clustering compared to the PCN representation? What is the benefit of joint-training? How important is the AE in the loss?\n\nRemaining remarks\n- Just a very simple / non-standard ConvNet architecture is trained. 
Will a ResNet(32) show similar performance?\n- In Eq 4, |C_i || y_j| seems a strange notation for union.\n\nConclusion\nThis paper brings in an interesting idea, is it possible to cluster the unseen classes in an open-world classification scenario? A solution using a pairwise convnet followed by hierarchical clustering is proposed. This is a plausible solution, yet in total I miss an exploration of the solution. \n\nBoth in terms of general visual classification (only MNIST is used, while it would be nice to see results on CIFAR and/or ImageNet as in Bendale&Boult 2016), as in exploration of different scenarios (different number of unseen classes, different samplings) and ablation of the method (independent training, using OCN for hierarchical clustering, influence of Auto Encoder). Therefore, I rate this paper as a (weak) reject: it is just not (yet) good enough for acceptance.", "This paper concerns open-world classification. The open-world related tasks have been defined in many previous works. This paper had made a good survey. \nThe only special point of the open-word classification task defined in this paper is to employ the constraints from the similarity/difference expected for examples from the same class or from different classes. Unfortunately, this paper is lack of novelty. \n\nFirstly, the problem context and setting is kinda synthesized. I cannot quite imagine in what kind of applications we can get “a set of pairs of intra-class (same class) examples, and the negative training data consists of a set of pairs of inter-class”.\n\nSecondly, this model is just a direct combination of the recent powerful algorithms such as DOC and other simple traditional models. I do not really see enough novelty here.\n\nThirdly, the experiments are only on the MNIST and EMNIST; still not quite sure any real-world problems/datasets can be used to validate this approach.\nI also cannot see the promising performance. The clustering results of rejected\nexamples are still far from the ground truth, and comparing the result with\na total unsupervised K-means is a kind of unreasonable.\n", "This paper focuses on the sub-problem of discovering previously unseen classes for open-world classification. \nIt employs a previously proposed system, Open Classification Network, for classifying instances into known classes or rejecting as belonging to an unseen class, and applies hierarchical clustering to the rejected instances to identify unseen classes.\nThe key novel idea is to learn a pairwise similarity function using the examples from the known classes to apply to examples of unknown classes. The argument is that we tend to use the same notion of similarity and dissimilarity to define classes (known or unknown) and one can thus expect the similarity function learned from known classes to carry over to the unknown classes. This concept is not new. Similar idea has been explored in early 2000 by Finley and Joachims in their ICML paper titled \"Supervised Clustering with Support Vector Machines\". But to the best of my knowledge, this is the first paper that applies this concept to the open world classification task. \n\nOnce we learn the similarity function, the rest of the approach is straightforward, without any particular technical ingenuity. It simply applies hierarchical clustering on the learned similarities and use cross-validation to pick a stopping condition for deciding the number of clusters. \nI find the experiments to be limited, only on two hand-written digits/letters datasets. 
Such datasets are too simplistic. For example, simply applying kmeans to PCA features of the images on the MNIST data can get you pretty good performance. \nExperiments on more complex data is desired, for example on Imagenet classes. \n\nAlso the results do not clearly demonstrate the advantage of the proposed method, in particular the benefit of using PCN. The number of clusters found by the algorithm is not particularly accurate and the NMI values obtained by the proposed approach does not show any clear advantage over baseline methods that do not use PCN. \n\nSome minor comments:\nWhen applied to the rejected examples, wouldn't the ground truth # of clusters no longer be 4 or 10 because there are some known-class examples mixed in? \nFor the base line Encoder+HC, was the encoder trained independently? Or it's trained jointly with PCN and OCN? It is interesting to see the impact of incorporating PCN into the training of OCN and encoder. Does that have any impact on accuracy of OCN? \nIt seems that one of the claimed benefit is that the proposed method is effective at identifying the k. If so, it would be necessary to compared the proposed method to some classic methods for identifying k with kmeans, such as the elbow method, BIC, G-means etc, especially since kmeans seem to give much better NMI values.\n\n\n" ]
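The reviews above describe the paper's two-stage pipeline: a Pairwise Classification Network (PCN) learned on the seen classes provides a similarity between examples, and hierarchical clustering over those similarities groups the rejected examples into candidate unseen classes. A minimal sketch of that clustering step is below; it is not the authors' code, the `pairwise_similarity` function is a cosine-similarity placeholder standing in for the trained PCN, and the `distance_threshold` cut-off is an arbitrary assumption (the paper picks the number of clusters by cross-validation instead).

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def pairwise_similarity(feats):
    """Placeholder for the PCN: an (n, n) matrix of same-class scores in [0, 1]."""
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    return 0.5 * (f @ f.T + 1.0)

def cluster_rejected(feats, distance_threshold=0.4):
    sim = pairwise_similarity(feats)            # p(same class) for every pair
    dist = 1.0 - sim                            # turn similarity into a distance
    np.fill_diagonal(dist, 0.0)
    condensed = squareform(dist, checks=False)  # condensed form expected by linkage
    Z = linkage(condensed, method="average")    # bottom-up agglomerative clustering
    return fcluster(Z, t=distance_threshold, criterion="distance")

labels = cluster_rejected(np.random.RandomState(0).randn(20, 8))
print(len(set(labels)), "candidate unseen classes")
```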
[ 5, 5, 4 ]
[ 4, 5, 4 ]
[ "iclr_2018_BJluxbWC-", "iclr_2018_BJluxbWC-", "iclr_2018_BJluxbWC-" ]
iclr_2018_HJr4QJ26W
Improving image generative models with human interactions
GANs provide a framework for training generative models which mimic a data distribution. However, in many cases we wish to train a generative model to optimize some auxiliary objective function within the data it generates, such as making more aesthetically pleasing images. In some cases, these objective functions are difficult to evaluate, e.g. they may require human interaction. Here, we develop a system for efficiently training a GAN to increase a generic rate of positive user interactions, for example aesthetic ratings. To do this, we build a model of human behavior in the targeted domain from a relatively small set of interactions, and then use this behavioral model as an auxiliary loss function to improve the generative model. As a proof of concept, we demonstrate that this system is successful at improving positive interaction rates simulated from a variety of objectives, and characterize s
rejected-papers
The reviewers agree that the idea of incorporating humans in the training of generative adversarial networks is interesting and worthwhile exploring. However, they felt that the paper fell short in providing strong support for their approach. The AC agrees. The authors are encouraged to strengthen their work and resubmit to a future venue.
train
[ "SyRiTtIgG", "r1P_bO5eG", "SyuHdLJ-z", "SJjcxSFQG", "SJ_XxStXM", "B1NQTNYXz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "+ Quality:\nThe paper discusses an interesting direction of incorporating humans in the training of a generative adversarial networks in the hope of improving generated samples. I personally find this exciting/refreshing and will be useful in the future of machine learning.\n\nHowever, the paper shows only preliminary results in which the generator trained to maximize the PIR score (computed based on VGG features to simulate human aesthetics evaluation) indeed is able to do so. However, the paper lacks discussion / evidence of how hard it is to optimize for this VGG-based PIR score. In addition, if this was challenging to optimize, it'd be useful to include lessons for how the authors manage to train their model successfully.\nIn my opinion, this result is not too surprising given the existing power of deep learning to fit large datasets and generalize well to test sets. \n\nAlso, it is not clear whether the GAN samples indeed are improved qualitatively (with the incorporation of the PIR objective score maximization objective) vs. when there is no PIR objective. The paper also did not report sample quantitative measures e.g. Inception scores / MS-SSIM.\n\nI'd be interested in how their proposed VGG-based PIR actually correlates with human evaluation.\n\n+ Clarity: \n- Yosinski et al. 2014 citation should be Nguyen et al. 2015 instead (wrong author order / year).\n- In the abstract, the authors should emphasize that the PIR model used in this paper is based on VGG features.\n\n+ Originality: \n- The main direction of incorporating human feedback in the loop is original.\n\n+ Significance: \n- I think the paper contribution is lighter vs. ICLR standard. Results are preliminary.\n\nOverall, I find this direction exciting and hope the authors would keep pushing in this direction! However, the current manuscript is not ready for publication.", "Summary:\nThis paper proposes an approach to generate images which are more aesthetically pleasing, considering the feedback of users via user interaction. However, instead of user interaction, it models it by a simulated measure of the quality of user interaction and then feeds it to a Gan architecture. \n\nPros:\n+ The paper is well-written and has just a few typos: 2.1: “an Gan”.\n+ The idea is very interesting. \n\nCons:\n\n- Page 2- section 2- The reasoning that a deep-RL could not be more successful is not supported by any references and it is not convincing.\n\n- Page 3- para 3 - mathematically the statement does not sound since the 2 expressions are exactly equivalent. The slight improvement may be achieved only by chance and be due to computational inefficiency, or changing a seed. \n\n- Page 3- 2.2. Using a crowd-sourcing technique, developing a similarly small dataset (1000 images with 100 annotations) would normally cost less than 1k$.\n\n- Page 3- 2.2.It is highly motivating to use users feedback in the loop but it is poorly explained how actually the user's' feedback is involved if it is involved at all. \n\n- Page 4- sec 3 \".. it should be seen as a success\"; the claim is not supported well.\n\n- Page 4- sec 3.2- last paragraph.\nThis claim lacks scientific support, otherwise please cite proper references. The claim seems like a subjective understanding of conscious perception and unconscious perception of affective stimuli is totally disregarded.\nThe experimental setup is not convincing.\n\n- Page 4. 3.3) \"Note that.. outdoor images\" this is implicitly adding the designers' bias to the results. 
The statement lacks scientific support.\n\n- Page 4. 3.3) the importance of texture and shape is disregarded. “In the Eye of the Beholder: Employing Statistical Analysis and Eye Tracking for Analyzing Abstract Paintings, Yanulevskaya et al”\nThe architecture may lead in overfitting to users' feedback (being over-fit on the data with PIR measures)\n\n- Page 6-Sec 4.2) \" It had more difficulty optimizing for the three-color result\" why? please discuss it.\n\n- The expectation which is set in the abstract and the introduction of the paper is higher than the experiments shown in the Experimental setup.\n", "This paper proposes a technique to improve the output of GANs by maximising a separate score that aims to mimic human interactions. \n\nSummary:\nThe goal of the technique to involve human interaction in generative processes is interesting. The proposed addition of a new loss function for this purpose is an obvious choice, not particularly involved. It is unclear to me whether the paper has value in its current form, that is without experimental results for the task it achieves. It feels to premature for publication. \n\n\nMore comments:\nThe main problem with this paper is that the proposed systems is designed for a human interaction setting but no such experiment is done or presented. The title is misleading, this may be the direction where the authors of the submission want to go, but the title “.. with human interactions” is clearly misleading. “Model of human interactions” may be more appropriate. \n\nThe technical idea of this paper is to introduce a separate score in the GAN training process. This modifies the generator objective. Besides “fooling” the discriminator, the generator objective is to maximise user interaction with the generated batch of images. This is an interesting objective but since no interactive experiments presented in this paper, the rest of the experiments hinges on the definition of “PIR” (positive interaction rate)using a model of human interaction. Instead of real interactions, the submission proposes to maximise the activations of hidden units in a separate neural network. By choosing the hierarchy level and type of filter the results of the GAN differ. \n\nI could not appreciate the results in Figure 2 since I was missing the definition of PIR, how it is drawn in the training setup. Further I found it not surprising that the PIR changes when a highly parameterised model is trained for this task. The PIR value comes from a separate network not directly accessible during training time, nonetheless I would have been surprised to not see an increase. Please comment in the rebuttal and I would appreciate if the details of the synthetic PIR values on the training set could be explained.\n\n- Technically it was a bit unclear to me how the objective is defined. There is a PIR per level and filter (as defined in C4) but in the setup the L_{PIR} was mentioned to be a scalar function, how are the values then summarized? There is a PIR per level and feature defined in C4. \n- What does the PIR with the model in Section 3 stand for? Shouldn’t be something like “uniqueness”, that is how unique is an image in a batch of images be a better indicator? Besides, the intent of what possibly interesting PIR examples will be was unclear. \nE.g., the statement at the end of 2.1 is unclear at that point in the document. How is the PIR drawn exactly? What does it represent? Is there a PIR per image? 
It becomes clear later, but I suggest to revisit this description in a new version.\n- Also I suggest to move more details from Section C4 into the main text in Section 3. The high level description in Section 3. \n", "We agree that these results are preliminary, and we are presently working on testing this approach on real human data. Nevertheless we feel that these results are sufficient to show the promise of our method and provide a contribution to the literature.\n\nWe think that the increase in PIR is somewhat surprising -- these are highly parameterized models which we are training with very few data points relative to standard deep network training paradigms (1000 images). That these highly parameterized models are able to fit a training set is not unexepcted. However, that the network is able to successfully generalize when targeting objectives based on single filters selected from the tens of thousands of filters and hundreds of millions of parameters in VGG, based on a dataset of only 1000 low-variability images was somewhat surprising and exciting to us.\n\nWe would also like to clarify that the VGG based PIRs are used as a \"ground-truth\" for which to optimize, but our PIR estimator model treats the PIR function as a black box -- this is why it would be easy to substitute in human data for the VGG features. Thus the correlation of different VGG features with human evaluations is not particularly relevant.\n", "Thank you for the comments, we have included some responses below, and added clarification to a few points in the paper.\n\n- Page 2- section 2- The reasoning that a deep-RL could not be more successful is not supported by any references and it is not convincing.\n\nWe did not say that deep RL could not be more successful, we said this is not fundamentally a reinforcement learning problem, and is higher-dimensional than typical RL problems. There is no repeated temporal component to the interactions, which is what RL techniques are fundamentally based upon.\n\n- Page 3- para 3 - mathematically the statement does not sound since the 2 expressions are exactly equivalent. The slight improvement may be achieved only by chance and be due to computational inefficiency, or changing a seed.\n\nMathematical equivalence does not imply computational equivalence -- For example there are many reasons (underflow, efficiency, etc.) that likelihood-based algorithms almost universally work with the log-likelihood rather than the direct likelihood, even though maximizing one is equivalent to maximizing the other. In our case, the curvature of the loss-function landscape could be quite dramatically changed, which could indeed result in the change in learning dynamics we observed.\n\n- Page 3- 2.2.It is highly motivating to use users feedback in the loop but it is poorly explained how actually the user's' feedback is involved if it is involved at all.\n\n We hope that this is clarified immediately below, in the PIR estimator model section. The user data is precisely what the PIR estimator model is trained on.\n\n- Page 4- sec 3 \".. it should be seen as a success\"; the claim is not supported well.o\n\nTo clarify, we are pointing out that if the objective function we give our model has adversarial weaknesses, we should not be disappointed that it exploits them -- this may be the most efficient solution to the problem of increasing the PIR. It is a success to learn the objective function given, including its flaws. 
The remaining question is whether the model could learn objectives that do not have adversarial features.\n\n- Page 4- sec 3.2- last paragraph.\nThis claim lacks scientific support [...] The claim seems like a subjective understanding of conscious perception and unconscious perception of affective stimuli is totally disregarded. \n\nIt is complicated to untangle conscious and unconscious processes (see e.g. \"Unconscious influences on decision making: A critical review\", Newell & Shanks, Behavioral & Brain Sciences, 2014), and it is certainly beyond the scope of our paper to do so. However, since both are presumably supported by the visual cortex processes that are well modeled by the networks we are using as objectives (see Yamins et al., 2014, cited in our paper), we believe that our procedure is actually fairly well motivated as far as neuroscience's understanding of perception is concerned.\n\n- Page 4. 3.3) \"Note that.. outdoor images\" this is implicitly adding the designers' bias to the results. The statement lacks scientific support.\n- Page 4. 3.3) the importance of texture and shape is disregarded. The architecture may lead in overfitting to users' feedback (being over-fit on the data with PIR measures)\n\nWe agree that the VGG tasks offer a more unbiased and complete set of features, that is why we ran them as well. We did not disregard these features. The color tasks just offer the opportunity to focus in on a specific feature and qualitatively evaluate performance visually on an intuitive and salient feature. The balance between fitting user feedback and fitting the original distribution can be shifted by changing the weights in the loss, as we mentioned in the discussion.\n\n- Page 6-Sec 4.2) \" It had more difficulty optimizing for the three-color result\" why? \n\nIn the supplementary analyses, we show that the models improvement of the PIR is highly correlated with the initial variability in PIR (which is sensible, in general a function is better estimated by points that vary than points that are highly similar). The three color objectives simply have less initial variability in PIR, which makes it more difficult for the model to improve them. (This is also sensible -- there are not many natural images that appear with three different vertical color stripes, so of course there would be little variability in an objective which looks for three vertical color stripes.) In supplementary figure 5, you can see that the three color results are not outliers by any means when compared to the other objectives with similarly low initial variability. \n\n- The expectation which is set in the abstract and the introduction of the paper is higher than the experiments shown in the Experimental setup.\n\nWe agree that these results are preliminary, nevertheless we feel that they are sufficient to show the promise of our method and provide a contribution to the literature.", "PIR:\n\n- I found it not surprising that the PIR changes when a highly parameterised model is trained for this task. The PIR value comes from a separate network not directly accessible during training time, nonetheless I would have been surprised to not see an increase.\n\nWe think that the increase in PIR is somewhat surprising -- these are highly parameterized models which we are training with very few data points relative to standard deep network training paradigms (1000 images). That these highly parameterized models are able to fit a training set is not unexepcted. 
However, that the network is able to successfully generalize when targeting objectives based on single filters selected from the tens of thousands of filters and hundreds of millions of parameters in VGG, based on a dataset of only 1000 images (sampled from a mediocre generative model) was somewhat surprising to us.\n\n- Technically it was a bit unclear to me how the objective is defined. There is a PIR per level and filter (as defined in C4) but in the setup the L_{PIR} was mentioned to be a scalar function, how are the values then summarized? There is a PIR per level and feature defined in C4.\n\nThe PIR values defined in section C4 are the \"ground-truth\" values, that is, they represent the function(s) we are trying to get our PIR estimator to approximate. The l2 norms in the definitions make these ground-truth PIR values a scalar per-image for a given layer/filter choice. Each layer/filter choice represents a possible ground-truth function corresponding to a single point in the first panel of figure 2, we try many of these layer/filter ground-truths to evaluate the robustness of our approach.\n\nFor a given layer/filter combination, we take the scalar ground-truth values per image and use them to draw noisy observations (simulating noisy data collection in the real world). We then use these to train the PIR estimator. We use this estimator as an additional loss, and backpropagate through it (with weights frozen) to improve the generator. Just as with the other losses in our experiment, we take the mean across the batch to reduce from a scalar value for each image (produced by the PIR estimator) to a single scalar loss. Hopefully this clarifies things.\n\n- What does the PIR with the model in Section 3 stand for? Shouldn’t be something like “uniqueness”, that is how unique is an image in a batch of images be a better indicator? Besides, the intent of what possibly interesting PIR examples will be was unclear.\nE.g., the statement at the end of 2.1 is unclear at that point in the document. How is the PIR drawn exactly? What does it represent? Is there a PIR per image? It becomes clear later, but I suggest to revisit this description in a new version.\n\nWe wished to keep things as generic as possible, because our approach could be useful for many applications. Uniqueness might be one such possibility. Others you might consider are how aesthetically pleasing an image was (as assessed by human raters), how likely someone is to buy a monitor in a store when it displays a given image (as assessed by relative sale rates), how much users set a generated image as a background of their phone, etc. Going a little beyond the details of our model, this could also be used for services that stream generated music (skip rates are essentially a negative interaction rate), or any other services which generate content. Essentially any situation in which humans interact with the products of a generative model is a possible source of a \"PIR\" from our model's perspective. We hope these examples are helpful.\n\n\n\nLack of human experiments:\n\nWe agree that these results are preliminary, and we are presently working on testing this approach on real human data. Nevertheless we feel that these results are sufficient to show the promise of our method and provide a contribution to the literature." ]
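The response above outlines the training structure: a PIR-estimator network is fit to noisy (image, PIR) observations, then frozen and back-propagated through as an extra generator loss, averaged over the batch. The sketch below only illustrates that loop under assumed placeholders -- `generator`, `discriminator` and `pir_estimator` are toy linear modules, `lambda_pir` is an arbitrary weight, and PyTorch is assumed; none of this reflects the paper's actual architecture.

```python
import torch
import torch.nn as nn

latent_dim, img_dim, batch = 16, 64, 8
generator = nn.Sequential(nn.Linear(latent_dim, img_dim), nn.Tanh())
discriminator = nn.Linear(img_dim, 1)
pir_estimator = nn.Sequential(nn.Linear(img_dim, 1), nn.Sigmoid())  # pretrained on noisy PIR data in practice

for p in pir_estimator.parameters():    # freeze the PIR estimator
    p.requires_grad_(False)

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()
lambda_pir = 1.0                        # trades off realism against estimated PIR

opt_g.zero_grad()
fake = generator(torch.randn(batch, latent_dim))
adv_loss = bce(discriminator(fake), torch.ones(batch, 1))  # standard non-saturating GAN term
pir_loss = -pir_estimator(fake).mean()                     # maximise mean estimated PIR over the batch
(adv_loss + lambda_pir * pir_loss).backward()              # gradients flow through the frozen estimator
opt_g.step()
```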
[ 4, 5, 4, -1, -1, -1 ]
[ 5, 3, 4, -1, -1, -1 ]
[ "iclr_2018_HJr4QJ26W", "iclr_2018_HJr4QJ26W", "iclr_2018_HJr4QJ26W", "SyRiTtIgG", "r1P_bO5eG", "SyuHdLJ-z" ]
iclr_2018_SJVHY9lCb
Learning to Select: Problem, Solution, and Applications
We propose a "Learning to Select" problem, in which the best item must be selected from a candidate set of flexible size. Decisions are made based not only on the properties of each candidate, but also on the environment to which the candidates belong. For example, job dispatching in a manufacturing factory is a typical "Learning to Select" problem. We propose a Variable-Length CNN, which combines the classification power of hidden CNN features with the flexible-input idea from Learning to Rank algorithms. It not only handles a flexible number of candidates using a dynamic computation graph, but is also computationally efficient because it only builds a network of the size needed for the situation at hand. We applied the algorithm to the job dispatching problem using dispatching log data obtained from a fine-tuned virtual factory. Our proposed algorithm shows considerably better performance than comparable algorithms.
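As a rough illustration of the selection setup described in this abstract -- one shared scorer applied to however many candidates are present, normalised by a softmax over that candidate set -- the sketch below uses a plain feed-forward scorer as a stand-in for the paper's Variable-Length CNN; the feature sizes and the `CandidateScorer` module are invented for the example and PyTorch is assumed.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CandidateScorer(nn.Module):
    """Scores each (candidate, environment) pair; works for any number of candidates."""
    def __init__(self, cand_dim, env_dim, hidden=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(cand_dim + env_dim, hidden),
                                 nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, candidates, env):
        # candidates: (k, cand_dim) with k varying per instance; env: (env_dim,)
        k = candidates.size(0)
        x = torch.cat([candidates, env.expand(k, -1)], dim=1)
        return self.net(x).squeeze(-1)          # (k,) unnormalised scores

scorer = CandidateScorer(cand_dim=5, env_dim=3)
cands, env, best = torch.randn(7, 5), torch.randn(3), torch.tensor(2)
scores = scorer(cands, env)
loss = F.cross_entropy(scores.unsqueeze(0), best.unsqueeze(0))  # softmax over the 7 candidates
loss.backward()
print("selected candidate:", int(scores.argmax()))
```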
rejected-papers
Three reviewers recommended rejection and there was no rebuttal.
val
[ "r1uDK-Yez", "rJW87QaxM", "SyAd7DuZf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper proposed a new framework called `Learning to select’, in which a best candidate needs to be identified in the decision making process such as job dispatching. A CNN architecture is designed, called `Variable-Length CNN’, to solve this problem.\n\nMy major concern is on the definition of the proposed concept of `learning-to-select’. Essentially, I’ve not seen its key difference from the classification problem. While `even in the case of completely identical candidates, the label can be 1 in some situations, and in some other situations the label can be 0’, why not including such `situations’ into your feature vector (i.e., x)? Once you do it, the gap between learning to select and classification will vanish. If this is not doable, you should better make more discussions, especially on what the so-called `situations’ are. Furthermore, the application scope of the proposed framework is not very well discussed. If it is restricted to job dispatching scenarios, why do we need a new concept “learning to select”?\n\nThe proposed model looks quite straightforward. Standard CNN is able to capture the variable length input as is done in many NLP tasks. Dynamic computational graph is not new either. In this sense, the technical novelty of this work is somehow limited.\n\nThe experiments are weak in that the data are simulated and the baselines are not strong. I’ve not gained enough insights on why the proposed model could outperform the alternative approaches. More discussions and case studies are sorely needed.\n", "This paper proposes a \"Learning to Select\" problem which essentially is to select the best among a flexible size of candidates, which is in fact Learning to Rank with number of items to select as 1. To be able to efficiently train the model without wasting time on the items that are not candidates, the authors applied an existing work in literature named Dynamic Computation Graph and added convolutional layer, and showed that this model outperforms baseline methods such as CNN, fully-connected, Rank-SVM etc. \n\nAs this paper looks to me as an simple application of an existing approach in literature to a real-world problem, novelty is the main concern here. Other concerns include:\n1. Section 2. It would be good to include more details of DCG to make the papers more complete and easier to read.\n2. It looks to me that the data used in experimental section is simulated data, rather than real data. \n3. It looks to me that baselines such as CNN did not perform well, mainly because in test stage, some candidates that CNN picked as the best do not actually qualify. However, this should be able to be fixed easily by picking the best candidate that qualify. Otherwise I feel it is an unfair comparison to the proposed method. ", "The authors state\n\"This problem is simply a matter of choosing the best candidate among the given candidates in a specific situation.\" -> It would be nice to have examples of what constitutes a \"candidate\" or \"situation\"?\n\nThe problem definition and proposed approach needs to be made more precise. For e.g. statements like \"\nFor the problem of classifying data into several classes, neural networks have dominated with high performance. However, the neural networks are not an adequate answer for every problem.\" are not backed with enough evidence. \n\nA detailed section on related work would be beneficial.\n" ]
[ 4, 4, 4 ]
[ 4, 4, 5 ]
[ "iclr_2018_SJVHY9lCb", "iclr_2018_SJVHY9lCb", "iclr_2018_SJVHY9lCb" ]
iclr_2018_HJDUjKeA-
Learning objects from pixels
We show how discrete objects can be learnt in an unsupervised fashion from pixels, and how to perform reinforcement learning using this object representation. More precisely, we construct a differentiable mapping from an image to a discrete tabular list of objects, where each object consists of a differentiable position, feature vector, and scalar presence value that allows the representation to be learnt using an attention mechanism. Applying this mapping to Atari games, together with an interaction net-style architecture for calculating quantities from objects, we construct agents that can play Atari games using objects learnt in an unsupervised fashion. During training, many natural objects emerge, such as the ball and paddles in Pong, and the submarine and fish in Seaquest. This gives the first reinforcement learning agent for Atari with an interpretable object representation, and opens the avenue for agents that can conduct object-based exploration and generalization.
rejected-papers
All three reviewers recommended rejection and there was no rebuttal.
train
[ "Sk9f1tBlz", "BJTS11qlz", "SJ6V5roeG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper learns to construct masks and feature representations from an input image, in order to represent objects. This is applied to the relatively simple domain of Atari games video input (compared to natural images). The paper is completely inadequate in respect to related work; it re-invents known techniques like non-maximum suppression and matching for tracking; fails to learn convincing objects according to visual inspection; and fails to compare with earlier methods for these tasks. (The comment above about re-invention is the most charitable intepretation -- the worst case would be using these ideas without citation.)\n\n\n1) The related work section is outrageous, containing no references before 2016. Do the authors think researchers never tried to do this task before then? This is the bad side of the recent deep nets hype, and ICLR is particularly susceptible to this. Examples include\n\n@article{wang-adelson-94,\n author = \"Wang, J. Y. A. and Adelson, E. H.\",\n title = {{Representing Moving Images with Layers}},\n journal = {{IEEE Transactions on Image Processing}},\n year = \"1994\",\n volume = \"3(5)\",\n pages = {625-638}\n}\nsee http://persci.mit.edu/pub_pdfs/wang_tr279.pdf\n\nand\n\n@article{frey-jojic-03,\n author = {Frey, B. J. and Jojic, N.},\n title = {{Transformation Invariant Clustering Using the EM Algorithm}},\n journal = {IEEE Trans Pattern Analysis and Machine Intelligence},\n year = {2003},\n volume = {25(1)},\n pages = {1-17}\n}\nwhere mask and appearances for each object of interest are learned. There is a literature which follows on from the F&J paper. The methods used in Frey & Jojic are different from what is proposed in the paper, but there needs to be comparisons.\n\nThe AIR paper also contains references to relevant previous work.\n\n2) p 3 center -- this seems to be reinventing non-maximum suppression\n\n3) p 4 eq 3 and sec 3.2 -- please justify *why* it makes sense to use\nthe concrete transform. Can you explain better (e.g. in the supp mat)\nthe effect of this for different values of q_i?\n\n4) Sec 3.5 Matching objects in successive frames using the Hungarian \nalgorithm is also well known, e.g. it is in the matlab function\nassignDetectionsToTracks .\n\n5) Overall: in this paper the authors come up with a method for learning objects from Atari games video input. This is a greatly restricted setting compared to real images. The objects learned as shown in Appendix A are quite unconvincing, e.g. on p 9. For example for Boxing why are the black and white objects broken up into 3 pieces, and why do they appear coloured in col 4?\n\nAlso the paper lacks comparisons to other methods (including ones from before 2016) which have tackled this problem.\n\nIt may be that the methods in this paper can outperform previous ones -- that would be interesting, but it would need a lot of work to address the issues raised above.\n\nText corrections:\n\np 2 \"we are more precise\" -> \"we give more details\"\n\np 3 and p 2 -- local maximum (not maxima) for a single maximum. [occurs many times]\n", "The paper proposes a method for learning object representations from pixels and then use such representations for doing reinforcement learning. This method is based on convnets that map raw pixels to a mask and feature map. The mask contains information about the presence/absence of objects in different pixel locations and the feature map contains information about object appearance. 
\n\nI believe that the current method can only learn and track simple objects in a constant background, a problem which is well-solved in computer vision. Specifically, a simple method such as \"background subtraction\" can easily infer the mask (the outlying pixels which correspond to moving objects) while simple tracking methods (see a huge literature over decades on computer vision) can allow to track these objects across frames. The authors completely ignore all this previous work and their \"related work\" section starts citing papers from 2016 and onwards! Is it any benefit of learning objects with the current (very expensive) method compared to simple methods such as \"background subtraction\"? \n\nFurthermore, the paper is very badly written since it keeps postponing the actual explanations to later sections (while these sections eventually refer to the appendices). This makes reading the paper very hard. For example, during the early sections you keep referring to a loss function which will allow for learning the objects, but you never really give the form of this loss (which you should as soon as you mentioning it) and the reader needs to search into the appendices to find out what is happening. \n\nAlso, experimental results are very preliminary and not properly analyzed. For example the results in Figure 3 are unclear and need to be discussed in detail in the main text. ", "The paper proposes a neural architecture to map video streams to a discrete collection of objects, without human annotations, using an unsupervised pixel reconstruction loss. The paper uses such object representation to inform state representation for reinforcement learning. Each object is described by a position, appearance feature and confidence of existence (presence). The proposed network predicts a 2D mask image, where local maxima correspond to object locations, and values of the maxima correspond to presence values. The paper uses a hard decision on the top-k objects (there can be at most k objects) in the final object list, based on the soft object presence values (I have not understood if these top k are sampled based on the noisy presence values or are thresholded, if the authors could kindly clarify). \n The final presence values though are sampled using Gumbell-softmax.\n\nObjects are matched across consecutive frames using non parametric (not learnable) deterministic matching functions, that takes into account the size and appearance of the objects. \n\nFor the unsupervised reconstruction loss, a static background is populated with objects, one at a time, each passing its state and feature through deconvolution layers to generate RGB object content.\n\nThen a policy network is trained with deep Q learning whose architecture takes into account the objects in the scene, in an order agnostic way, and pairwise features are captured between pairs of objects, using similar layers as visual interaction nets.\n\nPros\nThe paper presents interesting ideas regarding unsupervised object discovery\n\nCons:\nThe paper shows no results. The objects discovered could be discovered with mosaicing (since the background is static) and background subtraction. \n" ]
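One concrete piece the last review mentions is the frame-to-frame association of object lists via the Hungarian algorithm. A minimal version of that matching step is sketched below; the equal-sized object lists and the `w_feat` weighting of position versus appearance cost are assumptions for the example, not the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_objects(pos_a, feat_a, pos_b, feat_b, w_feat=0.5):
    # pos_*: (n, 2) object positions; feat_*: (n, d) appearance features
    d_pos = np.linalg.norm(pos_a[:, None, :] - pos_b[None, :, :], axis=-1)
    d_feat = np.linalg.norm(feat_a[:, None, :] - feat_b[None, :, :], axis=-1)
    cost = d_pos + w_feat * d_feat
    rows, cols = linear_sum_assignment(cost)    # Hungarian / Kuhn-Munkres assignment
    return list(zip(rows.tolist(), cols.tolist()))

rng = np.random.RandomState(0)
pos_t, feat_t = rng.rand(4, 2), rng.rand(4, 8)
pos_t1 = pos_t + 0.01 * rng.randn(4, 2)         # the same objects, slightly moved
print(match_objects(pos_t, feat_t, pos_t1, feat_t))
```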
[ 3, 4, 4 ]
[ 4, 3, 4 ]
[ "iclr_2018_HJDUjKeA-", "iclr_2018_HJDUjKeA-", "iclr_2018_HJDUjKeA-" ]
iclr_2018_SkAK2jg0b
An Out-of-the-box Full-network Embedding for Convolutional Neural Networks
Transfer learning for feature extraction can be used to exploit deep representations in contexts where there is very little training data, where computational resources are limited, or when tuning the hyper-parameters needed for training is not an option. While previous contributions to feature extraction propose embeddings based on a single layer of the network, in this paper we propose a full-network embedding which successfully integrates convolutional and fully connected features coming from all layers of a deep convolutional neural network. To do so, the embedding normalizes features in the context of the problem and discretizes their values to reduce noise and regularize the embedding space. Significantly, this also reduces the computational cost of processing the resultant representations. The proposed method is shown to outperform single-layer embeddings on several image classification tasks, while also being more robust to the choice of the pre-trained model used for obtaining the initial features. The performance gap in classification accuracy between thoroughly tuned solutions and the full-network embedding is also reduced, which makes the proposed approach a competitive solution for a large set of applications.
rejected-papers
Three reviewers recommended rejection, and there was no rebuttal.
train
[ "BkNxXL5ef", "H1n46uTxf", "By2YMc3WM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes an out-of-the-box embedding for image classification task. Instead of taking one single layer output from pre-trained network as the feature vector for new dataset, the method first extracts the activations from all the layers, then runs spatial average pooling on all convolutional layers, then normalizes the feature and uses two predefined thresholds to discretize the features to {-1, 0, 1}. Final prediction is learned through a SVM model using those embeddings. Experimental results on nine different datasets show that this embedding outperforms baseline of using one single layer. I think in general this paper lacks novelty and it shouldn't be surprising that activations from all layers should be more representative than one single layer representation. Moreover, in Table 4, it shows that discretization actually hurts the performance. It is also very heuristic to choose the two thresholds. \n\n", "The paper addresses the scenario when using a pretrained deep network as learnt feature representation for another (small) task where retraining is not an option or not desired. In this situation it proposes to use all layers of the network to extract feature from, instead of only one layer. \nThen it proposes to standardize different dimensions of the features based on their response on the original task. Finally, it discretize each dimension into {-1, 0, 1} to compress the final concatenated feature representation. \nDoing this, it shows improvements over using a single layer for 9 target image classification datasets including object, scene, texture, material, and animals.\n\nThe reviewer does not find the paper suitable for publication at ICLR due to the following reasons:\n- The paper is incremental with limited novelty.\n- the results are not encouraging\n- the pipeline of standardization, discretization is relatively costly, the final feature vector still large. \n- combining different layers, as the only contribution of the paper, has been done in the literature before, for instance:\n“The Treasure beneath Convolutional Layers: Cross-convolutional-layer Pooling\nfor Image Classification” CVPR 2016\n", "Paper claims to propose a deep transfer learning method. There are several reasons not to consider this paper for ICLR at this point.\n\nPaper is badly written and the problem it tries to solve is not clearly stated.\nProposed feature embedding is incremental (lack of novelty and technical contribution)\nObtained results are encouraging but not good enough.\nLack of experimental validation.\nI think paper can be improved significantly and is not ready for publication at this point.\n\n" ]
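For reference, the pipeline the abstract and reviews describe -- spatial average pooling of each convolutional layer, concatenation with fully connected features, per-feature standardisation, and discretisation to {-1, 0, 1} before a linear SVM -- can be sketched roughly as below. This is not the authors' code: the two thresholds, the toy activation shapes, and the use of batch statistics in place of training-set statistics are assumptions made for the illustration, and scikit-learn is assumed available.

```python
import numpy as np
from sklearn.svm import LinearSVC

def full_network_embedding(conv_maps, fc_feats, t_low=-0.25, t_high=0.15):
    # conv_maps: list of arrays shaped (n, h, w, c); fc_feats: (n, d)
    pooled = [m.mean(axis=(1, 2)) for m in conv_maps]         # spatial average pooling
    raw = np.concatenate(pooled + [fc_feats], axis=1)         # (n, total_features)
    z = (raw - raw.mean(axis=0)) / (raw.std(axis=0) + 1e-8)   # per-feature standardisation
    disc = np.zeros_like(z)
    disc[z > t_high] = 1.0                                    # characteristic feature
    disc[z < t_low] = -1.0                                    # atypically low feature
    return disc

rng = np.random.RandomState(0)
conv_maps = [rng.randn(50, 8, 8, 16), rng.randn(50, 4, 4, 32)]
X = full_network_embedding(conv_maps, rng.randn(50, 64))
y = rng.randint(0, 3, size=50)
print("train accuracy:", LinearSVC(max_iter=5000).fit(X, y).score(X, y))
```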
[ 3, 4, 4 ]
[ 4, 5, 5 ]
[ "iclr_2018_SkAK2jg0b", "iclr_2018_SkAK2jg0b", "iclr_2018_SkAK2jg0b" ]
iclr_2018_BkoCeqgR-
On the Construction and Evaluation of Color Invariant Networks
This is an empirical paper which constructs color invariant networks and evaluates their performance on a realistic data set. The paper studies the simplest possible case of color invariance: invariance under pixel-wise permutation of the color channels. Thus the network is aware not of an object's specific color, but of its colorfulness. The data set introduced in the paper consists of images showing crashed cars, from which ten classes were extracted. An additional annotation was added, labeling whether the car shown was red or non-red. The networks were evaluated by their performance on the classification task. Using the color annotation, we altered the color ratios in the training data and analyzed the generalization capabilities of the networks on the unaltered test data. We further split the test data into red and non-red cars and performed a similar evaluation. It is shown in the paper that a pixel-wise ordering of the rgb-values of the images performs better, or at least similarly, for small deviations from the true color ratios. The limits of these networks are also discussed.
rejected-papers
Three reviewers recommend rejection and there is no rebuttal.
train
[ "HJzYXIOxG", "B1eq0Hqlz", "rJrWUW0gM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper proposes and evaluates a method to make neural networks for image recognition color invariant.\n\nThe contribution of the paper is: \n - some proposed methods to extract a color-invariant representation\n - an experimental evaluation of the methods on the cifar 10 dataset\n - a new dataset \"crashed cars\"\n - evaluation of the best method from the cifar10 experiments on the new dataset\n\nPros: \n - the crashed cars dataset is interesting. The authors have definitely found an interesting untapped source of interesting images.\n\n\nCons: \n- The authors name their method order network but the method they propose is not really parts of the network but simple preprocessing steps to the input of the network. \n- The paper is incomplete without the appendices. In fact the paper is referring to specific figures in the appendix in the main text.\n - the authors define color invariance as a being invariant to which specific color an object in an image does have, e.g. whether a car is red or green, but they don't think about color invariance in the broader context - color changes because of lighting, shades, ..... Also, the proposed methods aim to preserve the \"colorfullness\" of a color. This is also problematic, because while the proposed method works for a car that is green or a car that is red, it will fail for a car that is black (or white) - because in both cases the \"colorfulness\" is not relevant. Note that this is specifically interesting in the context of the task at hand (cars) and many cars being, white, grey (silver), or black. \n- the difference in the results in table 1 could well come from the fact that in all of the invariant methods except for \"ord\" the input is a WxHx1 matrix, but for \"ord\" and \"cifar\" the input is a \"WxHx3\" matrix. This probably leads to more parameters in the convolutions. \n- the results in the figure 4: it's very unlikely that the differences reported are actually significant. It appears that all methods perform approximately the same - and the authors pick a specific line (25k steps) as the relevant one in which the RGB-input space performs best. The proposed method does not lead to any relevant improvement.\nFigure 6/7: are very hard to read. I am still not sure what exactly they are trying to say.\n\nMinor comments: \n - section 1: \"called for is network\" -> called for is a network\n - section 1.1: And and -> And\n - section 1.1: Appendix -> Appendix C\n - section 2: Their exists many -> There exist many\n - section 2: these transformation -> these transformations\n - section 2: what does \"the wallpaper groups\" refer to? \n - section 2: are a groups -> are groups\n - section 3.2: reference to a non-existing figure\n - section 3.2/Training: 2499999 iterations = steps? \n - section 3.2/Training: longer as suggested -> longer than suggested\n\n", "The authors investigate a modified input layer that results in color invariant networks. The proposed methods are evaluated on two car datasets. It is shown that certain color invariant \"input\" layers can improve accuracy for test-images from a different color distribution than the training images.\n\n\nThe proposed assumptions are not well motivated and seem arbitrary. Why is using a permutation of each pixels' color a good idea?\n\nThe paper is very hard to read. The message is unclear and the experiments to prove it are of very limited scope, i.e. 
one small dataset with the only experiment purportedly showing generalization to red cars.\n\nSome examples of specific issues:\n- the abstract is almost incomprehensible and it is not clear what the contributions are\n- Some references to Figures are missing the figure number, eg. 3.2 first paragraph, \n- It is not clear how many input channels the color invariant functions use, eg. p1 does it use only one channel and hence has fewer parameters?\n- are the training and testing sets all disjoint (sec 4.3)?\n- at random points figures are put in the appendix, even though they are described in the paper and seem to show key results (eg \"tested on nored-test\")\n- Sec 4.6: The explanation for why the accuracy drops for all models is not clear. Is it because the total number of training images drops? If that's the case the whole experimental setup seems flawed.\n- Sec 4.6: the authors refer to the \"order net\" beating the baseline, however, from Fig 8 (right most) it appears as if all models beat the baseline. In the conclusion they say that weighted order net beats the baseline on all three test sets w/o red cars in the training set. Is that Fig 8 @0%? The baseline seems to be best performing on \"all cars\" and \"non-red cars\"\n\nIn order to be at an appropriate level for any publication the experiments need to be much more general in scope.\n", "The authors test a CNN on images with color channels modified (such that the values of the three channels, after modification, are invariant to permutations).\n\nThe main positive point is that the performance does not degrade too much. However, there are several important negative points which should prevent this work, as it is, from being published.\n\n1. Why is this type of color channel modification relevant for real life vision? The invariance introduced here does not seem to be related to any real world phenomenon. The nets, in principle, could learn to recognize objects based on shape only, and the shape remains stable when the color channels are changed.\n\n2. Why is the crash car dataset used in this scenario? It is not clear to me why this types of theoretical invariance is tested on such as specific dataset. Is there a real reason for that?\n\n3. The writing could be significantly improved, both at the grammatical level and the level of high level organization and presentation. I think the authors should spend time on better motivating the choice of invariance used, as well as on testing with different (potentially new) architectures, color change cases, and datasets.\n\n4. There is no theoretical novelty and the empirical one seems to be very limited, with less convincing results." ]
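The "ord" preprocessing the first review refers to -- a pixel-wise ordering of the RGB values -- can be written in a few lines; the sketch below is one plausible reading of that operation, not the authors' implementation, and shows why the resulting input is invariant to any pixel-wise permutation of the colour channels.

```python
import numpy as np

def channel_order_input(img):
    # img: (h, w, 3); sort the three colour values at every pixel
    return np.sort(img, axis=-1)

rng = np.random.RandomState(0)
img = rng.rand(32, 32, 3)
permuted = img[..., [2, 0, 1]]                  # a pixel-wise permutation of the channels
assert np.allclose(channel_order_input(img), channel_order_input(permuted))
print("ordered representation is identical under channel permutation")
```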
[ 4, 3, 3 ]
[ 4, 4, 4 ]
[ "iclr_2018_BkoCeqgR-", "iclr_2018_BkoCeqgR-", "iclr_2018_BkoCeqgR-" ]
iclr_2018_Bym0cU1CZ
Towards Interpretable Chit-chat: Open Domain Dialogue Generation with Dialogue Acts
Conventional methods model open domain dialogue generation as a black box through end-to-end learning from large scale conversation data. In this work, we take a first step toward opening the black box by introducing dialogue acts into open domain dialogue generation. The dialogue acts are designed to be general and to reveal how people engage in social chat. Inspired by analysis of real data, we propose jointly modeling dialogue act selection and response generation, performing learning with human-human conversations tagged by a dialogue act classifier and with a reinforcement learning approach that further optimizes the model for long-term conversation. With the dialogue acts, we not only achieve significant improvements over state-of-the-art methods on response quality for given contexts and on long-term conversation in both machine-machine simulation and human-machine conversation, but are also able to explain why such improvements can be made.
rejected-papers
This work takes dialogue acts into account to generate responses in a human-machine conversation. However, incorporating dialogue acts into open-domain dialogue was already the focus of Zhao et al's ACL 2017 paper, Learning Discourse-level Diversity for Neural Dialog Models using Conditional Variational Autoencoders, and using dialogue acts in a policy for human-machine conversation was also an idea that already appeared in Serban et al 2017, A Deep Reinforcement Learning Chatbot. Despite the authors' response that tries to adjust their claims and incorporate a more thorough overview, I encourage the authors to re-work their research with a much more careful and reliable examination of previous work and how their effort should be understood in that more comprehensive context.
val
[ "rkBh-NNSf", "rknNBk4BM", "SJ7bhA2fM", "rJiRgVkSz", "rJNoU0uEf", "BkY9B2d4G", "ry2CUfcxz", "HklWe9qxz", "ryiSW8nef", "BknHTJTfz", "SydD_y6fG", "r1aPFC2zz" ]
[ "author", "public", "author", "author", "author", "public", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "Thank you for reminding us of this paper.\n\nThis is a system description paper in which the system is the one built by MILA for the 1st Alexa Prize competition. In the system, dialogue acts are used as one of the 1458 features for learning a response selection model and as a special feature in the subsequent abstract discourse MDP model. \n\nBecause the work builds a system for a competition, it is about an ensemble of response providers from multiple sources. Therefore, the policy is a response selection strategy over a set of candidates given the existing context. For details, please refer to the second paragraph of Section 4 of the paper. We would say that dialogue acts are not used to guide the generation of responses, as most responses in the related work are provided by other sources and the system only cares about how to select one from the candidates; nor does the work use dialogue acts as policies to manage the flow of conversation. Dialogue acts in that work are extra features in the learning of response selection, even in the abstract discourse MDP. As evidence, both the states and the actions in their models (Equation (5) and Equation (20)) are not related to dialogue acts, but are conversation histories and responses respectively. In the abstract discourse MDP, dialogue acts are used as one of the 3 features to guide training data sampling (Equation (18)) for the learning of the response selection model. Although the work aims to achieve engagement in social chat, it provides no insight into how to understand engagement with dialogue acts, let alone how to imitate such behaviors with dialogue acts. \n\nIn summary, we would say the same thing as we commented on the ACL work:\nIt is NOT a study of how dialogue acts can be used to interpret and control dynamic human-machine interactions as policies, NOR is it a study of how the generated responses are affected by the dialogue acts, which are the major contributions of our work. \n\nDetails:\n1. Dialogue acts. The dialogue acts in the related work mainly come from the existing 42 acts in the Switchboard Corpus, which reflect the semantics of single utterances. Our dialogue acts, however, are designed to describe how humans behave in order to keep their social interactions going. Therefore, they are strongly connected to human behavior with respect to conversational context. The dialogue acts in the related work are used as features for response selection, while our dialogue acts are used to explain human interactions and to control the generation of responses in order to keep the conversation going. \n\n2. Insights. No insights into what leads to engagement are provided in the related work; it just reports evaluation results from the competition. We provide insights from the analysis of human interactions and find, with quantitative evidence, that context switch and question are key acts for engagement in social chat.\n\n3. Model. The related work learns a response selection model, while we learn response generation. Therefore, we can show how different types of responses can be generated under different dialogue acts. \n\n4. Learning. 
The related work optimizes according to human ratings (from both AMT and Alexa users), while we optimize our model by encouraging long and reasonable conversations through self-play, which is more cost-effective.\n\nAgain, we would like to show our respect to the work and have cited it in the new version", "Besides the ACL 2017 paper that has been mentioned before, another recent paper that seems relevant is Serban et al 2017, A Deep Reinforcement Learning Chatbot, https://arxiv.org/abs/1709.02349, which also uses dialogue acts to learn a policy in a bot-human conversation setting. Can the authors comment on how their submission relates to this prior work?", "Thank you for your valuable comments.\n\n1. Why we need reinforcement learning\n\nAs we have mentioned in the paper, open domain dialogue generation needs to be optimized for long-term engagement in practice. Yes, in supervised learning, we have large scale of human dialogues tagged with dialogue acts, but that does not mean the algorithm can learn how to keep conversation going from the data, as more than 45% training dialogues are not longer than 5 turns (described in the last paragraph of Section 3.1 in the new version). Supervised learning just learns a model by maximizing the likelihood of the observed data including the short dialogues. Dialogue acts are learned only according to the history, and no information of the future influence can flow in. Then, without an additional objective (i.e., Equation (8) in Section 3.2) and mechanism (optimizing for future success), how can we (explicitly) guarantee that the model is optimized for long-term engagement? Therefore, supervised learning is to learn human language and reinforcement learning is to further optimize the combination of dialogue acts in order to achieve long-term conversation.\n\nModel optimization with reinforcement learning is also encouraged by the experimental results. In Table 5, response diversity is significantly improved by RL (see the difference between RL-DAGM and SL-DAGM on distinct-1 and distinct-2), and in Table 4(b), with RL, both the dialogues from machine-machine simulation and human-machine test become longer. Moreover, as we have analyzed in the last paragraph of Section 4.3, it is because RL can promote context switch in interactions that the model, after optimized with RL, can lead to better engagement. All the results well support our motivation to learning with RL. \n\n2. >>> the formulation in equation 4 seems to be problematic\n\nThanks for pointing out this problem. We have modified Equation (4) in the previous version as Equation (4)+Equation (5) in the new version. Now the procedure of generation becomes more clear. \n\n3. >>>\"Simplify pr(ri|si,ai) as pr(ri|ai,ui−1,ui−2) since decoding natural language responses from long conversation history is challenging\" to my understanding, the only difference between the original and simplified model is the encoder part not the decoder part. Did I miss something\n\nYes, from a model perspective, the simplification here just changes the encoder. However, what we mean here is that it is difficult for an RNN to memorize long conversation history, and thus encoding long history means either the response given by the decoder is irrelevant to the early history, or the response will be messed up. \n\n4. 
>>>\"We train m(·, ·) with the 30 million crawled data through negative sampling.\" not sure I understand the connection between training $m(\\cdot, \\cdot)$ and the entire model\n\n $m(\\cdot, \\cdot)$ is pre-trained and used to estimate the reward function in Equation (9). This is the only connection between $m(\\cdot, \\cdot)$ and the entire model. We have clarified this in the paragraph after Equation (9).\n\n5. >>>the experiments are not convincing. At least, it should show the generation texts were affected about DAs in a systemic way. Only a single example in table 5 is not enough.\n\nThanks for your comments. We do three things to show how the generated texts are affected by dialogue acts:\n\n(1)\tWe move Table 7 in the previous version from Appendix to Section 4.2. Now the table is Table 5. In the table, one can see that with dialogue acts, the diversity of generated responses is significantly improved (corresponding to much larger distinct-1 and distinct-2). In the following explanation (the third paragraph of Section 4.2), we claim that this is one benefit of dialogue acts, as search space now becomes act × language. \n\n(2)\tWe add Section 4.4 where we compare responses from different dialogue acts using some metrics. The conclusion is that responses generated from CS.* are longer, more informative, and contain more new words than responses generated from CM.*, and statements and answers are generally more informative than questions in both CS.* and CM.*. Please refer to the new version of the paper to get more details.\n\n(3)\tIn the last paragraph of Section 4.3, we show that simulated dialogues without CS.* are much shorter than those with CS.* (SL: 4.78 v.s. 8.66, RL: 2.67 v.s., 8.18). The result indicates that if we remove CS.*, then the conversation engagement of our model may degrade to the baseline model. \n", "We updated the paper according to the comments about the existing work.\n\nSpecifically, in the new version, we made the following updates:\n\n1. We withdraw the claim of \"first work of using dialogue act in open domain dialogue generation\" and position our work as “we are the first who design dialogue acts to explain social interactions, control open domain response generation, and guide human-machine conversations.” (see the first paragraph of Related Work)\n\n2. We cite the ACL paper and clarify the difference from it at the end of the first paragraph of Related Work.\n\n3. We further emphasize our motivation on using RL in the last paragraph of Section 3.1 and the first paragraph of Section 3.2\n\n4. We correct some typos. For example: HERD-> HRED, VHERD->VHRED. ", "Thank you for leading us to the ACL paper. \n\nWhile overall the ACL paper is about how to model dialogues with latent variables using VAE techniques, which is very similar to the early work of Serban et al., https://arxiv.org/abs/1605.06069 (VHRED, this is also a baseline in our experiment), the paper does cover dialogue acts from an existing data set, the Switchboard Corpus, in the experiment part as an extra feature.\n\nWe would say that the ACL paper is about how to generate a reply for a static context. It is NOT a study of how dialogue acts can be used to interpret and control dynamic human-machine interactions as policies, NOR it is a study of how the generated responses will be affected by the dialogue acts, which are the major contributions of our work. 
\n\nHere, we would like to clarify the originality (contributions) of our paper and the difference with the ACL paper.\n\nORIGINALITY:\n1. The first study of how to design dialogue acts to understand how human behave and engage in social chat, instead of using dialogue acts to ground semantics of utterances. The contribution lies in filling the gap between end-to-end open domain dialogue modeling and task-oriented dialogue modeling with task specific dialogue acts, which is not covered by the ACL paper. \n\n2. Insights from the analysis of human interactions. The contribution lies in discovery of the role of context switch for engagement in social chat with quantitative evidences, which is not covered by the ACL paper.\n\n3. The first work on modeling the policy in open domain dialogue management with dialogue acts and learning the policy (i.e., the combination of dialogue acts in a conversation flow) for long-term conversation by reinforcement learning, which is not covered by the ACL paper.\n\n4. Empirical studies of how human-machine conversation is affected by the dialogue acts, from static generated text to dynamic interactions, which is not covered by the ACL paper. \n\n\nDIFFERENCE\n1. Goal. The goal of the ACL paper is to study how to leverage VAE techniques to address the\"one-to-many\" problem in open domain dialogues. The major contribution of the ACL paper also lies in the VAE framework, as summarized by the authors in the last paragraph of Introduction of the paper. Our goal is to INTERPRET engagement in human-human social interactions with dialogue acts and thus ENHANCE human-machine engagement in open domain conversations by combining the dialogue acts.\n\n2. Dialogue acts. Dialogue acts in ACL paper are extra features, therefore they come from an existing data set and follow a traditional scheme (42 acts). Our dialogue acts, however, are designed to describe how human perform in order to keep their social interactions. Therefore, they highly connect to human behavior regarding to conversational contexts and are not covered by any of the existing data sets. Dialogue acts in the ACL paper are used to measure how well the generated responses are and justify that the learned latent representations are reasonable. Our dialogue acts, however, are used to interpret and control the flow of human-machine interactions in order to achieve long-term conversation. \n\n3. Model. The ACL paper deals with static response generation (i.e., given a fixed context, how to generate a proper response). Therefore, only the dialogue act of the last turn (in training) and the predicted dialogue act of the response (in both training and test) are treated as extra features in latent representation learning and response decoding. There are no studies on how the dialogue acts are coordinated and thus affect the flow of conversation. While we deal with dynamic human-machine interactions, therefore, dialogue acts are used to model the policy of dialogue management and guide the flow of conversation. Coordination of dialogue acts across multiple turns is implicitly modeled in the policy network. Note that we also show results of static response generation. That is just to show that our model can give reasonable intermediate results in interactions.\n\n4. Learning method. The ACL paper learns a dialogue model with VAE. While we treat dialogue acts as a kind of strategies and use reinforcement learning to enhance engagement in human-machine interactions. 
\n\nAlthough there exists significant difference, we would like to show our respect to this ACL work. We upload a new version and make the following changes:\n1. We cite the paper and clarify the difference in Related Work\n2. We position our contributions on dialogue act design for interpretation and learning dialogue policies with the dialogue acts. \n3. We withdraw the claim of “first work of using dialogue act in open domain dialogue generation”, and change it to “we are the first who design dialogue acts to explain social interactions, control open domain response generation, and guide human-machine conversations.” (see the first paragraph of Related Work)\n", "The idea in this paper looks very similar to the idea from <Learning Discourse-level Diversity for Neural Dialog Models using Conditional Variational Autoencoders> which was presented at ACL'17: https://arxiv.org/abs/1703.10960. Especially, the idea of using dialog act in open domain dialogue generation, which is the main contribution of this paper, was firstly introduced in https://arxiv.org/abs/1703.10960\n\nI'd like the authors to clarify how they differ and would like to ask the reviewers to read https://arxiv.org/abs/1703.10960 and see how this affects your judgment of the submission.", "The authors use a distant supervision technique to add dialogue act tags as a conditioning factor for generating responses in open-domain dialogues. In their evaluations, this approach, and one that additionally uses policy gradient RL with discourse-level objectives to fine-tune the dialogue act predictions, outperform past models for human-scored response quality and conversation engagement.\nWhile this is a fairly straightforward idea with a long history, the authors claim to be the first to use dialogue act prediction for open-domain (rather than task-driven) dialogue. If that claim to originality is not contested, and the authors provide additional assurances to confirm the correctness of the implementations used for baseline models, this article fills an important gap in open-domain dialogue research and suggests a fruitful future for structured prediction in deep learning-based dialogue systems.\n\nSome points:\n1. The introduction uses \"scalability\" throughout to mean something closer to \"ability to generalize.\" Consider revising the wording here.\n2. The dialogue act tag set used in the paper is not original to Ivanovic (2005) but derives, with modifications, from the tag set constructed for the DAMSL project (Jurafsky et al., 1997; Stolcke et al., 2000). It's probably worth citing some of this early work that pioneered the use of dialogue acts in NLP, since they discuss motivations for building DA corpora.\n3. In Section 2.1, the authors don't explicitly mention existing DA-annotated corpora or discuss specifically why they are not sufficient (is there e.g. a dataset that would be ideal for the purposes of this paper except that it isn't large enough?)\n3. The authors appear to consider only one option (selecting the top predicted dialogue act, then conditioning the response generator on this DA) among many for inference-time search over the joint DA-response space. A more comprehensive search strategy (e.g. selecting the top K dialogue acts, then evaluating several responses for each DA) might lead to higher response diversity.\n4. The description of the RL approach in Section 3.2 was fairly terse and included a number of ad-hoc choices. 
If these choices (like the dialogue termination conditions) are motivated by previous work, they should be cited. Examples (perhaps in the appendix) might also be helpful for the reader to understand that the chosen termination conditions or relevance metrics are reasonable.\n5. The comparison against previous work is missing some assurances I'd like to see. While directly citing the codebases you used or built off of is fantastic, it's also important to give the reader confidence that the implementations you're comparing to are the same as those used in the original papers, such as by mentioning that you can replicate or confirm quantitative results from the papers you're comparing to. Without that there could always be the chance that something is missing from the implementation of e.g. RL-S2S that you're using for comparison.\n6. Table 5 is not described in the main text, so it isn't clear what the different potential outputs of e.g. the RL-DAGM system result from (my guess: conditioning the response generation on the top 3 predicted dialogue acts?)\n7. A simple way to improve the paper's clarity for readers would be to break up some of the very long paragraphs, especially in later sections. It's fine if that pushes the paper somewhat over the 8th page.\n8. A consistent focus on human evaluation, as found in this paper, is probably the right approach for contemporary dialogue research.\n9. The examples provided in the appendix are great. It would be helpful to have confirmation that they were selected randomly (rather than cherry-picked).", "The topic discussed in this paper is interesting. Dialogue acts (DAs; or some other semantic relations between utterances) are informative to increase the diversity of response generation. It is interesting to see how DAs are used for conversational modeling, however this paper is difficult for me to follow. For example:\n\n1) the caption of section 3.1 is about supervised learning, however the way of describing the model in this section sounds like reinforcement learning. Not sure whether it is necessary to formulate the problem with a RL framework, since the data have everything that the model needs as for a supervised learning.\n2) the formulation in equation 4 seems to be problematic\n3) \"simplify pr(ri|si,ai) as pr(ri|ai,ui−1,ui−2) since decoding natural language responses from long conversation history is challenging\" to my understanding, the only difference between the original and simplified model is the encoder part not the decoder part. Did I miss something?\n4) about section 3.2, again I didn't get whether the model needs RL for training.\n5) \"We train m(·, ·) with the 30 million crawled data through negative sampling.\" not sure I understand the connection between training $m(\\cdot, \\cdot)$ and the entire model.\n6) the experiments are not convincing. At least, it should show the generation texts were affected about DAs in a systemic way. Only a single example in table 5 is not enough.", "The paper describes a technique to incorporate dialog acts into neural conversational agents. This is very interesting work. Existing techniques for neural conversational agents essentially mimic the data in large corpora of message-response pairs and therefore do not use any notion of dialog act. A very important type of dialog act is \"switching topic\", often done to ensure that the conversation will continue. The paper describes a classifier that predicts the dialog act of the next utterance. 
The next utterance is then generated based on this dialog act. The paper also describes how to increase the relevance of responses and the length of conversations by self reinforcement learning. This is also very interesting. The empirical evaluation demonstrates the effectiveness of the approach. The paper is also well written. I do not have any suggestion for improvement. This is good work that should be published.", "We upload a new version of the paper in which we try our best to address the concerns from the reviewers. \nMajor revisions include:\n(1) We cited more work about dialogue acts in Section 2.1, and commented on the existing dialogue act corpus. \n(2) We changed Equation (4) in the previous version to Equation (4) and Equation (5). Now, the mechanism of dialogue generation becomes more clear and general. \n(3) We further justified the necessity of reinforcement learning at the end of Section 3.1 with the distribution of the data set. \n(4) To emphasize the effect of dialogue acts to generation, we moved Table 7 in Appendix in the previous version to the main text. Now the table becomes Table 5. \n(5) We added Section 4.4 where we systematically analyze how the generated text is affected by the dialogue acts with some automatic metrics. \n(6) We broke up some long paragraphs for ease of reading. ", "Thank you for your valuable comments.\n\n1.\tWe replace the word \"scalability\" in Introduction with other words (e.g., \"scale to new domains\"). \n \n2.\tWe follow your suggestions and cite related work about dialogue acts at the beginning of Section 2.1. We also mention a public DA corpora \"the Switchboard Corpus\" in the second paragraph of Section 2.1 and clarify that we build a new data set because no one has analyzed open domain dialogues with dialogue acts about conversational context before. \n\n3.\tWe modify Equation (4) in the previous version as Equation (4)+Equation (5). Now dialogue act selection in our model becomes more general and takes multiple strategies (top 1 and top K) as special cases. We can try dialogue generation with top K acts in our future work. \n\n4.\tWe break up some long paragraphs in Section 3.2 for ease of reading and cite (Li et al., 2016b) before the termination strategies, as some of them (e.g., regarding to repetitive turns) are inspired by the work. \n\n5.\tAlthough we use a different data set, the average number of turns of the simulated dialogues from RL-S2S in our work is very close to the number reported in (Li et al., 2016b). Our number is 4.36 (refer to the machine-machine column in Table 4(b)), while the number reported in (Li et al., 2016b) is 4.48. This might provide an additional evidence to the correctness of the implementation of the baseline model in the work.\n\n6.\tTable 5 in the previous version becomes Table 6 now. We describe the table right after it. Basically, SL-DAGM and RL-DAGM share the same text generation but differs on how they select dialogue acts, as we only optimize the policy network with RL. The response given by RL-DAGM comes from CS.Q (clarified after the generated response in Table 6), while the response given by SL-DAGM comes from CS.S. Both are top dialogue acts under the corresponding policy networks. \n\n7.\tWe follow your suggestions and break up long paragraphs. \n\n8.\tThe examples given in Appendix are picked randomly.\n", "Thank you for your comments" ]
[ -1, -1, -1, -1, -1, -1, 7, 4, 7, -1, -1, -1 ]
[ -1, -1, -1, -1, -1, -1, 3, 5, 4, -1, -1, -1 ]
[ "rknNBk4BM", "iclr_2018_Bym0cU1CZ", "HklWe9qxz", "iclr_2018_Bym0cU1CZ", "BkY9B2d4G", "iclr_2018_Bym0cU1CZ", "iclr_2018_Bym0cU1CZ", "iclr_2018_Bym0cU1CZ", "iclr_2018_Bym0cU1CZ", "iclr_2018_Bym0cU1CZ", "ry2CUfcxz", "ryiSW8nef" ]
iclr_2018_Bki1Ct1AW
Baseline-corrected space-by-time non-negative matrix factorization for decoding single trial population spike trains
Activity of populations of sensory neurons carries stimulus information in both the temporal and the spatial dimensions. This poses the question of how to compactly represent all the information that the population codes carry across all these dimensions. Here, we developed an analytical method to factorize a large number of retinal ganglion cells' spike trains into a robust low-dimensional representation that efficiently captures both their spatial and temporal information. In particular, we extended previously used single-trial space-by-time tensor decomposition based on non-negative matrix factorization to efficiently discount pre-stimulus baseline activity. On data recorded from retinal ganglion cells with strong pre-stimulus baseline, we showed that in situations where the stimulus elicits a strong change in firing rate, our extensions yield a boost in stimulus decoding performance. Our results thus suggest that taking into account the baseline can be important for finding a compact, information-rich representation of neural activity.
rejected-papers
This work is incremental relative to previous work, addresses very specific challenges, and would probably appeal to only a very limited fraction of ICLR's audience.
train
[ "SJz_9SFgz", "ByuRMz5eG", "S1Ib7Lcxf", "B1eWVftmf", "BkMdCktmM", "ry7fAJKXf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "This study proposes the use of non-negative matrix factorization accounting for baseline by subtracting the pre-stimulus baseline from each trial and subsequently decompose the data using a 3-way factorization thereby identifying spatial and temporal modules as well as their signed activation. The method is used on data recorded from mouse and pig retinal ganglion cells of time binned spike trains providing improved performance over non-baseline corrected data. \n\nPros:\nThe paper is well written, the analysis interesting and the application of the Tucker2 framework sound. Removing baseline is a reasonable step and the paper includes analysis of several spike-train datasets. The analysis of the approaches in terms of their ability to decode is also sound and interesting.\n\nCons:\nI find the novelty of the paper limited: \nThe authors extend the work by (Onken et al. 2016) to subtract baseline (a rather marginal innovation) of this approach. To use a semi-NMF type of update rule (as proposed by Ding et al .2010) and apply the approach to new spike-train datasets evaluating performance by their decoding ability (decoding also considered in Onken et al. 2016).\n\nMultiplicative update-rules are known to suffer from slow-convergence and I would suspect this also to be an issue for the semi-NMF update rules. It would therefore be relevant and quite easy to consider other approaches such as active set or column wise updating also denoted HALS which admit negative values in the optimization, see also the review by N. Giles\nhttps://arxiv.org/abs/1401.5226\nas well as for instance:\nNielsen, Søren Føns Vind, and Morten Mørup. \"Non-negative tensor factorization with missing data for the modeling of gene expressions in the human brain.\" Machine Learning for Signal Processing (MLSP), 2014 IEEE International Workshop on. IEEE, 2014.\n\nIt would improve the paper to also discuss that the non-negativity constrained Tucker2 model may be subject to local minima solutions and have issues of non-uniqueness (i.e. rotational ambiguity). At least local minima issues could be assessed using multiple random initializations.\n\nThe results are in general only marginally improved by the baseline corrected non-negativity constrained approach. For comparison the existing methods ICA, Tucker2 should also be evaluated for the baseline corrected data, to see if it is the constrained representation or the preprocessing influencing the performance. Finally, how performance is influenced by dimensionality P and L should also be clarified.\n\nIt seems that it would be naturally to model the baseline by including mean values in the model rather than treating the baseline as a preprocessing step. This would bridge the entire framework as one model and make it potentially possible to avoid structure well represented by the Tucker2 representation to be removed by the preprocessing.\n\n\n\nMinor: \nThe approach corresponds to a Tucker2 decomposition with non-negativity constrained factor matrices and unconstrained core - please clarify this as you also compare to Tucker2 in the paper with orthogonal factor matrices.\n\nDing et al. in their semi-NMF work provide elaborate derivation with convergence guarantees. In the present paper these details are omitted and it is unclear how the update rules are derived from the KKT conditions and the Lagrange multiplier and how they differ from standard semi-NMF, this should be better clarified. 
\n\n", "In this paper, the authors present an adaptation of space-by-time non-negative matrix factorization (SbT-NMF) that can rigorously account for the pre-stimulus baseline activity. The authors go on to compare their baseline-corrected (BC) method with several established methods for dimensionality reduction of spike train data.\n\nOverall, the results are a bit mixed. The BC method often performs similarly to or is outperformed by non-BC SbT-NMF. The authors provide a possible mechanism to explain these results, by analyzing classification performance as a function of baseline firing rate. The authors posit that their method can be useful when sensory responses are on the order of magnitude of baseline activity; however, this doesn't fully address why non-BC SbT-NMF can strongly outperform the BC method in certain tasks (e.g. the step of light, Fig. 3b). Finally, while this method introduces a principled way to remove mean baseline activity from the sensory-driven response, this may also discount the effect that baseline firing rate and fast temporal fluctuations can have on the response (Destexhe et al., Nature Reviews Neuroscience 4, 2003; Gutnisky DA et al., Cerebral Cortex 27, 2017).", "In this contribution, the authors propose an improvement of a tensor decomposition method for decoding spike train. Relying on a non-negative matrix factorization, the authors tackle the influence of the baseline activity on the decomposition. The main consequence is that the retrieved components are not necessarily non-negative and the proposed decomposition rely on signed activation coefficients. An experimental validation shows that for high frequency baseline (> 0.7 Hz), the baseline corrected algorithm yields better classification results than non-corrected version (and other common factorization techniques). \n\nThe objective function is defined with a Frobenius norm, which has an important influence on the obtained solutions, as it could be seen on Figure 2. The proposed method seems to provide a more discriminant factorization than the NMF one, at the expense of the sparsity of spatial and temporal components, impeding the biological interpretability. A possible solution is to add a regularization term to the objective function to ensure the sparsity of the factorization.", "Thank you very much for this detailed review. Below, we reply to your comments and questions:\n1) To unravel the spatiotemporal activity patterns of ganglion cells during visual stimulation, we need to investigate their spatial and temporal structure. Onken et al. 2016 applied a three factor non-negative matrix factorization with full non-negativity constraints to decompose neural activity recorded from isolated salamander retinas into their non-negative spatial, temporal and activation coefficients. Salamander retinal ganglion cells have very low spontaneous activity. For this reason, the authors could ignore the baseline in that study. In order to decompose neural activity recorded from mouse and pig retina which have non-negligible spontaneous activity, we need a factorization method to deal with negative elements of baseline corrected input signals to obtain biologically interpretable spatial and temporal modules. For this purpose, we extended two factor semi-NMF presented in Ding et al. 
(2010) to a trial-based tri-factorization where two of the factors are non-negative and trial independent and refer to combinations of neurons firing together and temporal activation of these groups of neurons and one factor is signed and trial dependent and refers to strengths of recruitment of such neural patterns on each trial. We showed that in situations where the stimulus elicits a strong change in firing rate, our extensions yield a better decoding performance compared to SbT-NMF.\n2) According to N. Gillis. 2014 and Nielsen et al. 2014, multiplicative update rules suffer from slow convergence for image processing, text mining and hyperspectral images, while scaling well for sparse matrices. Matrices of discretized spiking activity of populations of neurons are definitely sparse and consequently multiplicative update rules do seem appropriate for these data. In our study, single trial spike trains of mouse and pig ganglion cells were binned into 10 ms time intervals and decomposed by multiplicative update rules over 1000 iterations. We investigated convergence speed and found that the update rules converged quickly in the first iterations (new Suppl. Fig. S4, page 13). Considering the fast convergence of our multiplicative update rules and their simple implementation, we did not use any alternative technique.\n3) We addressed your concern regarding local minima by considering 50 different BC-SbT-NMF decompositions with random initializations of spatial and temporal modules and activation coefficients. We selected the decomposition with the lowest reconstruction error and compared decoding performance when using this decomposition with the one that we initially obtained from a single decomposition (Fig. S5, page 14, shown only for the natural movie stimuli). We found that the decoding performance of the multiple initialization decomposition is slightly higher than SbT-NMF. Nevertheless, the very small improvement did not justify the 50-fold increase in computational cost.\n4) We now considered additional methods to compare against. We modified Suppl. Fig. S2 (page 11) to compare to the decoding performance of Orthogonal Tucker-2, spatiotemporal PCA and spatiotemporal ICA on baseline corrected data and non-baseline corrected data. It is evident that baseline correction improves decoding performance of BC-SbT-NMF, BC-Tucker-2 and BC spatiotemporal PCA, especially in Step of light (SoL) and also Low Contrast flicker (LC) stimuli.\nWe also evaluated decoding performance of the decomposition methods by considering different combinations of the number of spatial (P) and temporal (L) modules. We considered a range of the number of spatial modules from 1 to the number of neurons (the latter corresponding to no dimensionality reduction for the spatial dimension) and number of temporal modules from 1 to the number of discretized time points per trial (the latter corresponding to no dimensionality reduction for the temporal dimension). We found that the number of spatial and temporal modules did affect decoding performance. For example, Suppl. Fig. S6 (page 14) shows that BC-SbT-NMF achieved higher decoding performance with lower number of spatial and temporal modules compared to SbT-NMF. The red rectangle in Suppl. Fig. S6 (a) indicates that SbT-NMF can achieve similar decoding performance in this case, but only after increasing the number of spatial and temporal modules. 
In the comparisons in the main text, we used the optimal number of spatial and temporal modules for each method.\n5) Inclusion of the baseline in a joint factorization-decoding framework would require a major change of the whole method. While promising, we feel that this would be beyond the scope of this paper. We now discuss this direction in Section Discussion (page 9) and in response to reviewer 2.\n6) We now clarified the relation of SbT-NMF and BC-SbT-NMF to the Tucker-2 decompositions in Section 2.1 (paragraph 3) and In Section 2.2 (paragraph 1 and 3). ", "We thank you for the positive assessment of our work.\nRegarding your mixed results concern, unfortunately, we could not identify any data characteristics that would explain why SbT-NMF outperforms BC-SbT-NMF in certain visual tasks with lower baseline activity such as the step of light stimulus protocol. We emphasize, however, that there are not many such cases and that overall, BC-SbT-NMF outperforms SbT-NMF. Indeed, as you point out, in some situations the baseline can have an advantageous effect on the representation of neural responses, and this might be the case in the few visual tasks were SbT-NMF outperforms BC-SbT-NMF.\nWe now discuss this possibility in the Discussion Section (page 9):\n“While BC-SbT-NMF outperformed SbT-NMF overall on tasks with strong baseline activity, we also found that in a few cases, SbT-NMF performed better than BC-SbT-NMF. Previous studies showed that there is an effect of the baseline firing rate on the response (Destexhe et al., Nature Reviews Neuroscience 4, 2003; Gutnisky DA et al., Cerebral Cortex 27, 2017). In these situations, the baseline might have an advantageous effect on the representation of neural responses and could lead to better decoding performance of SbT-NMF that we observed in some cases. One possibility to take this effect into account would be to devise a joint factorization-decoding framework that explicitly introduces the baseline into the optimization framework. While this is beyond the scope of the current work, we believe that development of such a framework is a promising direction for future research.“\n", "We thank you for the positive assessment of our work and for the regularization suggestion. \n\nWe addressed this suggestion in the revised manuscript in Section 2.2 (pages 3-4) by introducing L1-regularization terms for the spatial and temporal modules in the objective function of the BC-SbT-NMF derivation following the method outlined in Hoyer, JMLR 2004. For simplicity, we considered just one regularization parameter instead of two separate regularization parameters for the spatial and temporal modules. In total, we considered a range of seven values for the regularization parameter: 0, 1, 10, 100, 1000, 10000, 100000. We now included a new Suppl. Fig. S3 (page 12), showing much sparser spatial and temporal modules that we obtained from our L1-regularization in conjunction with BC-SbT-NMF. However, we found that decoding performance decreased for all non-zero L1-regularizations that we applied (Supplementary Figure S3 panel a). 
Therefore, in the main paper, we report results for the regularization parameter set to 0 (corresponding to the original algorithm) which achieved highest decoding performance, and now mention in Section 2.3 (page 5) that L1 sparsity constraints for BC-SbT-NMF spatial and temporal modules decrease decoding performance.\n\nWe hope that Figure S3 as well as the modified derivation address your concern regarding sparsity of spatial and temporal modules for BC-SbT-NMF. Please let us know if you have further comments or questions.\n\n" ]
[ 4, 6, 6, -1, -1, -1 ]
[ 4, 3, 3, -1, -1, -1 ]
[ "iclr_2018_Bki1Ct1AW", "iclr_2018_Bki1Ct1AW", "iclr_2018_Bki1Ct1AW", "SJz_9SFgz", "ByuRMz5eG", "S1Ib7Lcxf" ]
iclr_2018_S1GUgxgCW
Latent Topic Conversational Models
Despite much success in many large-scale language tasks, sequence-to-sequence (seq2seq) models have not been an ideal choice for conversational modeling as they tend to generate generic and repetitive responses. In this paper, we propose a Latent Topic Conversational Model (LTCM) that augments the seq2seq model with a neural topic component to better model human-human conversations. The neural topic component encodes information from the source sentence to build a global “topic” distribution over words, which is then consulted by the seq2seq model to improve generation at each time step. The experimental results show that the proposed LTCM can generate more diverse and interesting responses by sampling from its learnt latent representations. In a subjective human evaluation, the judges also confirm that LTCM is the preferred option compared to competitive baseline models.
rejected-papers
This paper combines existing models to detect topics and generate responses, and the resulting model is shown to be slightly preferred by human evaluators over baselines. This is quite incremental and the results are not impressive enough to stand on their own merit.
val
[ "S1B9wqGrz", "HyY3SM9NM", "H1E8RNcxz", "rypOZF5eG", "r1ahhhJWM", "HJeDIJaXG", "B1nrBm57f", "rJnFHQ5XM", "HkES4X57M", "SJG0X7cQf" ]
[ "author", "public", "official_reviewer", "official_reviewer", "official_reviewer", "author", "public", "public", "public", "public" ]
[ "The main contribution of the paper is three-fold as mentioned in this post (– General comments on Contributions–): \n1) We were first to be able to jointly learn the neural topic and seq2seq models.\n2) The paper offers a better understanding/training of latent models for languages.\n3) Both an extensive evaluation and a comprehensive analysis were conducted to validate the results.", "It seems that the model is a combination of VAE (https://arxiv.org/abs/1605.06069) and TopicRNN (https://arxiv.org/pdf/1611.01702.pdf). Any new insights?", "This paper proposed the combination of topic model and seq2seq conversational model.\nThe idea of this combination is not surprising but the attendee of ICLR might be interested in the empirical results if the model clearly outperforms the existing method in the experimental results.\nHowever, I'm not sure that the empirical evaluation shows the really impressive results.\nIn particular, the difference between LV-S2S and LTCM seem to be trivial.\nThere are many configurations in the LSTM-based model.\nCan you say that there is no configuration of LV-S2S that outperforms your model?\nMoreover, the details of human evaluation are not clear, e.g., the number of users and the meaning of each rating.\n\n", "The paper proposes a conversational model with topical information, by combining seq2seq model with neural topic models. The experiments and human evaluation show the model outperform some the baseline model seq2seq and the other latent variable model variant of seq2seq.\n\nThe paper is interesting, but it also has certain limitations:\n\n1) To my understanding, it is a straightforward combination of seq2seq and one of the neural topic models without any justification.\n2) The evaluation doesn't show how the topic information could influence word generation. No of the metrics in table 2 could be used to justify the effect of topical information.\n3) There is no analysis about the model behavior, therefore there is no way we could get a sense about how the model actually works. One possible analysis is to investigate the values $l_t$ and the corresponding words, which to some extent will tell us how the topical information be used in generation. In addition, it could be even better if there are some analysis about topics extracted by this model.\n\nThis paper also doesn't pay much attention to the existing work on topic-driven conversational modeling. For example \"Topic Aware Neural Response Generation\" from Xing et al., 2017.\n\nSome additional issues:\n\n1) In the second line under equation 4, y_{t-1} -> y_{t}\n2) In the first paragraph of section 3, two \"MLP\"'s are confusing\n3) In the first paragraph of page 6, words with \"highest inverse document frequency\" are used as stop words?", "I enjoyed this paper a lot. The paper addresses the issue of enduring topicality in conversation models. The model proposed here is basically a mash-up between a neural topic model and a seq2seq-based dialog system. The exposition is relatively clear and a reader with sufficient background in ML should have no following the model. My only concern about the paper is that is very incremental in nature -- the authors combine two separate models into a relatively straight-forward way. The results do are good and validate the approach, but the paper has little to offer beyond that. 
", "The authors would like to notify reviewers about the newest update of the paper where a quick analysis of the learned topic gate $l_t$ has been added to the paper based on reviewer1's request.", "Literature survey\nThe authors have updated the paper per the reviewer’s suggestion to add more citations for topic-aware models. We would like to point out that the suggested work (Xing et al., 2017) is quite different from ours in that they used a pretrained LDA model whereas our LTCM model trains the topic and seq2seq component jointly.\n[1] Xing et al., 2017. Topic Aware Neural Response Generation. https://arxiv.org/pdf/1606.08340.pdf \n\nInterpret topical information\nThe topic information learned in LTCM was not as easy to interpret as in other topic models that trained on document sets. This is because the word co-occurrence statistics in short text datasets are too sparse to train interpretable topic representations (Yan et al, 2013). However, we found that sampling from this learned latent representation does give us diversified sentences, at both syntactic and semantic levels. We do acknowledge the suggestion to visualize the values of $l_t$ which we have included in the newest revision of the paper.\n", "Hyperparameter search of LV-S2S\nThe experiments were conducted in a careful way where a small set of hyper-parameters were tuned to find the best model in each category. We didn’t do an exhaustive grid search over all possible network configurations, however, given the recent understanding of latent variable models (Higgins et al, 2016, Bowman et al., 2015, Dieng et al., 2017), the result of this work has shown good evidences that LTCM is generally more capable of learning diverse and interesting responses than latent variable S2S models.\n\nHuman evaluation\nWe ran 5000 pairwise comparisons between the 8 models in Table 1 (~90 comparisons per pair) and reported only the top performing ones in each of the model categories. The number of tasks each MTurk can work on was capped at 20. This results in about >=250 unique workers. The meaning of each rating is presented in Section 4.2 Human evaluation. We have added these details, please see our revision.\n", "We thank the reviewer for the comments. Please see this post (– General comments on Contributions –) for the contributions of our paper.", "We thank all the reviewers for the comments and feedback, which have helped us improve the paper (please see our revision). However, we are disappointed about the low review scores of the paper and that our contributions were not fully appreciated. To help reviewers better evaluate our paper, we would like to re-emphasize the contributions of this work:\n\n(a) Novelty\nWe were first to be able to jointly learn the neural topic and seq2seq models. The key idea is to utilize the hard-decision trick from TopicRNN (Dieng et al., 2017) to prevent the latent variable from catastrophic mode collapsing. Previous work such as [1, 2] only incorporated pre-trained models (LDA, counting grid) into seq2seq models instead of joint learning.\n[1] Xing et al., 2017. Topic Aware Neural Response Generation. https://arxiv.org/pdf/1606.08340.pdf \n[2] Wang et al., 2017. Steering Output Style and Topic in Neural Response Generation.\nhttps://arxiv.org/abs/1709.03010.pdf\n\n(b) Better understanding/training of latent models for languages\nLatent models for languages are notoriously hard to train [3, 4]. 
This work contributes to better training/understanding of latent models by observing and investigating in correlations of many training metrics. For examples, we found that:\n (i) approximated perplexity has much more to do with the generation quality comparing to variational lower bound; \n (ii) a lower lowerbound isn’t necessarily better because the higher KL can lead to a higher sentence diversity.\n (iii) BoW encoder works just fine in the topic component of LTCM. It is also easier to optimise.\nThese could serve as valuable rules of thumb for future model development. \n\n[3] Bowman et al., 2015. Generating Sentences from a Continuous Space. https://arxiv.org/pdf/1511.06349.pdf \n[4] Miao and Blunsom, 2016. Language as a Latent Variable: Discrete Generative Models for Sentence Compression. https://arxiv.org/pdf/1609.07317.pdf\n\n(c) Standard and comprehensive evaluation\nWe acknowledge that evaluating chat-based systems is hard. To our best effort, we included previous metrics [5, 6] to provide a comprehensive and extensive evaluation that demonstrates the superiority of our models over strong baselines. The evaluation includes both corpus-based metrics (perplexity, lowerbound, KL divergence, uniqueness, Zipf coefficients) and human judgments (interestingness, appropriateness, as well as a pairwise comparison).\n\n[5] Serban et al., 2016. A Hierarchical Latent Variable Encoder-Decoder Model for Generating Dialogues. https://arxiv.org/pdf/1605.06069.pdf \n[6] Cao & Clark, 2017. Latent Variable Dialogue Models and their Diversity. https://arxiv.org/pdf/1702.05962.pdf\n" ]
[ -1, -1, 4, 5, 6, -1, -1, -1, -1, -1 ]
[ -1, -1, 3, 4, 4, -1, -1, -1, -1, -1 ]
[ "HyY3SM9NM", "iclr_2018_S1GUgxgCW", "iclr_2018_S1GUgxgCW", "iclr_2018_S1GUgxgCW", "iclr_2018_S1GUgxgCW", "iclr_2018_S1GUgxgCW", "rypOZF5eG", "H1E8RNcxz", "r1ahhhJWM", "iclr_2018_S1GUgxgCW" ]
iclr_2018_SkBHr1WRW
Ego-CNN: An Ego Network-based Representation of Graphs Detecting Critical Structures
While existing graph embedding models can generate useful embedding vectors that perform well on graph-related tasks, what valuable information can be jointly learned by a graph embedding model is less discussed. In this paper, we consider the possibility of detecting critical structures with a graph embedding model. We propose Ego-CNN to embed graphs; it works in a local-to-global manner, taking advantage of CNNs by gradually expanding the detectable local regions on the graph as the network depth increases. Critical structures can be detected if Ego-CNN is combined with a supervised task model. We show that Ego-CNN (1) is competitive with state-of-the-art graph embedding models, (2) works nicely with CNN visualization techniques to show the detected structures, and (3) is efficient and can incorporate scale-free priors, which commonly occur in social network datasets, to further improve training efficiency.
rejected-papers
This paper deals with the important topic of learning better graph representations and shows promise for detecting critical substructures of graphs, which would aid the interpretability of the learned representations. Unfortunately, the work fails to accurately portray how it relates to previous work (in particular, Niepert et al., Kipf et al., and Duvenaud et al.), and it neither provides clear and convincing explanations of what it can do that those models cannot nor includes all of them in the experimental comparisons.
val
[ "Hk3rCW5ef", "rk7Oq1oxG", "H1FOVLn-G", "rkp69mLfM", "r1lYsm8Mz", "Sk5zj78zz", "S1cjuXIMf", "rk9fnGXRZ" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author" ]
[ "The authors proposed a convolutional framework based on merging ego-networks. It combines graph embedding layers with task driven output layers, producing interpretable results for critical structure detection. While based on existing embedding methods such as Patchy-San, the contribution of ego-centric convolution and multi-layer architechture is novel and has a lot of potential in applications. The overall presentation of the draft is also of high quality. I recommend its publication at ICLR.\n\nHere is a list of suggested changes to further improve the draft,\n\n1. The two panels of Figure 1 seems redundant.\n\n2. Figure 4 does not provide useful information, especially in terms of how overlapping neighborhoods are aggregated at deeper layers.\n\n3. There seems to be a mistake in Figure 5 with the top neighborhood in white\n\n4. The connection between weight-tying and scale-free structure needs better explanation. Are the authors trying to say that fractal processes generates power-law degree distributions?\n\n5. The visualization of critical structures are very helpful. However, it might be better to look into structures in high level layers for truly global signatures. This is especially the case for the reddit dataset, where visualizations at the node and edge level creates hairballs.\n\n", "Dear authors,\n\nThank you for your contribution to ICLR. The problem you are addressing with your work is important. Your paper is well-motivated. Detecting and exploiting \"critical structures\" in graphs for graph classification is indeed something that is missing in previous work. \n\nAfter the introduction you discuss some related work. While I really appreciate the effort you put into this section (including the figures etc.) there are several inaccuracies in the portrayal of existing methods. Especially the comparison to Patchy-san is somewhat vague. Please make sure that you clearly state the differences between patchy-san and Ego-CNNs. What exactly is it that Patchy cannot achieve that you can. I believe I understood what the advantages of the proposed method are but it took a while to get there. Just one example to show you what I mean; you write:\n\n\"The reason why the idea of Patchy-San fails to generalize into multiple layers is that its definition of\nneighborhood, which is based on adjacency matrix, is not static and may not corresponding to local\nregions in the graph. \"\n\nIt is very difficult to understand what it is that you want to express with the above sentence. Its definition of neighborhood is based on adjacency matrix - what does that mean? A neighborhood is a set of nodes, no? Why is it that their definition of neighborhood might not correspond to local regions? In general, you should try to be more precise and concise when discussing related work. \n\nSection 3, the most important section in the paper that describes the proposed Ego-CNN approach, should also be written more clearly. For instance, it would be good if you could define the notion of an \"Ego-Convolution layer.\" You use that term without properly defining it and it is difficult to make sense of the approach without understanding it. Also, you contrast your approach with patchy and write that \"Our main idea is to use the egocentric design, i.e. the neighborhood at next\nlayer is defined on the same node.\" Unfortunately, I find it difficult to understand what this means. In general, this section is very verbose and needs a lot more work. This is at the moment also the crucial shortcoming of the paper. 
You should spent more time on section 3 and formally and more didactically introduce your approach. In my opinion, without a substantial improvement of this section, the paper should not be accepted. \n\nThe experiments are standard and compare to numerous existing state of the art methods. The data sets are also rather standard. The one thing I would add to the results are the standard deviations. It is common to report those. Also, in the learning for graph structured data, the variance can be quite high and providing the stddev would at least indicate how significant the improvements are. \n\nI also like the visualizations and the discussion of the critical structures found in some of the graphs.\n\nOverall, I think this is an interesting paper that has a lot of potential. The problem, however, is that the presentation of the proposed approach is verbose and partially incomprehensible. What exactly is different to existing approaches? What exactly is the formal definition of the method? All of this is not well presented and, in my opinion, requires another round of editing and reviews.\n\n", "The paper proposes a new method (Ego-CNN) to compute supervised embeddings of graphs based on the neighborhood structure of nodes. Using an approach similar to attention and deconvolution, the paper also aims to detect substructures in graphs that are important for a given supervised task.\n\nLearning graph representations is an important task and fits well into ICLR. The paper pursues interesting ideas and shows promising experimental results. I've also found the focus of the paper on interpretability (by detecting important substructures) interesting and promising. However, in its current form, I am concerned about both the novelty and the clarity of the paper.\n\nRegarding novelty: The general idea of Ego-CNN seems to be quite closely related to the model of Kipf and Welling [2]. Unfortunately, this connection is neither made clear in the discussion of related work, nor does the experimental evaluation include a comparison. In particular, the paper mentions that Ego-CNN is similar to the Weißfeiler-Lehman (WL) algorithm. However, the same is the case for [2] (see Appendix A in [2] for a discussion). It would therefore be important to discuss the benefits of Ego-CNN over [2] clearly, especially since [2] is arguably simpler and doesn't require a fixed-size neighborhood.\n\nRegarding clarity: In general, the paper would greatly benefit from a clearer discussion of methods and results. For instance,\n- The paper lacks a complete formal definition of the model.\n- Detecting critical substructures is an explicit focus of the paper. However, Section 4.1 provides only a very short description of the proposed approach and lacks again any formal definition. Similarly, the experimental results in Section 5.3 require a deeper analysis of the detected substructures as the presented examples are mostly anecdotal. For instance, quantitative results on synthetic graphs (where the critical substructures are known) would improve this section.\n- The discussion of scale-free regularization in Section 4.2 is very hand-wavy. It lacks again any formal proof that the proposed approach exploits scale-free structures or even a proper motivation why this regularization should improve results. Furthermore, the experimental results in Section 5.2 are only evaluated on a single dataset and it is difficult to say whether the improvement gains are due to some scale-free property of the model. 
For instance, the improvement could also just stem from the different architecture and/or decreased overfitting due to the decreased number of parameters from weight-tying.\n \nFurther comments:\n- The discussion of related work is sometimes unclear. For instance, precisely why can't Neural Fingerprint detect critical structures? Similarly, how is the k-node neighborhood constraint of Patchy-San different than the one of Ego-CNN?\n- In graph theory, the standard notion of neighborhood are all nodes adjacent to a given node, e.g., see [1]\n- The writing could be improved, since I found some passages difficult to read due to typos and sentence structure.\n\n[1] https://en.wikipedia.org/wiki/Neighbourhood_(graph_theory)\n[2] Kipf et al. \"Semi-supervised classification with graph convolutional\", 2017.", "Dear reviewer,\nThank you for time and constructive comments. Here are our answers to your questions.\n\nQ: What exactly Patchy-San cannot do but we can?\n\nBoth of Patchy-San and ours can detect useful patterns, but ours are more efficient in terms of the size of detectable local regions.\n\nRemind that Patchy-San scans the k x k adjacency matrix of neighborhoods formed by the k nearest neighbors. However, Patchy-San (proposed in their paper) is only a “single layer” model, meaning that it can only detect local neighborhoods with at most k nodes. By contrast, the detectable size of local neighborhoods of our Ego-CNN is increased as depth increases.\n\nIn Section 2, we tried to “generalize” Patchy-San’s idea (detecting patterns in the adjacency matrix of a node) to multiple layers and showed that generalization fails.\nHowever, Patchy-San cannot be directly stacked into multiple layers because the output of Patchy-San(i.e. neighborhood embeddings) cannot be treated as the required input \"adjacency matrix\" of the next Patchy-San layer.\nA naive way to generate the required k x k adjacency matrix is to calculate the pairwise similarity of the k neighborhoods with the most similar embeddings.\nThis generalization does enlarge the receptive fields of Patchy-San as depth increases. \n\nHowever, as stated in Section 2, it would be very hard to realize in practice because of two reasons:\n(1) similar neighborhoods are selected based on the “output” of previous layer. During training, the output of previous layer is likely to change, making the composition of a neighborhood “non-static”.\n(2) similar neighborhoods may not be adjacent at a deeper layer, preventing the enlarged receptive field of a neuron from denoting a local neighborhood (i.e. connected subgraph) .\n\nThe components forming into a neighborhood at deeper layer are likely to change and may spread out to the \"entire graph\" during training. Thus, this generalization (based on adjacency matrix) does not give neighborhoods corresponding to local regions. And our egocentric design is designed to solve the above problems.\n\nBack to the sentences that are confusing to you: \n>> “The reason why the idea of Patchy-San fails to generalize into multiple layers is that its\n>> definition of neighborhood, which is based on adjacency matrix, is not static and may not\n>> corresponding to local regions in the graph. ”\n\nThank you for picking them out. We meant to briefly mention the drawbacks of the “generalized” version. In fact, the “adjacency matrix” is the one defined on the neighborhood embeddings. 
\nHere is an update with more details:\n“The reason why the idea of Patchy-San fails to generalize into multiple layers is that its definition of neighborhood (which is based on adjacency matrix) makes the composition of neighborhoods at deeper layer not static and may not corresponding to local regions in the graph. ”\n\nQ: >> \"Our main idea is to use the egocentric design, i.e. ...\" Unfortunately, I find it difficult to understand \n >> what this means.\nA: Thanks for your constructive comments, we have revised Section 3 based on your suggestions. \n\nQ: What exactly is the formal definition of the method?\nA:We replace the Algorithm describing steps of Ego-Convolution with formal definition in math formula. Please refer to the Section 3.\n\nIf you have further questions, please comment below. And we appreciate if you could update your rating if the above clears your doubts.", "A revision updated (latest updated on 5 Jan.)\n* Section 3\n - replace Algorithm steps of Ego-Convolution with formal definition\n - add comparison to previous work to show why they fail to detect precise structure\n - rewrite verbose sentences without changing the meaning and fix typos\n* Section 4\n - add details to visualization steps\n - add definition of scale-free", "Dear reviewer,\nThank you for your time and comments. Before replying your specific comments, we think it is necessary to clarify some misunderstandings first.\n\nFirst, the model of Kipf and Welling [1] is not quite related to our paper, and this may have misled you from judging the novelty.\n\nYou said:\n>> Regarding novelty: The general idea of Ego-CNN seems to be quite closely related to the model of Kipf and Welling.\n\nOur main idea is to detect the “precise” critical structure but not aiming to be a generalization of Weisfeiler-Lehman(WL).\nIn fact, as you pointed out in Appendix A.1 of Kipf and Welling [1], to be a generalization of WL, it only requires the algorithm to approximate the “hash function” in WL. And to generalize WL, convolving on the “summation” of neighbors’ node embeddings is enough.\n\nHowever, being a generalization of WL is not enough to detect the “precise” neighborhood structure. As “summing” over neighbors’ node embeddings loses the relative position of neighbors’ neighborhoods (which is also the same problem as Neural Fingerprints [2] and is discussed in Section 2 in the draft).\n\nAnd that is the reason why we design our filters to learn the “entire” neighborhood structure, but not an approximation (i.e. “summation” of neighboring node embeddings). Although the math formula may look similar, the underlying ideas are quite different. It is the goal of detecting precise structure that separates us from those previous works such as Kipf and Welling[1], and Neural Fingerprints[2].\n\nWe hope the above explanation can help you understand the fundamental difference between our work and Kipf and Welling [1].\n\nThe following answers your specific comments:\n\nQ: The paper lacks a complete formal definition of the model\nA: Thank for your comment. \nWe have replaced the Algorithm describing Ego-Convolution with formal math definition in the revision. 
\nAlso, more explanation is added to section 4.1 visualization based on your suggestion.\n\nQ: >> - The discussion of scale-free regularization in Section 4.2 is very hand-wavy...\nA: Thanks, we have cited the definition of a Scale-Free network [3] and added further explanations to the paper, as extracted below: “Scale-Free networks [3] are networks with self-similarity, which means the same patterns can be observed when zooming at different scales.” \nThe power-low distribution shown in the paper is a common indicator for scale-free networks.\nAnd, by the definition of “self-similar” property, the weight-tying (i.e.repeat the combination of neighborhood patterns at each layer) is a natural way to generate scale-free networks(, and the power-law degree distribution will surely follow).\n\nQ: why can't Neural Fingerprint detect critical structures?\nA: It is that the filters in Neural Fingerprint only learn the approximated neighborhood(i.e. summation of neighbors’ node embeddings). Neural Fingerprint is basically a hash function that maps a graph to a unique fingerprint. But you cannot do the inverse to derive precise structure from a given fingerprint in their model(due to the summation in their design) and they do not need to be able to.\nAlthough, you may argue it is possible to use a dictionary to store the mapping. But that additional dictionary is not even needed in our case. The above has been explained in Section 2.\n\nQ: Similarly, how is the k-node neighborhood constraint of Patchy-San different than the one of Ego-CNN?\nA: Sorry, we do not understand your question very well. Are you asking why the k used in Patchy-San layer and the one in Ego-Convolution can be different? (ex: k=10 in Patchy-San layer, and k=16 in Ego-Convolution)\nBecause our main idea is to “aggregate” k neighbors’ neighborhoods, the size of the neighborhoods has nothing to do with the aggregation. As long as the neighborhoods are centered at the corresponding node, our Ego-Convolution can generate enlarged neighborhoods by aggregation.\n\nIf you have further questions, please feel free to comment below. And we appreciate if you could update your rating if the above clears your doubts.\n\n\n[1] Kipf et al. \"Semi-supervised classification with graph convolutional\", ICLR 2017.\n[2] Duvenaud et al. “Convolutional networks on graphs for learning molecular fingerprints”, NIPS 2015.\n[3] LI, Lun, et al. “Towards a theory of scale-free graphs: Definition, properties, and implications. Internet Mathematics”, 2005.", "Dear Reviewer,\nThank you for the positive comments and useful suggestions. We have revise based on your suggestions.\n\nQ: >> “1. The two panels of Figure 1 seems redundant.”\nA: Thank you for the suggestion. Indeed, having Figure 1(b) is enough.\n\nQ: >> “2. Figure 4 does not provide useful information, especially ...”\nA: Thank you for pointing out. We have include that in Figure 4.\n\nQ: >> “3. There seems to be a mistake in Figure 5 with the top neighborhood in white”\nA: You are right. Thanks for pointing out.\n\nQ: >> “4. The connection between weight-tying and scale-free structure needs better explanation. 
Are the authors trying to say that fractal processes generates power-law degree distributions?”\nA: Thanks, we have cited the definition of a Scale-Free network [1] and added further explanations to the paper, as extracted below: “Scale-Free networks [1] are networks with self-similarity, which means the same patterns can be observed when zooming at different scales.” \nThe power-law distribution shown in the paper is a common indicator for scale-free networks.\nAnd, by the definition of the “self-similar” property, the weight-tying (i.e. repeating the combination of neighborhood patterns at each layer) is a natural way to generate scale-free networks (and the power-law degree distribution will surely follow).\n\nQ: >> 5. “it might be better to look into structures in high level layers for truly global signatures ...”\nA: Thanks for your suggestion. Hopefully, we will try another way to show global signatures before 5 Jan.\n\n[1] LI, Lun, et al. “Towards a theory of scale-free graphs: Definition, properties, and implications”. Internet Mathematics, 2005.", "\n* In the 4th paragraph of section 1., The only work ... is Spatial GCN ..., but it has the complexity O(N^2), ... should be O(N^3).\n* In section 3. Effective Receptive Field on Ambient Graph, all references to Figure 4.2 should be Figure 6.\n* Table 4. caption should be \"Ego-CNN with ...\"\n* Figure 8(b) caption should be C_82 H_165 OH" ]
[ 7, 4, 4, -1, -1, -1, -1, -1 ]
[ 3, 4, 4, -1, -1, -1, -1, -1 ]
[ "iclr_2018_SkBHr1WRW", "iclr_2018_SkBHr1WRW", "iclr_2018_SkBHr1WRW", "rk7Oq1oxG", "iclr_2018_SkBHr1WRW", "H1FOVLn-G", "Hk3rCW5ef", "iclr_2018_SkBHr1WRW" ]
iclr_2018_BJ6anzb0Z
Multimodal Sentiment Analysis To Explore the Structure of Emotions
We propose a novel approach to multimodal sentiment analysis using deep neural networks combining visual recognition and natural language processing. Our goal is different than the standard sentiment analysis goal of predicting whether a sentence expresses positive or negative sentiment; instead, we aim to infer the latent emotional state of the user. Thus, we focus on predicting the emotion word tags attached by users to their Tumblr posts, treating these as “self-reported emotions.” We demonstrate that our multimodal model combining both text and image features outperforms separate models based solely on either images or text. Our model’s results are interpretable, automatically yielding sensible word lists associated with emotions. We explore the structure of emotions implied by our model and compare it to what has been posited in the psychology literature, and validate our model on a set of images that have been used in psychology studies. Finally, our work also provides a useful tool for the growing academic study of images— both photographs and memes—on social networks.
rejected-papers
This work combines words and images from Tumblr to provide more fine-grained sentiment analysis than just positive-negative. The contribution is too slight: it is a straightforward combination of existing architectures applied to an emotion classification task, with conclusions that aren't well motivated and no comparison to existing related work on finer emotion classification.
train
[ "rJA29bLxf", "BJ2J7pFgf", "HJcw0y5eM", "B1AeovaXM", "Ske1iPpQG", "HkZ_Yw6mM", "S1U4Kv67f" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "\nThe authors present a study that aims at inferring the \"emotional\" tags provided by Thumblr users starting from images and texts in the captions. For text processing the authors use a standard LSTM taking as input GLOVE vectors of words in a sentence. For visual information, authors use a pretrained CNN (with fine tuning). A fully connected layer is used to fuse the multimodal information. Experimental results are reported in a self generated data set. \n\nThe contribution from the RL perspective is limited, in the sense that the authors simply applied standard models to predict a bunch of labels (in this case, emotion labels). It is interesting the \"psychological\" analysis that the authors present in Section 6. Still, I think the contribution in that part is a: sentiment-psychologically inspired analysis of the Thumbrl data set. \n\nI think the author's statement on that this study leads to a more plausible psychological model of emotion is not well founded (they also mention to learn to recognize the latent emotional state). Whereas it is true that psychological studies rely on self - filled questionnaires, comparing a questionnaire (produced by expert psychologist) to the tags provided by users in a social network is to ambitious. (in some parts the authors make explicit this is an approximation, this should be stressed in every part of the paper)\n", "This paper presents a method for classifying Tumblr posts with associated images according to associated single emotion word hashtags. The method relies on sentiment pre-processing from GloVe and image pre-processing from Inception. \n \nMy strongest criticism for this paper is against the claim that Tumblr post represent self-reported emotions and that this method sheds new insight on emotion representation and my secondary criticism is a lack of novelty in the method, which seems to be simply a combination of previously published sentiment analysis module and previously published image analysis module, fused in an output layer. \n\nThe authors claim that the hashtags represent self-reported emotions, but this is not true in the way that psychologists query participants regarding emotion words in psychology studies. Instead these are emotion words that a person chooses to broadcast along with an associated announcement. As the authors point out, hashtags and words may be used sarcastically or in different ways from what is understood in emotion theory. It is quite common for everyday people to use emotion words this way e.g. using #love to express strong approval rather than an actual feeling of love. \n\nIn their analysis the authors claim:\n“The 15 emotions retained were those with high relative frequencies on Tumblr among the PANAS-X scale (Watson & Clark, 1999)”.\nHowever five of the words the authors retain: bored, annoyed, love, optimistic, and pensive are not in fact found in the PANAS-X scale:\n\nReference: The PANAS-X Scale: https://wiki.aalto.fi/download/attachments/50102838/PANAS-X-scale_spec.pdf Also the longer version that the authors cited: \nhttps://www2.psychology.uiowa.edu/faculty/clark/panas-x.pdf\n\nIt should also be noted that the PANAS (Positive and Negative Affect Scale) scale and the PANAS-X (the “X” is for eXtended) scale are questionnaires used to elicit from participants feelings of positive and negative affect, they are not collections of \"core\" emotion words, but rather words that are colloquially attached to either positive or negative sentiment. 
For example PANAS-X includes words like:“strong” ,“active”, “healthy”, “sleepy” which are not considered emotion words by psychology. \n\nIf the authors stated goal is \"different than the standard sentiment analysis goal of predicting whether a sentence expresses positive or negative sentiment\" they should be aware that this is exactly what PANAS is designed to do - not to infer the latent emotional state of a person, except to the extent that their affect is positive or negative.\n\n\nThe work of representing emotions had been an field in psychology for over a hundred years and it is still continuing. https://en.wikipedia.org/wiki/Contrasting_and_categorization_of_emotions.\n\nOne of the most popular theories of emotion is the theory that there exist “basic” emotions: Anger, Disgust, Fear, Happiness (enjoyment), Sadness and Surprise (Paul Ekman, cited by the authors). These are short duration sates lasting only seconds. They are also fairly specific, for example “surprise” is sudden reaction to something unexpected, which is it exactly the same as seeing a flower on your car and expressing “what a nice surprise.” The surprise would be the initial reaction of “what’s that on my car? Is it dangerous?” but after identifying the object as non-threatening, the emotion of “surprise” would likely pass and be replaced with appreciation. \n\nThe Circumplex Model of Emotions (Posner et al 2005) the authors refer to actually stands in opposition to the theories of Ekman. From the cited paper by Posner et al : \n\"The circumplex model of affect proposes that all affective states arise from cognitive interpretations of core neural sensations that are the product of two independent neurophysiological systems. This model stands in contrast to theories of basic emotions, which posit that a discrete and independent neural system subserves every emotion.\"\nFrom my reading of this paper, it is clear to me that the authors do not have a clear understanding of the current state of psychology’s view of emotion representation and this work would not likely contribute to a new understanding of the latent structure of peoples’ emotions.\n\nIn the PCA result, it is not \"clear\" that the first axis represents valence, as \"sad\" has a slight positive on this scale and \"sad\" is one of the emotions most clearly associated with negative valence.\n\nWith respect to the rest of the paper, the level of novelty and impact is \"ok, but not good enough.\" This analysis does not seem very different from Twitter analysis, because although Tumblr posts are allowed to be longer than Twitter posts, the authors truncate the posts to 50 characters. Additionally, the images do not seem to add very much to the classification. The authors algorithm also seems to be essentially a combination of two other, previously published algorithms.\n\nFor me the novelty of this paper was in its application to the realm of emotion theory, but I do not feel there is a contribution here. This paper is more about classifying Tumblr posts according to emotion word hashtags than a paper that generates a new insights into emotion representation or that can infer latent emotional state. \n\n\n\n\n\n\n\n\n\n\n\n", "The paper presents a multi-modal CNN model for sentiment analysis that combines images and text. 
The model is trained on a new dataset collected from Tumblr.\n\nPositive aspects:\n+ Emphasis in model interpretability and its connection to psychological findings in emotions\n+ The idea of using Tumblr data seems interesting, allowing to work with a large set of emotion categories, instead of considering just the binary task positive vs. negative. \n\nWeaknesses:\n- A deeper analysis of previous work on the combination of image and text for sentiment analysis (both datasets and methods) and its relation with the presented work is necessary. \n- The proposed method is not compared with other methods that combine text and image for sentiment analysis.\n- The study is limited to just one dataset.\n\nThe paper presents interesting ideas and findings in an important challenging area. The main novelties of the paper are: (1) the use of Tumblr data, (2) the proposed CNN architecture, combining images and text (using word embedding. \n\nI missed a \"related work section\", where authors clearly mention previous works on similar datasets. Some related works are mentioned in the paper, but those are spread in different sections. It's hard to get a clear overview of the previous research: datasets, methods and contextualization of the proposed approach in relation with previous work. I think authors should cite Sentibanks. Also, at some point authors should compare their proposal with previous work. \n\nMore comments:\n\n- Some figures could be more complete: to see more examples in Fig 1, 2, 3 would help to understand better the dataset and the challenges. \n- In table 4, for example, it would be nice to see the performance on the different emotion categories.\n- It would be interesting to see qualitative visual results on recognitions.\n\nI like this work, but I think authors should improve the aspects I mention for its publication.\n", "4) “The work of representing emotions had been an field in psychology for over a hundred years and it is still continuing. https://en.wikipedia.org/wiki/Contrasting_and_categorization_of_emotions.\n\nOne of the most popular theories of emotion is the theory that there exist “basic” emotions: Anger, Disgust, Fear, Happiness (enjoyment), Sadness and Surprise (Paul Ekman, cited by the authors). These are short duration sates lasting only seconds. They are also fairly specific, for example “surprise” is sudden reaction to something unexpected, which is it exactly the same as seeing a flower on your car and expressing “what a nice surprise.” The surprise would be the initial reaction of “what’s that on my car? Is it dangerous?” but after identifying the object as non-threatening, the emotion of “surprise” would likely pass and be replaced with appreciation. \n\nThe Circumplex Model of Emotions (Posner et al 2005) the authors refer to actually stands in opposition to the theories of Ekman. From the cited paper by Posner et al : \n\"The circumplex model of affect proposes that all affective states arise from cognitive interpretations of core neural sensations that are the product of two independent neurophysiological systems. 
This model stands in contrast to theories of basic emotions, which posit that a discrete and independent neural system subserves every emotion.\"\nFrom my reading of this paper, it is clear to me that the authors do not have a clear understanding of the current state of psychology’s view of emotion representation and this work would not likely contribute to a new understanding of the latent structure of peoples’ emotions.”\n\nWe agree with your characterization of two of the theories in the literature, but as you say this work continues. As we believe this is by no means a settled field, we see our contribution as that of providing a new measurement tool, more robust than standard sentiment analysis, for the automatic measurement of emotion. We reference both Ekman and the Circumplex model because we see our work as attempting to find evidence, positive or negative, for these theories. But indeed, a longer a more detailed study is needed--we have just scratched the surface in terms of creating a relevant dataset and showing that simple neural network models can achieve good accuracy on this dataset.\n\n5) “In the PCA result, it is not \"clear\" that the first axis represents valence, as \"sad\" has a slight positive on this scale and \"sad\" is one of the emotions most clearly associated with negative valence.”\n\nWe agree and will update our text with caveats accordingly.\n\n6) “With respect to the rest of the paper, the level of novelty and impact is \"ok, but not good enough.\" This analysis does not seem very different from Twitter analysis, because although Tumblr posts are allowed to be longer than Twitter posts, the authors truncate the posts to 50 characters.”\n\nWe truncate to 50 words, not 50 characters, and therefore if we consider that on average an English word contains 4.5 characters (http://www.cs.trincoll.edu/~crypto/resources/LetFreq.html), the Tumblr text is 60% longer than the maximum Twitter post, and probably more than twice longer than the average Twitter post.\n\n7) “Additionally, the images do not seem to add very much to the classification. The authors algorithm also seems to be essentially a combination of two other, previously published algorithms.\n\nFor me the novelty of this paper was in its application to the realm of emotion theory, but I do not feel there is a contribution here. This paper is more about classifying Tumblr posts according to emotion word hashtags than a paper that generates a new insights into emotion representation or that can infer latent emotional state.”\n\nThank you for your careful reading of our paper.\n", "1) “The authors claim that the hashtags represent self-reported emotions, but this is not true in the way that psychologists query participants regarding emotion words in psychology studies. Instead these are emotion words that a person chooses to broadcast along with an associated announcement. As the authors point out, hashtags and words may be used sarcastically or in different ways from what is understood in emotion theory. It is quite common for everyday people to use emotion words this way e.g. using #love to express strong approval rather than an actual feeling of love.”\n\nAs we describe in the paper, there is no agreed upon gold-standard for measuring emotion in psychology. Self-report is considered the best, but there can be demand effects (Orne 1962) through which subjects try to tailor their responses in some way due to the fact that they are participating in a study. 
By contrast, behavioral measures (Webb et al, 1966) can be more reliable as they are less subject to demand effects. We see all of the performance by users on Tumblr as behavioral, with the emotion tags are as a behavioral report of emotion. We agree that there is noise inherent in this measure, due to, e.g. sarcasm, but did not see reason to worry that this was a significant source of bias.\n\n2) “In their analysis the authors claim:\n“The 15 emotions retained were those with high relative frequencies on Tumblr among the PANAS-X scale (Watson & Clark, 1999)”.\nHowever five of the words the authors retain: bored, annoyed, love, optimistic, and pensive are not in fact found in the PANAS-X scale:\n\nReference: The PANAS-X Scale: https://wiki.aalto.fi/download/attachments/50102838/PANAS-X-scale_spec.pdf Also the longer version that the authors cited: \nhttps://www2.psychology.uiowa.edu/faculty/clark/panas-x.pdf\n\nIt should also be noted that the PANAS (Positive and Negative Affect Scale) scale and the PANAS-X (the “X” is for eXtended) scale are questionnaires used to elicit from participants feelings of positive and negative affect, they are not collections of \"core\" emotion words, but rather words that are colloquially attached to either positive or negative sentiment. For example PANAS-X includes words like:“strong” ,“active”, “healthy”, “sleepy” which are not considered emotion words by psychology. ”\n\nGood point, the five emotions not appearing in the PANAS-X scale were found in the Plutchik's Wheel of Emotions. We extracted as many posts as possible for various emotions and kept the 15 emotions with the highest relative frequencies. We clarified that point in the paper, page 2.\n\n3) “If the authors stated goal is \"different than the standard sentiment analysis goal of predicting whether a sentence expresses positive or negative sentiment\" they should be aware that this is exactly what PANAS is designed to do - not to infer the latent emotional state of a person, except to the extent that their affect is positive or negative.”\n\nBy standard sentiment analysis, we simply mean methods designed to categorize a sentence like \"I loved the new Star Wars movie!\" as positive. Very simple methods (e.g. LIWC) can do a decent job at this task. But do these methods capture latent emotional state? (Whether this emotional state is conceived of as positive/negative affect, core emotions, or a circumplex model is a separate issue we discuss below). We argue that there is strong evidence that standard sentiment analysis methods do NOT correspond to the latent emotional of the user and have added a citation (Flaxman and Kassam, 2016) backing up this claim. This is what motivates our attempts to find a new sentiment analysis method.", "Thank you for the review.\n\n“The contribution from the RL perspective is limited, in the sense that the authors simply applied standard models to predict a bunch of labels (in this case, emotion labels)”\nWe wouldn’t qualify our model to just be predicting a “bunch of labels” given the complexity of inferring emotional states (due to the high intra class variability). 
The main contribution of the paper is that we investigate the study of emotion with a novel and large dataset including images (which is not as readily available on other social media such as Twitter), and further use the model to examine psychological components of the structure of emotion.\n", "Thank you very much for your comments which helped us restructure the paper that is hopefully more intelligible now.\n\n“A deeper analysis of previous work on the combination of image and text for sentiment analysis (both datasets and methods) and its relation with the presented work is necessary.”\nWe added a “Related work” section (page 2) to better contextualise our proposed model with what has been previously done in visual and textual sentiment analysis.\n\n“Some figures could be more complete: to see more examples in Fig 1, 2, 3 would help to understand better the dataset and the challenges.”\nMore examples of Tumblr posts are now in the Appendix, page 13.\n" ]
[ 6, 4, 5, -1, -1, -1, -1 ]
[ 5, 5, 5, -1, -1, -1, -1 ]
[ "iclr_2018_BJ6anzb0Z", "iclr_2018_BJ6anzb0Z", "iclr_2018_BJ6anzb0Z", "BJ2J7pFgf", "BJ2J7pFgf", "rJA29bLxf", "HJcw0y5eM" ]
iclr_2018_rJ7yZ2P6-
Enhance Word Representation for Out-of-Vocabulary on Ubuntu Dialogue Corpus
Ubuntu dialogue corpus is the largest publicly available dialogue corpus, making it feasible to build end-to-end deep neural network models directly from the conversation data. One challenge of the Ubuntu dialogue corpus is the large number of out-of-vocabulary words. In this paper we propose an algorithm which combines the general pre-trained word embedding vectors with those generated on the task-specific training set to address this issue. We integrated character embedding into Chen et al.'s Enhanced LSTM method (ESIM) and used it to evaluate the effectiveness of our proposed method. For the task of next utterance selection, the proposed method has demonstrated a significant performance improvement over the original ESIM, and the new model has achieved state-of-the-art results on both the Ubuntu dialogue corpus and the Douban conversation corpus. In addition, we investigated the performance impact of end-of-utterance and end-of-turn token tags.
rejected-papers
This paper's idea is to augment pre-trained word embeddings on a large corpus with embeddings learned on the data of interest. This is shown to yield better results than the pre-trained word embeddings alone. This contribution is too limited to justify publication at ICLR.
train
[ "rJdjmmLez", "BkomChuxf", "H1RMVeqgz", "r1RKt4ffM", "BJzBG4Gfz", "ryX6HZzzM", "r1lgG9n-z", "HJAQJq2Wf", "rJswuIPbG", "HJAg9rDZM", "SJEsevIZf", "B1BaLmQWz", "SyDbmz6gf", "BkpgMaheG", "B15ZQj2gf", "rk9R5BjgM", "r1UNAmogz", "HkFtNMclM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "public", "author", "author", "author", "public", "author", "author", "public", "author", "public", "author", "author", "public" ]
[ "Summary:\nThis paper proposes an approach to improve the out-of-vocabulary embedding prediction for the task of modeling dialogue conversations. The proposed approach uses generic embeddings and combines them with the embeddings trained on the training dataset in a straightforward string-matching algorithm. In addition, the paper also makes a couple of improvements to Chen et. al's enhanced LSTM by adding character-level embeddings and replacing average pooling by LSTM last state summary vector. The results are shown on the standard Ubuntu dialogue dataset as well as a new Douban conversation dataset. The proposed approach gives sizable gains over the baselines.\n\n\nComments:\n\nThe paper is well written and puts itself nicely in context of previous work. Though, the proposed extension to handle out-of-vocabulary items is a simple and straightforward string matching algorithm, but nonetheless it gives noticeable increase in empirical performance on both the tasks. All in all, the methodological novelty of the paper is small but it has high practical relevance in terms of giving improved accuracy on an important task of dialogue conversation.", "The paper considers a setting (Ubuntu Dialogue Corpus and Douban Conversation Corpus) where most word types in the data are not covered by pretrained representations. The proposed solution is to combine (1) external pretrained word embeddings and (2) pretrained word embeddings on the training data by keeping them as two views: use the view if it's available, otherwise use a zero vector. This scheme is shown to perform well compared to other methods, specifically combinations of pretraining vs not pretraining embeddings on the training data, updating vs not updating embeddings during training, and others. \n\nQuality: Low. The research is not very well modularized: the addressed problem has nothing specifically to do with ESIM and dialogue response classification, but it's all tangled up. The proposed solution is reasonable but rather minor. Given that the model will learn task-specific word representations on the training set anyway, it's not clear how important it is to follow this procedure, though minor improvement is reported (Table 5). \n\nClarity: The writing is clear. But the point of the paper is not immediately obvious because of its failure to modularize its contributions (see above).\n\nOriginality: Low to minor.\n\nSignificance: It's not convincing that an incremental improvement in the pretraining phase is so significant, for instance compared to developing a novel better architecture actually tailored to the dialogue task. ", "The main contributions in this paper are:\n1) New variants of a recent LSTM-based model (\"ESIM\") are applied to the task of response-selection in dialogue modeling -- ESIM was originally introduced and evaluated for natural language inference. In this new setting, the ESIM model (vanilla and extended) outperform previous models when trained and evaluated on two distinct conversational datasets.\n\n2) A fairly trivial method is proposed to extend the coverage of pre-trained word embeddings to deal with the OOV problem that arises when applying them to these conversational datasets.\nThe method itself is to combine d1-dimensional word embeddings that were pretrained on a large unannotated corpus (vocabulary S) with distinct d2-dimensional word embeddings that are trained on the task-specific training data (vocabulary T). 
The enhanced (d1+d2)-dimensional representation for a word is constructed by concatenating its vectors from the two embeddings, setting either the d1- or d2-dimensional subvector to zeros when the word is absent from either S or T, respectively. This method is incorporated as an extension into ESIM and evaluated on the two conversation datasets.\n\nThe main results can be characterized as showing that this vocabulary extension method leads to performance gains on two datasets, on top of an ESIM-model extended with character-based word embeddings, which itself outperforms the vanilla ESIM model.\n\nThese empirical results are potentially meaningful and could justify reporting, but the paper's organization is very confusing, and too many details are too unclear, leading to low confidence in reproducibility. \n\nThere is basic novelty in applying the base model to a new task, and the analysis of the role of the special conversational boundary tokens is interesting and can help to inform future modeling choices. The embedding-enhancing method has low originality but is effective on this particular combination of model architecture, task and datasets. I am left wondering how well it might generalize to other models or tasks, since the problem it addresses shows up in many other places too...\n\nOverall, the presentation switches back and forth between the Douban corpus and the Ubuntu corpus, and between word2vec and Glove embeddings, and this makes it very challenging to understand the details fully.\n\nS3.1 - Word representation layer: This paragraph should probably mention that the character-composed embeddings are newly introduced here, and were not part of the original formulation of ESIM. That statement is currently hidden in the figure caption.\n\nAlgorithm 1:\n- What set does P denote, and what is the set-theoretic relation between P and T?\n- Under one possible interpretation, there may be items in P that are in neither T nor S, yet the algorithm does not define embeddings for those items even though its output is described as \"a dictionary with word embeddings ... for P\". This does not seem consistent? I think the sentence in S4.2 about initializing remaining OOV words as zeros is relevant and wonder if it should form part of the algorithm description?\n\nS4.1 - What do the authors mean by the statement that response candidates for the Douban corpus were \"collected by Lucene retrieval model\"?\n\nS4.2 - Paragraph two is very unclear. In particular, I don't understand the role of the Glove vectors here when Algorithm 1 is used, since the authors refer to word2vec vectors later in this paragraph and also in the Algorithm description.\n\nS4.3 - It's insufficiently clear what the model definitions are for the Douban corpus. Is there still a character-based LSTM involved, or does FastText make it unnecessary?\n\nS4.3 - \"It can be seen from table 3 that the original ESIM did not perform well without character embedding.\" This is a curious way to describe the result, when, in fact, the ESIM model in table 3 already outperforms all the previous models listed.\n\nS4.4 - gensim package -- for the benefit of readers unfamiliar with gensim, the text should ideally state explicitly that it is used to create the *word2vec* embeddings, instead of the ambiguous \"word embeddings\".\n\n", "For reference and result reproducibility (ESIM^a in Table 3 in the paper), I pasted the logs of performance evaluation on the validation every 1000 steps during the training. 
It took about 13 hours 41 minutes to reach 23000 training steps.\n\nstep: 1000\nMAP (mean average precision: 0.735673771383\tMRR (mean reciprocal rank): 0.735673771383\tTop-1 precision: 0.607566462168\tNum_query: 19560\n\nStep: 2000\nMAP (mean average precision: 0.762894553186\tMRR (mean reciprocal rank): 0.762894553186\tTop-1 precision: 0.643149284254\tNum_query: 19560\n\nStep: 3000\nMAP (mean average precision: 0.781005473594\tMRR (mean reciprocal rank): 0.781005473594\tTop-1 precision: 0.666462167689\tNum_query: 19560\n\nStep: 4000\nMAP (mean average precision: 0.791324840945\tMRR (mean reciprocal rank): 0.791324840945\tTop-1 precision: 0.679396728016\tNum_query: 19560\n\nStep: 5000\nMAP (mean average precision: 0.793004146785\tMRR (mean reciprocal rank): 0.793004146785\tTop-1 precision: 0.680112474438\tNum_query: 19560\n\nStep: 6000\nMAP (mean average precision: 0.806250669491\tMRR (mean reciprocal rank): 0.806250669491\tTop-1 precision: 0.698108384458\tNum_query: 19560\n\n....\nStep: 9000\nMAP (mean average precision: 0.819590433992\tMRR (mean reciprocal rank): 0.819590433992\tTop-1 precision: 0.717791411043\tNum_query: 19560\n\nStep: 10000\nMAP (mean average precision: 0.818069269971\tMRR (mean reciprocal rank): 0.818069269971\tTop-1 precision: 0.714008179959\tNum_query: 19560\n\nStep: 11000\nMAP (mean average precision: 0.818855596942\tMRR (mean reciprocal rank): 0.818855596942\tTop-1 precision: 0.714979550102\tNum_query: 19560\n\nStep: 12000\nMAP (mean average precision: 0.821677885708\tMRR (mean reciprocal rank): 0.821677885708\tTop-1 precision: 0.719325153374\tNum_query: 19560\n\nStep: 13000\nMAP (mean average precision: 0.8232087472\tMRR (mean reciprocal rank): 0.8232087472\tTop-1 precision: 0.721523517382\tNum_query: 19560\n\nStep: 14000\nMAP (mean average precision: 0.825161326971\tMRR (mean reciprocal rank): 0.825161326971\tTop-1 precision: 0.724948875256\tNum_query: 19560\n\nStep: 15000\nMAP (mean average precision: 0.825991109975\tMRR (mean reciprocal rank): 0.825991109975\tTop-1 precision: 0.725051124744\tNum_query: 19560\n\nStep: 16000\nMAP (mean average precision: 0.824983891648\tMRR (mean reciprocal rank): 0.824983891648\tTop-1 precision: 0.722750511247\tNum_query: 19560\n\nStep: 17000\nMAP (mean average precision: 0.827094653812\tMRR (mean reciprocal rank): 0.827094653812\tTop-1 precision: 0.727198364008\tNum_query: 19560\n\nStep: 18000\nMAP (mean average precision: 0.829552151297\tMRR (mean reciprocal rank): 0.829552151297\tTop-1 precision: 0.730981595092\tNum_query: 19560\n\nStep: 19000\nMAP (mean average precision: 0.830157512903\tMRR (mean reciprocal rank): 0.830157512903\tTop-1 precision: 0.73200408998\tNum_query: 19560\n\nStep: 20000\nMAP (mean average precision: 0.82902826468\tMRR (mean reciprocal rank): 0.82902826468\tTop-1 precision: 0.729703476483\tNum_query: 19560\n\nStep: 21000\nMAP (mean average precision: 0.832002669848\tMRR (mean reciprocal rank): 0.832002669848\tTop-1 precision: 0.734918200409\tNum_query: 19560\n\nStep: 22000\nMAP (mean average precision: 0.830050982731\tMRR (mean reciprocal rank): 0.830050982731\tTop-1 precision: 0.731339468303\tNum_query: 19560\n\nStep: 23000\nMAP (mean average precision: 0.832678571429\tMRR (mean reciprocal rank): 0.832678571429\tTop-1 precision: 0.735736196319\tNum_query: 19560\n\nStep: 24000\nMAP (mean average precision: 0.828641116467\tMRR (mean reciprocal rank): 0.828641116467\tTop-1 precision: 0.728936605317\tNum_query: 19560\n\nStep: 25000\nMAP (mean average precision: 0.826601259454\tMRR (mean reciprocal 
rank): 0.826601259454\tTop-1 precision: 0.725766871166\tNum_query: 19560\n", "Thank Hugo et al very much for reproducing the results. \n\n> The paper does not detail the computing infrastructure that was used.\nLocal machine : Intel(R) Core(TM) i7-6700 CPU @ 3.40GHz * 2\n RAM: 32G ( 8 * 4G (DDR4, 2133 MHz)\n One GPU : nvidia P5000 (16 G GPU RAM)\n\nYou used Telsa K80 (with 24G GPU RAM). I have not compared the performance between P5000 and Tesla K80.\n\n> Accuracy and cost over the validation set and over a subset of the training set were employed to evaluate the training of the model.\n\nIn our experiments, we evaluated the accuracy, MRR, P@1 on the validation set every 1000 steps and saved the model with the highest MRR. In your code, you saved the model with the best accuracy on the validation set every 50 steps. \nMy suggestion: \n 1) use MRR\n 2) perform the evaluation on the validation set every K steps (K could be larger to reduce the computational cost since evaluation on the validation set is slow). This will help you speed up the training.\n\n> In training the character embeddings using Word2Vec,\n>we used all the default hyperparameters, and trained each\n> context/response as distinct inputs such that each context/response\n>pair takes one line in the input data file.\n\nI assume that there is a typo here. 'character embedding' may be 'word embedding'.\nIn our algorithm 1, we used Word2vec to generate word embedding on the training set and concatenated them with pre-built GloVe vectors. Character Embedding is used in our ESIM. Since you only evaluated the baseline ESIM model, character embedding would not be used.\n\n> the training of character-composed embeddings is briefly described only as the concatenation of final state vectors at the BiLSTM.\nThe implementation of character embedding was showed in my first comment. It is relatively easy to integrate them into your code (see: tf_esim.py Line 43 and Line 44). Character-embedding may consume more memory. \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n", "INTRODUCTION\n\nAs participants in the 2018 ICLR Reproducibility Challenge, we aimed to reproduce the findings of this paper. The paper presents a methodology to improve performance on modelling dialogue systems that contain out-of-vocabulary words. Ultimately, we implemented Chen et al.’s Enhanced LSTM Method (ESIM) to model the Ubuntu Dialogue Corpus V2.0 as presented in the paper.\n\nOverall, we found the paper to be clear and concise. However, we found it difficult to implement the authors’ enhancements to the ESIM model. In particular, the training of character-composed embeddings is briefly described only as the concatenation of final state vectors at the BiLSTM. Further communication with the authors did not clarify enough for our purposes how exactly these embeddings are concatenated with word embeddings within the model. For this reason, as stated above, we decided only to replicate the ESIM.\n\nREPRODUCTION\n\nDownloading and preprocessing the Ubuntu Dialogue Corpus and pre-trained GloVe was a simple matter of following the procedure specified in the paper. Using publicly available datasets definitely facilitated reproducibility. In generating the dataset, all default parameters were used, with a random seed of ‘1234’ that the authors provided upon enquiry.\n\nThe generated data was modified into formats appropriate for Word2Vec and ESIM inputs. The dataset was tokenized using the publicly available Stanford CoreNLP library PTB Tokenizer, and then lemmatized. 
We then used a stored set of distinct tokens to filter the pre-trained GloVe vector, removing all words that do not appear in the training corpus. We thought this step would be beneficial since the unzipped glove dataset (glove.42B.300d.txt) is 4.67 GB large, which would take a considerable amount of memory simply to load it. The filtered GloVe dataset takes about 440 MB and contains roughly 9% of the original GloVe dataset. Through this process, we confirmed the authors’ observation that only 22% of the 823,057 Ubuntu tokens occur in the pre-built GloVe word vectors, and that our reproduction produced the same dataset.\n\t\t\t\t\nTo reproduce the baseline ESIM model, we were not able to access the source code of the paper’s authors due to issues regarding their employer’s open source policy. Instead, we implemented the ESIM using source code of an implementation by Williams et al., that was found on GitHub. We followed all hyperparameters specifications possible, and when particular hyperparameters were not provided, we consulted the authors who provided further detail. Specifically, these hyperparameters were ‘patience’, and ‘gradient clipping threshold’, and ‘max epochs’. Of these, the authors stated that ‘patience’ and ‘gradient clipping’ were not used, and that “training usually achieved the best performance (MRR) on the validation set at around 22000 - 25000 batch steps.” In general, the authors replied quickly and comprehensively to our enquiries within the comments section of OpenReview, which contributed positively to the reproducibility of the paper.\n\nWhen training the model, performance metrics were printed every 50 steps. Accuracy and cost over the validation set and over a subset of the training set were employed to evaluate the training of the model. We did not evaluate over the whole training set since this is significantly larger and would greatly slower the training time. We implemented our own algorithms to evaluate R@k and MRR.\n\nThe paper does not detail the computing infrastructure that was used. For our implementation we used an Google Cloud Engine instance (full technical specifications in the linked report).\n\nWe were only able to train the model to 9750 steps given our implementation architecture. This took 65 hours to train. This is considerably less than the 22000-25000 steps described by the authors as providing the best results. It is very likely that the authors trained their model using multiple GPU units, which would be unfeasible for us given the cost of GPUs on a virtual machine.\n\nRESULTS AND CONCLUSION\n\nGiven our limited computation power, we were able to train our reproduction model to the same level of performance as described by the authors. We observed a MRR of 0.733, lower than the MRR of 0.802 seen in the paper in question. However, on account of the methodology followed in our investigation, we can confirm that the model is re-computable. It seems highly likely that the paper’s results are valid, and would have been observed by us had our model been trained with more iterations.\n\nThe full report by H. Scurti, I. Sultan, and A. Wong is contained in the folder ‘report’ in the repository linked below: \n\nhttps://bitbucket.org/hugoscurti/comp551_f17_a4/src/\n", "Thank for your valuable feedback. \n\n> the addressed problem has nothing specifically to do with ESIM and dialogue response classification, but it's all tangled up. The proposed solution is reasonable but rather minor. 
\n\nIn order to check whether the effectiveness of the proposed enhanced representation depends on ESIM model and dataset, I uploaded a revision (12/11/2017) to use a very simple model (represent contexts/responses by a simple average of word vectors). I evaluated it on Ubuntu, Douban and WikiQA datasets. The results on the enhanced representation are still better on the above three datasets. This may indicate that the enhanced vectors may fuse domain-specific info into pre-built vectors. Also this process is unsupervised.\n\nSee section \"4.5 EVALUATION OF ENHANCED REPRESENTATION ON A SIMPLE MODEL\"\n\n\n\n\n", "I uploaded the revision on 12/11/2017 to address whether the effectiveness of the proposed enhanced representation depends on ESIM model and datasets.\n\nI added a section \"4.5 EVALUATION OF ENHANCED REPRESENTATION ON A SIMPLE MODEL\". Here I used a very simple model : represent contexts (or responses) by a simple average of word vectors. Cosine-similarity is used to rank candidate responses. The results on the enhanced vectors are still better. I also tested it on WikiQA dataset.", "> 1. We used stanford CoreNLP's library \nWe wrote a java program based on CoreNLP library to perform PTBTokenizer, other than command-line interface (CLI). For CLI, it is not easy to create input-output correspondence.\nSee java API example (https://stanfordnlp.github.io/CoreNLP/api.html)\nProperties props = new Properties();\nprops.put(\"annotators\", \"tokenize, ssplit, lemma\"}\n\n> Regarding word2vec, did you use any non-default hyperparameters? \nuse the default. Iter=20\n> did you train contexts and responses as distinct inputs or concatenate the context-response pairs to train?\ndistinct inputs. Each context/response takes one line in the input data file.\n\n\n\n\n\n", "Hi,\n\nWe have a few more questions.\n\n1. We used starnford CoreNLP's library (https://stanfordnlp.github.io/CoreNLP/tokenize.html) to use the equivalent of the PTBTokenizer, as stated in the paper. However, this produced 811,059 tokens (instead of 823,057 tokens). Have you specified any special parameters when applying tokenization?\n2. Regarding word2vec, did you use any non-default hyperparameters? And for training, did you train contexts and responses as distinct inputs or concatenate the context-response pairs to train?\n\nWe haven't received an email from you yet, but if you'd rather communicate through email, you can reach us at alexander.wong4@mail.mcgill.ca\n\nThanks!", "I uploaded a new revision on Dec. 6.\nOn Table 5, added performance comparison with FastText vectors.\n\nUsed the fixed pre-built FastText vectors ( https://s3-us-west-1.amazonaws.com/fasttext-vectors/wiki.en.zip) where word vectors for out-of-vocabulary words were computed based on built model.\nThat is,\nall_words_on_ubtuntu_dataset|./fasttext print-word-vectors wiki.en.bin > ubuntu_fastText_word_vectors.txt\n(see: https://github.com/facebookresearch/fastText)\n\nThe performance of the proposed method is better.", "Thank for your feedback. I have uploaded a new revision based on your suggestions.\n\n> The embedding-enhancing method has low originality but is effective on this particular combination of model architecture, task and datasets. I am left wondering how well it might generalize to other models or tasks, since the problem it addresses shows up in many other places too...\n\nGood point. I will test embedding-enhanced method on other benchmark set/task to check whether it is still effective. 
I will report results here.\n\n> S3.1 - Word representation layer: This paragraph should probably mention that the character-composed embeddings are newly introduced here\nI updated it in revised version based on your advice.\n\n> What set does P denote, and what is the set-theoretic relation between P and T?\nP: all words in training/validation/testing sets (number of unique words could be large)\nT: words with word2vec embedding on the training set. T is a subset of P. Word2vec also uses word document frequency to remove some low frequency words.\n\nIn the revised version, I change output to \" dimension d1 + d2 for (S\\cap P) \\cup T\" and added notes \"The remaining words which are in P and not in the above output dictionary are initialized with zero vectors\". Here we did not store word with zero vector in the above dictionary to save space in the output dictionary. This initialization is usually done during neural network initialization stage.\n\n> S4.1 - What do the authors mean by the statement that response candidates for the Douban corpus were \"collected by Lucene retrieval model\"?\nBased on your advice, I added the following sentences in the revised paper\n\"That is, the last turn of each Douban dialogue with additional keywords extracted from the context on the test set was used as query to retrieve 10 response candidates from the Lucene index set (Details are referred to section 4 in (Wu et al., 2017)).\" \n\nDouban data was created by Wu et al., not by us (paper: https://arxiv.org/pdf/1612.01627.pdf, \nSee section 4: Response Candidate retrieval and Section 5.2 Douban Conversation Corpus). On this dataset, response negative candidates on the training/validation sets were random sampled whereas the retrieved method was used for testing set. \n\n> S4.2 - Paragraph two is very unclear. In particular, I don't understand the role of the Glove vectors here when Algorithm 1 is used, since the authors refer to word2vec vectors later in this paragraph and also in the Algorithm description.\n\nHere GloVe vectors are just pre-trainined word embedding ones from a general large dataset.\n\nFor the clarification, I added the following sentence in Section 3.2\n\"Here the pre-trainined word vectors can be from known methods such as GloVe (Pennington et al., 2014), word2vec (Mikolov et al., 2013) and FastText (Bojanowski et al., 2016).\".\n\nOn the training set we used word2vec in Algorithm 1 though other methods (GloVe and FastText) can be used too. \n\n> S4.3 - It's insufficiently clear what the model definitions are for the Douban corpus. Is there still a character-based LSTM involved, \nI used the same model layout and hyper-parameters for Douban and Ubuntu corpus. In Section 4.2 \n\"The same hyper-parameter settings are applied to both Ubuntu Dialogue and Douban conversation corpus.\"\n\nOnly the differences are pre-trained embedding vectors and word2vec generated on the training sets. Wu et al's Douban dataset (Chinese) have been already tokenized so that it is easy for us to run word2vec based on gensim. \n\n> does FastText make it unnecessary?\nFor western languages such as English, Germany, FastText generates ngram (character) internal embeddings and are used to address out-of-vocabulary issue. For OOV (a word is out of FastText pre-trained embeddings), we can use average of word ngram to obtain its representation. 
For Ubuntu corpus, I can test it if you think that it is useful.\nFor Douban, it is not easy for us to do it since dataset has been tokenized by Chinese tokenizer.\n\n> S4.3 - \"It can be seen from table 3 that the original ESIM did not perform well without character embedding.\" \nThanks. I changed it to \"\nIt can be seen from table 3 that character embedding enhances the performance of original ESIM.\"\n\"\n\n> S4.4 - gensim package -- for the benefit of readers unfamiliar with gensim, the text should ideally state explicitly that it is used to create the *word2vec* embeddings, \nI updated it in revised version based on your advice.\n\n\n\n\n\n\n\n\n\n\n\n\n", "Hi,\n\nThanks for all of this, will definitely take time to go through your notes. For more communication, my email is alexander.wong4@mail.mcgill.ca\n\nCheers", "Hi, Alex,\n I could not see your email in open review profile (\"a****4@cs\"). Open source the code in the paper is in progress. I don't know whether I can share the code for this reproduction challenge now and need to check the legal department in my company. \n\n> 1. What random seed did you use to generate the Ubuntu corpus?\njust used the default one (default = 1234) (see: https://github.com/rkadlec/ubuntu-ranking-dataset-creator) so that results are comparable with others.\n\n> 2. How did you implement the character-composed embedding? \n> 3. could you clarify on the concatenation of word and character embeddings?\n\nI used the tensorflow (tf.nn.bidirectional_dynamic_rnn) to conduct all experiments in the paper.\nFor example, you can define function below:\ndef lstm_layer(inputs, input_seq_len, rnn_size, dropout_keep_prob, scope, scope_reuse=False):\n with tf.variable_scope(scope, reuse=scope_reuse) as vs:\n fw_cell = tf.contrib.rnn.LSTMCell(rnn_size, forget_bias=1.0, state_is_tuple=True, reuse=scope_reuse)\n fw_cell = tf.contrib.rnn.DropoutWrapper(fw_cell, output_keep_prob=dropout_keep_prob)\n bw_cell = tf.contrib.rnn.LSTMCell(rnn_size, forget_bias=1.0, state_is_tuple=True, reuse=scope_reuse)\n bw_cell = tf.contrib.rnn.DropoutWrapper(bw_cell, output_keep_prob=dropout_keep_prob)\n rnn_outputs, rnn_states = tf.nn.bidirectional_dynamic_rnn(cell_fw=fw_cell, cell_bw=bw_cell,\n inputs=inputs,\n sequence_length=input_seq_len,\n dtype=tf.float32)\n return rnn_outputs, rnn_states\n\nThen\n#context_char_embedded: [batch_size * max_sequence_length, max_word_length, embed_char_dim]\n#context_char_length: [batch_size * max_sequence_length] (define number of character per word)\n#charRNN_size: 40 \n#max_word_length: 18\n#max_sequence_length: 180\n#dropoutput_keep_prob: 1.0\n#embed_char_dim: 69\n#batch_size: 128\nchar_rnn_output_context, char_rnn_state_context = lstm_layer(context_char_embedded, context_char_length, charRNN_size, dropout_keep_prob, charRNN_scope_name, scope_reuse=False)\n\n#response_char_embedded: [batch_size * max_sequence_length, max_word_length, embed_char_dim]\n#response_char_length: [batch_size * max_sequence_length]\n\nchar_rnn_output_response, char_rnn_state_response = lstm_layer(response_char_embedded,\nresponse_char_length, charRNN_size, dropout_keep_prob, charRNN_scope_name, scope_reuse=True)\n\n#context char representation\nchar_embed_dim = charRNN_size * 2\n#context_char_state: [batch_size * max_sequence_length, char_embed_dim]\ncontext_char_state = tf.concat(axis=1, values=[char_rnn_state_context[0].h, char_rnn_state_context[1].h])\n#reshape \ncontext_char_state = tf.reshape(context_char_state, [-1, max_sequence_length, char_embed_dim])\n\nThe 
similar operations are applied to char_rnn_state_response.\n\nFor word embedding, I assume that you can get \"context_word_output and response_word_output\"\nBoth tensors will have shape [batch_size, max_sequence_length, word_embedding_dim]\nThen you can use tf.concat to get the combined representation.\n\n> Regarding your ESIM, what settings did you use for the following hyper-parameters: patience, gradient clipping threshold, max epochs?\nI am not familiar with patience. No gradient clipping was used. In my experiments, training usually achieved the best performance (MRR) on the validation set at around 22000 - 25000 batch steps. \n\nNote: tensorflow version (tensorflow-gpu (1.1.0)). \n\nIf you share your email, we can communicate through email or another channel.\n\n\n\n\n\n\n\n\n", "Thank you for your willingness to help!\n\nUnfortunately, we are not in a position to enter a legal contractual agreement on behalf of the University, but if there is still a way to share any source code anonymously that would be helpful. We would only be using your source material for this reproduction challenge. If not, then you probably do not need to stay anonymous for further correspondence. You can find my email address on my OpenReview profile.\n\nIn terms of implementation details, we do have some questions:\n1. What random seed did you use to generate the Ubuntu corpus?\n2. How did you implement the character-composed embedding? More specifically, could you give more detail on you are describing in this line from section 3.1: \"The character-composed embedding is generated by concatenating the final state vector of the forward and backward direction of bi-directional LSTM (BiLSTM)\"\n3. Could you clarify on the concatenation of word and character embeddings?\n4. Regarding your ESIM, what settings did you use for the following hyper-parameters: patience, gradient clipping threshold, max epochs?\n\nAgain, if you'd rather communicate through email or another channel, feel free.\n\nThanks!", "Hi, Alex,\n Thank for your interest in our paper. Open source approval process in our company may take time. At the same time, if further clarification about technical implementation details (e.g hyper-parameter setting) is needed, feel free to ask here. We like to help you reproduce the results in the paper.", "We are extremely excited that you have selected our paper for reproducibility. We are going through our employer's open source approval process which will take a much longer time than the Dec 15 deadline. Few questions that may help us with alternatives. \n\n1. Do we need to stay anonymous to continue further our correspondence?\n2. Are you open for us to enter a legal contractual agreement to access our source code between our employer and your school for the \"reproducibility\" purpose? This would be potentially a faster process to give you access to our source code. I can explore this route to get more affirmative answers on the timing if you are open to enter a legal contractual agreement, like \"no cost collaboration\".\n\nAt this time, we believe open source process would take beyond your Dec 15 deadline, but we hope to finish the open source approval for the conference date. \nIf you have any further questions about our paper, please let us know as well. \n", "Dear Authors,\n\nI am part of a team at McGill University participating in the ICLR 2018 Reproducibility Challenge (linked below). 
We have chosen to reproduce your study and are wondering if you would like to share some or all of the code you used.\n\nhttp://www.cs.mcgill.ca/~jpineau/ICLR2018-ReproducibilityChallenge.html\n\nThank you!" ]
[ 6, 3, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 3, 5, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_rJ7yZ2P6-", "iclr_2018_rJ7yZ2P6-", "iclr_2018_rJ7yZ2P6-", "ryX6HZzzM", "ryX6HZzzM", "iclr_2018_rJ7yZ2P6-", "BkomChuxf", "SJEsevIZf", "HJAg9rDZM", "BkpgMaheG", "B1BaLmQWz", "H1RMVeqgz", "BkpgMaheG", "B15ZQj2gf", "r1UNAmogz", "HkFtNMclM", "HkFtNMclM", "iclr_2018_rJ7yZ2P6-" ]
iclr_2018_By5SY2gA-
Towards Building Affect sensitive Word Distributions
Learning word representations from large available corpora relies on the distributional hypothesis that words present in similar contexts tend to have similar meanings. Recent work has shown that word representations learnt in this manner lack sentiment information which, fortunately, can be leveraged using external knowledge. Our work addresses the question: can affect lexica improve the word representations learnt from a corpus? In this work, we propose techniques to incorporate affect lexica, which capture fine-grained information about a word's psycholinguistic and emotional orientation, into the training process of Word2Vec SkipGram, Word2Vec CBOW and GloVe methods using a joint learning approach. We use affect scores from Warriner's affect lexicon to regularize the vector representations learnt from an unlabelled corpus. Our proposed method outperforms previously proposed methods on standard tasks for word similarity detection, outlier detection and sentiment detection. We also demonstrate the usefulness of our approach for a new task related to the prediction of formality, frustration and politeness in corporate communication.
rejected-papers
This work attempts to incorporate affect information from additional resources into word embeddings. This is a valuable goal, but the methods used are very similar to existing ones, and the experimental results are not convincing enough to make a strong case for accepting the paper.
train
[ "HyaDR19xG", "ByVH7MslM", "H1YXWe6gM", "HJqOCra7M", "r1t5jrTmG", "HyW-sH6XM", "H17McHTmf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "This paper proposed to use affect lexica to improve word embeddings. They extended the training objective functions of Word2vec and Glove with the affect information. The resulting embeddings were evaluated not only on word similarity tasks but also on a bunch of downstream applications such as sentiment analysis. Their experimental results showed that their proposed embeddings outperformed standard Word2vec and Glove. In sum, it is an interesting paper with promising results and the proposed methods were carefully evaluated in many setups.\n\nSome detailed comments are:\n-\tAlthough the use of affect lexica is innovative, the idea of extending the training objective function with lexica information is not new. Almost the same method was proposed in K.A. Nguyen, S. Schulte im Walde, N.T. Vu. Integrating Distributional Lexical Contrast into Word Embeddings for Antonym-Synonym Distinction. In Proceedings of ACL, 2016.\n-\tAlthough the lexicons for valence, arousal, and dominance provide different information, their combination did not perform best. Do the authors have any intuition why?\n-\tIn Figure 2, the authors picked four words to show that valence is helpful to improve Glove word beddings. It is not convincing enough for me. I would like to see to the top k nearest neighbors of each of those words.\n", "This paper proposes integrating information from a semantic resource that quantifies the affect of different words into a text-based word embedding algorithm. \n\nThe affect lexical seems to be a very interesting resource (although I'm not sure what it means to call it 'state of the art'), and definitely support the endeavour to make language models more reflective of complex semantic and pragmatic phenomena such as affect and sentiment. \n\nThe justification for why we might want to do this with word embeddings in the manner proposed seems a little unconvincing to me:\n\n- The statement that 'delighted' and 'disappointed' will have similar contexts is not evident to me at least (other then them both being participle / adjectives).\n\n- Affect in language seems to me to be a very contextual phenomenon. Only a tiny subset of words have intrinsic and context-free affect. Most affect seems to me to come from the use of words in (phrasal, and extra-linguistic) contexts, so a more context-dependent model, in which affect is computed over phrases or sentences, would seem to be more appropriate. Consider words like 'expensive', 'wicked', 'elimination'...\n\nThe model proposes several applications (sentiment prediction, predicting email tone, word similarity) where the affect-based embeddings yield small improvements. However, in different cases, taking different flavours of affect information (V, A or D) produces the best score, so it is not clear what to conclude about what sort of information is most useful. \n\nIt is not surprising to me that an algorithm that uses both WordNet and running text to compute word similarity scores improves over one that uses just running text. It also not surprising that adding information about affect improves the ability to predict sentiment and the tone of emails. \n\nTo understand the importance of the proposed algorithm (rather than just the addition of additional data), I would like to see comparison with various different post-processing techniques using WordNet and the affect lexicon (i.e. not just Bollelaga et al.) including some much simpler baselines. 
For instance, what about averaging WordNet path-based distance metrics and distance in word embedding space (for word similarity), and other ways of applying the affect data to email tone prediction?\n\n", "This paper introduces modifications the word2vec and GloVe loss functions to incorporate affect lexica to facilitate the learning of affect-sensitive word embeddings. The resulting word embeddings are evaluated on a number of standard tasks including word similarity, outlier prediction, sentiment detection, and also on a new task for formality, frustration, and politeness detection.\n\nA considerable amount of prior work has investigated reformulating unsupervised word embedding objectives to incorporate external resources for improving representation learning. The methodologies of Kiela et al (2015) and Bollegala et al (2016) are very similar to those proposed in this work. The main originality seems to be captured in Algorithm 1, which computes the strength between two words. Unlike prior work, this is a real-valued instead of a binary quantity. Because this modification is not particularly novel, I believe this paper should primarily be judged based upon the effectiveness of the method rather than the specifics of the underlying techniques. In this light, the performance relative to the baselines is particularly important. From the results reported in Tables 1, 2, and 3, I do not see compelling evidence that +V, +A, +D, or +VAD consistently lead to significant performance increases relative to the baseline methods. I therefore cannot recommend this paper for publication.", "Thank you for a detailed and a lucid review. We would like to address the points made individually: \n\n1) Detailed comment 1 - \"Although the use of affect lexica is innovative...\": Using the affect lexicon is different than prior work due to the presence of word level scores on a continuous scale instead of discrete labels. Hence, it demands for a novel way to define the strength between two words. This is covered by our Algorithm 1 in the paper. Our intuition for joint learning comes from how Word2Vec and GloVe training models work. Since this has been used to incorporate knowledge bases in the prior work, we end up with a similar looking loss term. However, we refer you to our Related Work section (see Section 2), where we individually point out the similarities and differences with the prior work in terms of the intuition and the mathematical formulations. \n\n2) Detailed comment 2 - \"combination of valence, arousal and dominance did not perform best\": The quality of embeddings itself can partially be the reason why the combination of all the affect scores does not perform the best in all cases. Exploring better techniques for combining the information is a part of future exploration. \n\nIn order to further understand our results, we also perform error analysis for sentiment prediction task. We have added our observations in a separate section (see Section 6) in the paper. \n\n3) Detailed comment 3 - \"top k nearest neighbors\": \n\nHere, we show the top 5 neighbors of each of the four words shown in Figure 2. 
We also show corresponding cosine similarity values.\n\ni) Refuse-\n\nGloVe baseline\t|\tGloVe + Valence information\n\ninsist, 0.716 |\tdeny, 0.721\nrefusal, 0.707\t|\treject, 0.707\t\ndecide, 0.682\t |\tinsist, 0.706\ndeny, 0.672\t\t|\trefusal, 0.697\nreject, 0.671\t |\tdecide, 0.680\n\nii) Reject-\n\nGloVe baseline\t|\tGloVe + Valence information\n\naccept, 0.708\t |\taccept, 0.726\nrefuse, 0.671\t |\trefuse, 0.707\noppose, 0.659\t|\toppose, 0.698\ndismiss, 0.657\t|\tdismiss, 0.692\ndeny, 0.657\t\t|\tdeny, 0.691\n\niii) Accept-\n\nGloVe baseline\t\t\t\t\t|\tGloVe + Valence information\n\ncannot, 0.727\t\t\t\t\t|\treject, 0.726\nreject, 0.708\t\t\t\t\t |\tcannot, 0.708\nVisa/Mastercard/Switch, 0.697\t | \tacknowledge, 0.701\nacknowledge, 0.696\t\t\t\t|\tVisa/Mastercard/Switch, 0.688\nmust, 0.671\t\t\t\t\t\t|\tDogs/pets, 0.676\n\niv) Approve-\n\nGloVe baseline\t\t\t\t|\tGloVe + Valence information\n\napproval, 0.740\t\t\t\t|\tapproval, 0.731\nagree, 0.669\t\t\t\t |\tendorse, 0.679\na.fldnoofproducts, 0.648\t |\tagree, 0.657\nproposal, 0.646\t\t\t\t|\ta.fldnoofproducts, 0.646\nendorse, 0.644\t\t\t\t|\tproposal, 0.625\n\n In general, among the cases which we analyze, we observe that using the affect information takes a word closer to the synonyms and farther from antonyms. However, there are exceptions to this. For instance, the similarity between 'reject' and 'accept' increases after the addition of valence information. We again refer you to the error analysis section where we describe various other instances and point out several possible reasons for errors. ", "Thank you for a detailed and a lucid review. We would like to address the points made individually: \n\n1) Explanation for calling \"state-of-the-art\": We apologise if the language is unclear at some point in the paper. But we do not mean to refer the affect lexicon itself as 'state-of-the-art'. We use 'state-of-the-art' for [3] which uses synonym pairs to modify GloVe embeddings. To have a proper comparison, we run this approach on our dataset and compare it to our results for all the evaluation tasks. I refer you to the 'Experiments and Results' section (Section 4) in the paper for more details. \n\n2) With reference to the 'delighted' and 'disappointed' example, we were citing the work by [1] who applied post-hoc affect-based signed clustering on word embeddings and identified that after incorporating valence ratings using the signed clustering algorithm, 'disappointed' moves further away from 'delighted' than in the original space. More details are available in the original paper. \n\n3) \"Affect as a contextual phenomenon\": We agree with the observation that words may have different affect in different contexts. We refer you to Section 5.1 in the paper on \"Affect Polysemy\", which we have added to discuss this issue in detail. \n\n4) \"Small improvements, not clear as to what to conclude\": In order to test the significance of our improvements, we perform hypothesis testing. Taking advice from prior work, we use Fisher transformation to test for statistical significance of our word similarity correlations. We were able to achieve a similar level of significance as prior work such as [2] and [3]. \n\nApart from this, we have added an error analysis section (see Section 6) in the paper to discuss reasons for inconsistent performance and why models make errors. We mainly focus on sentiment prediction task. We qualitative compare our approach with [3] and baseline approaches. 
Overall, we see a reasonable improvement in almost all the cases with the addition of valence affect scores. \n\n5) Comparison with Post processing techniques: To build a stronger evaluation, taking this advice, we have added comparison with a post-processing baseline in the paper. We refer you to Section 4.1 where we explain this approach. We have added the corresponding results in the Evaluation Framework (section 4.2). \n\nReferences: \n\n[1] Sedoc, J., et al. \"Predicting Emotional Word Ratings using Distributional Representations and Signed Clustering\". Proceedings of the European Association for Computational Linguistics. ACL, 2017. \n\n[2] Xu, Chang, et al. \"Rc-net: A general framework for incorporating knowledge into word representations.\" Proceedings of the 23rd ACM International Conference on Conference on Information and Knowledge Management. ACM, 2014. \n\n[3] Bollegala, Danushka, et al. \"Joint Word Representation Learning Using a Corpus and a Semantic Lexicon.\" AAAI. 2016. ", "Thank you for a detailed and a lucid review. We would like to address the points made individually: \n\n1) Lack of novelty: \n\n To the best of our knowledge, there is no prior art which incorporates affect lexicons in a joint learning framework with either Word2Vec or GloVe. Having word level scores on a continuous scale instead of discrete labels in the lexicon seems more useful but it demands for a novel way to define the strength between two words. This is covered by our Algorithm 1 in the paper. For incorporating this information in a joint learning framework, we take our intuition from how Word2Vec and GloVe models work. Since this has been used to incorporate knowledge bases in the prior work, we end up with a similar looking loss term. However, we refer you to our Related Work section (see Section 2), where we individually point out the similarities and differences with the prior work in terms of the intuition and the mathematical formulations. \n\n2) Unconvincing results: \n\nTo justify the significance of our improvements, we perform a hypothesis test. Taking advice from prior work, we use Fisher transformation to test for statistical significance of our word similarity correlations. We were able to achieve a similar level of significance as prior work such as [1] and [2]. \n\nTo partially address the inconsistent performance of our models across various tasks and analyze other reasons for errors, we perform an error analysis on sentiment detection task. We refer you to the Error Analysis section (Section 6) in the paper which we have added to discuss our observations. \n\nReferences: \n\n[1] Xu, Chang, et al. \"Rc-net: A general framework for incorporating knowledge into word representations.\" Proceedings of the 23rd ACM International Conference on Conference on Information and Knowledge Management. ACM, 2014. \n\n[2] Bollegala, Danushka, et al. \"Joint Word Representation Learning Using a Corpus and a Semantic Lexicon.\" AAAI. 2016. ", "The following changes have been made to the paper based on the Reviewer Comments. \n\n1) Comparison against post-training baseline: The paper presents a pretraining based approach to create enriched word embeddings. In order to have a complete evaluation, we compare our results to a post-training method. \n\nOnce we have trained embeddings using a standard approach, we modify that embedding space in a post-processing step to inculcate the affect information. The detailed explanation of this approach in added to Section 4.1. 
The results have been added in the Evaluation Framework (section 4.2). \n\n2) Affect Polysemy: We agree with AnonReviewer1 that words may have different affect in different contexts. A new discussion on this has been added to Section 5.1. \n\n3) Error Analysis: In order to gain further insights into our results, we perform an error analysis for sentiment prediction task. We qualitatively compare our approach with [1] and baseline methods. We have added our observations as a new Error Analysis section (see Section 6). \n\nReferences: \n\n[1] Bollegala, Danushka, et al. \"Joint Word Representation Learning Using a Corpus and a Semantic Lexicon.\" AAAI. 2016. " ]
[ 6, 4, 4, -1, -1, -1, -1 ]
[ 3, 5, 4, -1, -1, -1, -1 ]
[ "iclr_2018_By5SY2gA-", "iclr_2018_By5SY2gA-", "iclr_2018_By5SY2gA-", "HyaDR19xG", "ByVH7MslM", "H1YXWe6gM", "iclr_2018_By5SY2gA-" ]
iclr_2018_r17lFgZ0Z
Relevance of Unsupervised Metrics in Task-Oriented Dialogue for Evaluating Natural Language Generation
Automated metrics such as BLEU are widely used in the machine translation literature. They have also been used recently in the dialogue community for evaluating dialogue response generation. However, previous work in dialogue response generation has shown that these metrics do not correlate strongly with human judgment in the non task-oriented dialogue setting. Task-oriented dialogue responses are expressed on narrower domains and exhibit lower diversity. It is thus reasonable to think that these automated metrics would correlate well with human judgment in the task-oriented setting where the generation task consists of translating dialogue acts into a sentence. We conduct an empirical study to confirm whether this is the case. Our findings indicate that these automated metrics have stronger correlation with human judgments in the task-oriented setting compared to what has been observed in the non task-oriented setting. We also observe that these metrics correlate even better for datasets which provide multiple ground truth reference sentences. In addition, we show that some of the currently available corpora for task-oriented language generation can be solved with simple models and advocate for more challenging datasets.
rejected-papers
This paper tackles a very important problem: evaluating natural language generation. The paper presents an overview of existing unsupervised metrics and looks at how they correlate with human evaluation scores. This is important work and the empirical conclusions are useful to the community, but the datasets used are too limited, and the authors agree that the newer, larger, and more diverse datasets suggested by reviewers would be better for drawing more general conclusions. This work would indeed be much stronger if it relied on better, more recent datasets; therefore, publication as is seems premature.
train
[ "Hy0xrQegf", "SJWidMceG", "rkEMEscxf", "HyBXcDamM", "Bk66KPaQz", "ByiJYDTXM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "This paper's main thesis is that automatic metrics like BLEU, ROUGE, or METEOR is suitable for task-oriented natural language generation (NLG). In particular, the paper presents a counterargument to \"How NOT To Evaluate Your Dialogue System...\" where Wei et al argue that automatic metrics are not correlated or only weakly correlated with human eval on dialogue generation. The authors here show that the performance of various NN models as measured by automatic metrics like BLEU and METEOR is correlated with human eval.\n\nOverall, this paper presents a useful conclusion: use METEOR for evaluating task oriented NLG. However, there isn't enough novel contribution in this paper to warrant a publication. Many of the details unnecessary: 1) various LSTM model descriptions are unhelpful given the base LSTM model does just as well on the presented tasks 2) Many embedding based eval methods are proposed but no conclusions are drawn from any of these techniques.", "1) This paper conducts an empirical study of different unsupervised metrics' correlations in task-oriented dialogue generation. This paper can be considered as an extension of Liu, et al, 2016 while the later one did an empirical study in non-task-oriented dialogue generation. \n\n2)My questions are as follows:\ni) The author should give the more detailed definition of what is non-task-oriented and task-oriented dialogue system. The third paragraph in the introduction should include one use case about non-task-oriented dialogue system, such as chatbots.\nii) I do not think DSTC2 is good dataset here in the experiments. Maybe the dataset is too simple with limited options or the training/testing are very similar to each other, even the random could achieve very good performance in table 1 and 2. For example, the random solution is only 0.005 (out of 1) worse then d-scLSTM, and it also has a close performance compared with other metrics. Even the random could achieve 0.8 (out of 1) in BLEU, this is a very high performance.\niii) About the scatter plot Figure 3, the authors should include more points with a bad metric score (similar to Figure 1 in Liu 2016). \niv) About the correlations in figure b, especially for BLEU and METEOR, I do not think they have good correlations with human's judgments. \nv) BLEU usually correlates with human better when 4 or more references are provided. I suggest the authors include some dataset with 4 or more references instead of just 2 references.\n", "The authors present a solid overview of unsupervised metrics for NLG, and perform a correlation analysis between these metrics and human evaluation scores on two task-oriented dialog generation datasets using three LSTM-based models. They find weak but statistically significant correlations for a subset of the evaluated metrics, an improvement over the situation that has been observed in open-domain dialog generation.\nOther than the necessarily condensed model section (describing a model explained at greater length in a different work) the paper is quite clear and well-written throughout, and the authors' explication of metrics like BLEU and greedy matching is straightforward and readable. But the novel work in the paper is limited to the human evaluations collected and the correlation studies run, and the authors' efforts to analyze and extend these results fall short of what I'd like to see in a conference paper.\nSome other points:\n1. 
Where does the paper's framework for response generation (i.e., dialog act vectors and delexicalized/lexicalized slot-value pairs) fit into the landscape of task-oriented dialog agent research? Is it the dominant or state-of-the-art approach?\n2. The sentence \"This model is a variant of the “ld-sc-LSTM” model proposed by Sharma et al. (2017) which is based on an encoder-decoder framework\" is ambiguous; what is apparently meant is that Sharma et al. (2017) introduced the hld-scLSTM, not simply the ld-scLSTM.\n3. What happens to the correlation coefficients when exact reference matches (a significant component of the highly-rated upper right clusters) are removed?\n4. The paper's conclusion naturally suggests the question of whether these results extend to more difficult dialog generation datasets. Can the authors explain why the datasets used here were chosen over e.g. El Asri et al. (2017) and Novikova et al. (2016)?", "Thank you for the valuable feedback!\n\n1) Our results on metrics contradict previous work by Wen et al. (2015) which observed much lower BLEU scores for the same datasets. We presented all the base models so that we could have a clearer comparison between their results and ours.\n2) We couldn't find any clear correlation trends for the embedding based metrics. We will report that in the paper. In future work, we will look at larger datasets as mentioned above to AnonReviewer2 and AnonReviewer1 where there might possibly be clearer trends.", "Thank you for the valuable feedback!\n\n2)\n(i) We will make these changes.\n(ii) We agree with this analysis. We also found the DSTC2 dataset to be very simple for the NLG task. In future work we will be using the datasets from El Asri et al. (2017) and Novikova et al. (2016) as mentioned by AnonReviewer2 as well.\n(iii) We also include all the points but due to these task-oriented dialog datasets datasets being very simple most of these are overlapping in the upper right cluster.\n(iv) Most of the points being in the upper right cluster does distort the correlation values.\n(v) The dataset by Novikova et al. (2016) has larger number of references. We will use that dataset in future work.", "Thank you for the valuable feedback!\n\n1. Most of the research in task oriented dialog generation research uses dialog / speech acts and slots as they significantly help. It is expensive to collect these labels and there has been recent work on doing task-oriented dialog generation end-to-end without these labels. However we focus on NLG in a modular framework instead of end-to-end dialog generation and acts and slots are the dominant methodology in that area.\n2. We'll update this in the paper. They did not introduce the hld-lscLSTM model.\n3. We haven't looked at that because most of the points are actually within that cluster but this could be an interesting analysis to improve the system.\n4. This work was finished earlier than the conference submission period and at that time these datasets were very recent. We will be using these datasets in future work as they are much bigger and diverse than earlier datasets." ]
[ 4, 5, 5, -1, -1, -1 ]
[ 4, 4, 3, -1, -1, -1 ]
[ "iclr_2018_r17lFgZ0Z", "iclr_2018_r17lFgZ0Z", "iclr_2018_r17lFgZ0Z", "Hy0xrQegf", "SJWidMceG", "rkEMEscxf" ]
iclr_2018_ByhthReRb
A Neural Method for Goal-Oriented Dialog Systems to interact with Named Entities
Many goal-oriented dialog tasks, especially ones in which the dialog system has to interact with external knowledge sources such as databases, have to handle a large number of Named Entities (NEs). There are at least two challenges in handling NEs using neural methods in such settings: individual NEs may occur only rarely making it hard to learn good representations of them, and many of the Out Of Vocabulary words that occur during test time may be NEs. Thus, the need to interact well with these NEs has emerged as a serious challenge to building neural methods for goal-oriented dialog tasks. In this paper, we propose a new neural method for this problem, and present empirical evaluations on a structured Question answering task and three related goal-oriented dialog tasks that show that our proposed method can be effective in interacting with NEs in these settings.
rejected-papers
This work deals with the important task of capturing named entities in a goal-directed setting. The description of the work and the experiments are not ready for publication; for example, it is unclear whether the proposed method would have an advantage over existing methods such as the match-type features, which are only mentioned in Table 3 to establish the baseline on the original bAbI dialogue dataset but are not even discussed in the paper.
train
[ "SJP-xVtlf", "B1hNwrFeM", "S18iNb5lG", "r1qKHZ77f", "SyAtG-7mM", "Skp_ybmXz", "BJpwneQQM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "The paper proposes to generate embedding of named-entities on the fly during dialogue sessions. If the text is from the user, a named entity recognizer is used. If it is from the bot response, then it is known which words are named entities therefore embedding can be constructed directly. The idea has some novelty and the results on several tasks attempting to prove its effectiveness against systems that handle named entities in a static way.\n\nOne thing I hope the author could provide more clarification is the use of NER. For example, the experimental result on structured QA task (section 3.1), where it states that the performance different between models of With-NE-Table and W/O-NE-Table is positioned on the OOV NEs not present in the training subset. To my understanding, because of the presence of the NER in the With-NE-Table model, you could directly do update to the NE embeddings and query from the DB using a combination of embedding and the NE words (as the paper does), whereas the W/O-NE-Table model cannot because of lack of the NER. This seems to prove that an NER is useful for tasks where DB queries are needed, rather than that the dynamic NE-Table construction is useful. You could use an NER for W/O-NE-Table and update the NE embeddings, and it should be as good as With-NE-Table model (and fairer to compare with too).\n\nThat said, overall the paper is a nice contribution to dialogue and QA system research by pointing out a simple way of handling named entities by dynamically updating their embeddings. It would be better if the paper could point out the importance of NER for user utterances, and the fact that using the knowledge of which words are NEs in dialogue models could help in tasks where DB queries are necessary.", "The paper addresses the task of dealing with named entities in goal oriented dialog systems. Named entities, and rare words in general, are indeed troublesome since adding them to the dictionary is expensive, replacing them with coarse labels (ne_loc, unk) looses information, and so on. The proposed solution is to extend neural dialog models by introducing a named entity table, instantiated on the fly, where the keys are distributed representations of the dialog context and the values are the named entities themselves. The approach is applied to settings involving interacting to a database and a mechanism for handling the interaction is proposed. The resulting model is illustrated on a few goal-oriented dialog tasks.\n\n\nI found the paper difficult to read. The concrete mappings used to create the NE keys and attention keys are missing. Providing more structure to the text would also be useful vs. long, wordy paragraphs. Here are some specific questions:\n\n1. How are the keys generated? That are the functions used? Does the \"knowledge of the current user utterance\" include the word itself? The authors should include the exact model specification, including for the HRED model.\n\n2. According to the description, referring to an existing named entity must be done by \"generating a key to match the keys in the NE table and then retrieve the corresponding value and use it\". Is there a guarantee that a same named entity, appearing later in the dialog, will be given the same key? Or are the keys for already found entities retrieved directly, by value?\n\n3. In the decoding phase, how does the system decide whether to query the DB?\n\n4. 
How is the model trained?\n\nIn its current form, it's not clear how the proposed approach tackles the shortcomings mentioned in the introduction. Furthermore, while the highlighted contribution is the named entity table, it is always used in conjunction to the database approach. This raises the question whether the named entity table can only work in this context.\n\nFor the structured QA task, there are 400 training examples, and 100 named entities. This means that the number of training examples per named entity is very small. Is that correct? If yes, then it's not very surprising that adding the named entities to the vocabulary leads to overfitting. Have you compared with using random embeddings for the named entities?\n\nTypos: page 2, second-to-last paragraph: firs -> first, page 7, second to last paragraph: and and -> and\n\n", "Properly capturing named entities for goal oriented dialog is essential, for instance location, time and cuisine for restaurant reservation. Mots successful approaches have argued for separate mechanism for NE captures, that rely on various hacks and tricks. This paper attempt to propose a comprehensive approach offers intriguing new ideas, but is too preliminary, both in the descriptions and experiments. \n\nThe proposed methods and experiments are not understandable in the current way the paper is written: there is not a single equation, pseudo-code algorithm or pointer to real code to enable the reader to get a detailed understanding of the process. All we have a besides text is a small figure (figure 1). Then we have to trust the authors that on their modified dataset, the accuracies of the proposed method is around 100% while not using this method yields 0% accuracies?\n\nThe initial description (section 2) leaves way too many unanswered questions:\n- What embeddings are used for words detected as NE? Is it the same as the generated representation?\n- What is the exact mechanism of generating a representation for NE EECS545? (end of page 2)\n- Is it correct that the same representation stored in the NE table is used twice? (a) To retrieve the key (a vector) given the value (a string) as the encoder input. (b) To find the value that best matches a key at the decoder stage?\n- Exact description of the column attention mechanism: some similarity between a key embedding and embeddings representing each column? Multiplicative? Additive?\n- How is the system supervised? Do we need to give the name of the column the Attention-Column-Query attention should focus on? Because of this unknown, I could not understand the experiment setup and data formatting!\n\nThe list goes on...\n\nFor such a complex architecture, the authors must try to analyze separate modules as much as possible. As neither the QA and the Babi tasks use the RNN dialog manager, while not start with something that only works at the sentence level\n\nThe Q&A task could be used to describe a simpler system with only a decoder accessing the DB table. Complexity for solving the Babi tasks could be added later.\n", "R:\"One thing I hope the author could provide more clarification is the use of NER. For example, the experimental result on structured QA task (section 3.1), where it states that the performance different between models of With-NE-Table and W/O-NE-Table is positioned on the OOV NEs not present in the training subset. 
To my understanding, because of the presence of the NER in the With-NE-Table model, you could directly do update to the NE embeddings and query from the DB using a combination of embedding and the NE words (as the paper does), whereas the W/O-NE-Table model cannot because of lack of the NER. This seems to prove that an NER is useful for tasks where DB queries are needed, rather than that the dynamic NE-Table construction is useful. You could use an NER for W/O-NE-Table and update the NE embeddings, and it should be as good as With-NE-Table model (and fairer to compare with too).\"\n It is totally true that NER is useful for these kind of tasks, and we utilise it. But just the NER alone is not enough. For example, in a dialog there could be multiple NEs of the same type occurring at different places and we need to choose the right one, say for a DB query. This can be done using the dynamic NE-table idea and it also a differentiable process and hence can be learnt using back propagation. Just identifying the NEs alone is not enough. \n We do accept that for simple structured synthetic tasks such as the QA task of ours one could write a simple rule based system to utilise NER and do the task perfectly. Infact for the bAbI task, as shown in the original paper, one can build a rule based system (which are brittle ofcourse) to get 100% accuracy on all tasks, since the data is a synthetic simulated one, but this is not true for real data. For more sophisticated tasks, where it's not possible to solve the task by writing a set of rules, our approach for handling named entities provides a first of its kind solution and addresses problems associated with learning embeddings for rare NEs and handling OOV NEs. It also gives a way to work with exact NE values, within the neural learning framework.\n Also, for all our dialog tasks in the bAbI dialog dataset, we do give the NE type information to the baseline W/O-NE table model as well.\n\nR:\"It would be better if the paper could point out the importance of NER for user utterances, and the fact that using the knowledge of which words are NEs in dialogue models could help in tasks where DB queries are necessary.\"\n We totally agree with this point and we have added this to our updated version of the paper.\n\n ", "Following your suggestion, we have rewritten the proposed solution in section 2 adding equations and modifying the figure to give the concrete mappings used to create the NE keys. We have tried to split the core idea from the retrieval mechanism used in the experiments in the text and have explained the retrieval mechanism separately in section 3.1\n\nR:\"1. How are the keys generated? That are the functions used? Does the \"knowledge of the current user utterance\" include the word itself?\"\n When using an RNN sentence encoder, the exact mechanism for generating an embedding for a NE is shown in equation 1 in section 2. It is also shown in figure 1. In words, when the encoder RNN encounters a NE, the representation of the dialog so far, the sentence representation of the current user utterance so far and the NE type information are used (a linear transformation) to generate a neural embedding on the fly and is stored in the NE-Table as key along with the NE associated with it stored as the value. \nThe knowledge of the current NE word is given by its NE-type.\n\nR:\"2. 
According to the description, referring to an existing named entity must be done by \"generating a key to match the keys in the NE table and then retrieve the corresponding value and use it\". Is there a guarantee that a same named entity, appearing later in the dialog, will be given the same key? Or are the keys for already found entities retrieved directly, by value?\"\n A new key is generated for each named entity that comes during a dialog, irrespective of whether they have occurred before in the dialog (hence already in the NE table) or not. \n\nR:\"3. In the decoding phase, how does the system decide whether to query the DB?\"\n The system has to make a decision whether it has to query the DB or not. The exact way and place where this is done is task dependent. For example, in the QA task, query to the DB happens always. In the case of the bAbI dialog task 1 and 2, query to the DB happens only if the agent chooses to output the “api_call’’ sentence. In dialog task 4, the DB query happens whenever the output system utterance has a NE_tag as a part of it (hence requires some information from the DB).\n\nR:\"4. How is the model trained?\"\n In all our experiments the model is trained in a fully supervised way, along with the labels for column and row attentions (for the DB retrieval mechanism).\n\nR:\"In its current form, it's not clear how the proposed approach tackles the shortcomings mentioned in the introduction. Furthermore, while the highlighted contribution is the named entity table, it is always used in conjunction to the database approach. This raises the question whether the named entity table can only work in this context.\"\n We have a method that can learn to store and point to any NE that has occurred in the dialog so far, depending upon the requirement of the downstream task. This gives neural way of interacting with NEs that does not have the following issues: Explosion in vocabulary size, issues associated with not learning good embeddings as a particular NE occurs only few times in a dataset, loss of information and inability to refer to particular NEs of the same type, which happens when NEs are replaced by their NE type tags, issues with OOV NE words. The idea is general enough to be applied in any NLP task that involves NEs. Here we focus on goal-oriented dialog, which almost always has an external DB and has the set of issues mentioned above. We are working on experiments which don't involve a DB and will share more details on future work.\n\nR:\"For the structured QA task, there are 400 training examples, and 100 named entities. This means that the number of training examples per named entity is very small. Is that correct? If yes, then it's not very surprising that adding the named entities to the vocabulary leads to overfitting. Have you compared with using random embeddings for the named entities?\"\n NEs in general have this issue of not being able to learn good neural embeddings for them since the individual NEs occur only few number of times in a dataset. As you have pointed out, that is very much true here as well. Here the whole dataset dominated by lot of NEs, but the training examples per NE is similar to a real world scenario.\nThe point that we were trying to emphasize was about the new NEs that come during the test time in the questions, which the system has not seen in the questions during the training time. The embeddings of these new NEs that come during the test time alone, have not been trained and hence remain random as they were initialised. 
In a general task, this might be confusing to a system that tries to interpret or work with their embeddings. Following your suggestion, we did an experiment (Structured QA task) with random embeddings for NEs and fixing it throughout, the test performance (accuracy) is still 82 %, compared to the 100% accuracy which we obtain using the NE-table idea.", "Following your suggestion, we have rewritten the section that explains the proposed method (Section 2) (along with equations to show the exact process of NE key generation) and the section that explains the retrieval mechanism used for the experiments (Section 3.1). We have also tried to modify the figures for easier and detailed understanding. We hope the modified text read in coordination with the figure should make it understandable now.\n\nR: \"What embeddings are used for words detected as NE? Is it the same as the generated representation?\"\n Yes, the new generated key embedding for the NE is fed to the sentence encoder as the NE’s word embedding.\n\nR:\"What is the exact mechanism of generating a representation for NE EECS545?\"\n When using an RNN sentence encoder, the exact mechanism for generating an embedding for a NE is shown in equation 1 in section 2. It is also shown in figure 1. In words, when the encoder RNN encounters a NE, the representation of the dialog so far (h^d_t−1 ), the sentence representation of the current user utterance so far (h^x,enc_t,i−1 ) and the NE type information are used (a linear transformation) to generate a neural embedding (z^ne_i ) on the fly and is stored in the NE-Table as key along with the NE x_t,i associated with it stored as the value.\n\nR: \"Is it correct that the same representation stored in the NE table is used twice? (a) To retrieve the key (a vector) given the value (a string) as the encoder input. (b) To find the value that best matches a key at the decoder stage?\"\n No. It is true that the representation stored in the NE table is used to find the value that best matches a key at the decoder stage. In the encoder side, when the encoder encounters a NE, it generates a new key (using the knowledge of the dialog so far and the NE type) for it and stores it in the NE table (even if this particular NE has come before in the dialog)\n\nR:\"Exact description of the column attention mechanism: some similarity between a key embedding and embeddings representing each column? Multiplicative? Additive?\"\n We use dot product followed by sigmoid for finding the similarity and calculating the attention score.\n\nR:\"How is the system supervised? Do we need to give the name of the column the Attention-Column-Query attention should focus on?\"\n Yes, as mentioned in the paper, in all our experiments, the system is supervised. We give labels to the column and row attentions. \n\nR:\"For such a complex architecture, the authors must try to analyze separate modules as much as possible.\"\n We have tried to incorporate this suggestion into our new modified version. In Section 2: “Proposed solution”, we focus only on our core idea of usage of NE tables, without getting into the dialog manager or the retrieval mechanism used in our experiments. Later in Section 3.1, we explain the retrieval mechanism used in our experiments.\n\n\n\n", "First, we would like to thank the reviewers for their valuable reviews. The main issue, as we understand, seems to be with the writing and hence, the understandability of the proposed idea. 
We have made our sincere efforts in rewriting the paper, mainly the “Proposed Solution” section (2) and experimental setup section (3.1) to make things more clear and explicit. Though we answer the specific questions raised in the reviews below, we do request the reviewers to go through the paper once more for a more clear explanation of the idea and experiments.\n" ]
[ 6, 4, 3, -1, -1, -1, -1 ]
[ 3, 3, 3, -1, -1, -1, -1 ]
[ "iclr_2018_ByhthReRb", "iclr_2018_ByhthReRb", "iclr_2018_ByhthReRb", "SJP-xVtlf", "B1hNwrFeM", "S18iNb5lG", "iclr_2018_ByhthReRb" ]
iclr_2018_HJXyS7bRb
A Goal-oriented Neural Conversation Model by Self-Play
Building chatbots that can accomplish goals such as booking a flight ticket is an unsolved problem in natural language understanding. Much progress has been made to build conversation models using techniques such as sequence2sequence modeling. One challenge in applying such techniques to building goal-oriented conversation models is that maximum likelihood-based models are not optimized toward accomplishing goals. Recently, many methods have been proposed to address this issue by optimizing a reward that contains task status or outcome. However, adding the reward optimization on the fly usually provides little guidance for language construction and the conversation model soon becomes decoupled from the language model. In this paper, we propose a new setting in goal-oriented dialogue system to tighten the gap between these two aspects by enforcing model level information isolation on individual models between two agents. Language construction now becomes an important part in reward optimization since it is the only way information can be exchanged. We experimented our models using self-play and results showed that our method not only beat the baseline sequence2sequence model in rewards but can also generate human-readable meaningful conversations of comparable quality.
rejected-papers
While using self-play for training a goal-oriented dialogue system makes sense, the contribution of this paper compared to previous work (that the paper itself cites) seems too minor, and the limitations of using toy synthetic data further weaken the work.
train
[ "SkIuskXef", "B1uZU0Kgf", "SyZknb5ez" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "I like the idea of coupling the language and the conversation model. This is in line with the latest trends of constructing end-to-end NN models that deal with the conversation in a holistic manner. The idea of enforcing information isolation is brilliant. Creating hidden information and allowing the two-party model to learn through self-play is a very interesting approach and the results seem promising. \n\nHaving said that, I feel important references are missing and specific statements of the paper, like that \"Their success is however limited to conversations with very few turns and without goals\" can be argued. There are papers that are goal oriented and have many turns. I will just provide one example, to avoid being overwhelming, although more can be found in the literature. That would be the paper of T.-H. Wen, D. Vandyke, N. Mrksic, M. Gasic, L. Rojas-Barahona, P.-H. Su, S. Ultes and S. Young (2017). \"A Network-based End-to-End Trainable Task-oriented Dialogue System.\" EACL 2017, Valencia, Spain. In fact in this paper even more dialogue modules are coupled. So, the \"fresh challenge\" of the paper can be argued. \n\nIt is not clear to me how you did the supervised part of the training. To my experience, although supervised learning can be used, reinforcement learning seems to be the most popular choice. Also, I had to read most of the paper to understand that the system is based on a simulator. Additionally, it is not clear how you got the ground-truth for the training. How are the action and the dialogue generated by the simulator guaranteed to follow the optimal policy?\n \nI also disagree with the statement that \"based on those... to estimate rewards\". If ruled-based systems were sufficient, there would not be a need for statistical dialogue managers. However, the latter is a very active research area. \n\nFigure 1 is missing information (for my likings), like not defined symbols. In addition, it's not self-contained. Also, I would prefer a longer, descriptive and informative label to make the figure as self-explained as possible. I believe it would add to the clarity of the paper. \n\nAlso, fundamental information, according to my opinion is missing. For example, what are the restrictions R and how is the database K formed? What is the size of the database? How many actions do you define? Some of them are defined in the action state decoder, but it is not clear if it is all of them.\n\n\nGRU -> abbreviation not defined\n\nI would really appreciate a figure to better explain the subsection \"Encoding External Knowledge\". In the current form I am struggling to understand what the authors mean. \n\nHow is the embedding matrix E created?\n\nHave you tried different unit sizes d? Have you tried different unit sizes for the customer and the service? \n\n\"we use 2 transformation matrixes\" -> if you could please provide more details\n\nHow is equation 2 related to figure 1? \n\nTypo: \"name of he person\"\n\n\"During the supervised learning... and the action states\". I am not sure I get what you mean. May you make this statement more clearly by adding an equation for example?\n\nWhat happens if you use random rather than supervised learning weight initialisation? \n\nEquation 7: What does T stand for?\n\nI cannot find Table 1, 2 and 5 referred to in-text. Moreover I am not sure about quite some items. For example, what is number db? What is the inference set? \n\n500k of data is quite some. 
A figure on convergence would be nice.\n\nSetting generator: You mention the percentage of book and flight not found. What about the rest of the cases? \n\n\nTypo: “table 3 and table 3”\n\nThe set of the final states of the dialogue is not the same as the ones presented at Fig. 2.\n\nSub section reward generation is poorly described. After all, reward seems to play a very important role for the proposed system. Statements like “things such as” (instead the exhaustive list of rules for example) or “the error against the optimal distance” with no note what should be considered the optimal distance make the paper clarity decreased and the results not possible to be reproduced. Personally I would prefer to see some equations or a flow chart. By the way, have you tried and alternative reward function? \n\n\nTable 4 is not easy for me to understand. For example, what do you mean when you say eval reward?\n\nImplementation details. I fail to understand how the supervised learning is used (as said already). Also you make a note for the value network, but not for the policy network. \n\nThere are some minor issues with the references such as pomdp or lstm not being capitalised\n\n\nIn general, I believe that the paper has a great potential and is a noticeable work. However, \nthe paper could be better organised. Personally, I struggled with the clarity of some text portions.\nFor me, the main drawback of the paper is that it was't tested with human users. The actual success of the system when evaluated by humans can be surprisingly different from the one that comes from simulation. ", "Summary: The paper proposes a self-play model for goal oriented dialog generation, aiming to enforce a stronger coupling between the task reward and the language model.\n\nContributions:\n\nWhile there are architectural changes (e.g. the customer agent and client agent have different roles and parameters; the parameters of both agents are updated via self-play training), the information isolation claim is not clear. Both the previous work (Lewis et al., 2017) and the proposed approach pitch two agents against each other and the agents communicate via language utterances alone (e.g. rather than exchanging hidden states). In the previous work, the two agents share a set of initial conditions (the set of objects to be divided; this is required by the nature of the task: negotiation), but the goals of each agent are hidden and the negotiation process and outcome are only revealed through natural language. Could you expand on your claim regarding information isolation? Could you design an experiment which highlights the contribution and provide a comparison with the previous approach?\n\nFurthermore, divergence from natural language when optimizing the task reward remains an issue. As a result, both methods require alternate training between the supervised loss and the reinforcement loss.\n\nExperiments:\n\n1. Minor question: During self-play \"we conduct 1 supervised training using the training data every time we make a reinforcement update\". One iteration or one epoch of supervised training?\n\n2. The method is only evaluated on a toy dataset where both the structure of the dialog is limited (see figure 2) and the sentences themselves (the number of language templates is not provided). The referenced negotiation paper uses data collected from mechanical turk ensuring more diversity and the dataset is publicly available. Couldn't your method be applied to that setting for comparison?\n\n3. 
The qualitative evaluation shows compelling examples from the model. Are the results hand-picked to highlight the two outcomes? I wish more examples and some statistics regarding the diversity of produced dialogs were provided (e.g. how many times to they result in a booked flight vs. unfulfilled request and compare that with the training data).\n\n4. What is the difference between evaluation reward reported in Table 4 and self-play evaluation reward reported in Table 5? (Is the former obtained by conditioning on target utterances?). Is there a reason to not report the itemized rewards in Table 5 as well (Eval flight, Eval action) etc?\n\n5. The use of the value network vs. the policy network is not clarified in the model description nor in the experiments. Is the value network used to reduce the variance in the reward?\n\nFinally, there are several typos or grammatical errors, including:\n- Page 4, t and i should be the same.\n- Page 4. Use p(u_t | t_{<t-1}; \\theta) instead of p(u_t | t_{<t-1} | \\theta).\n- Page 2, second paragraph: \"which correctness\" -> \"whose correctness\".\n- Page 2, second-to-last paragraph: \"access to a pieces\" -> \"access to pieces\", \"to the best of it can\" -> \"as good as it can\".\n- Page 4. \"feeded\" -> fed\n- Page 5, second-to-last paragraph: \"dataset is consists of\" -> \"dataset consists of\".\n- Page 7/8: Both examples are labeled \"Sample dialog 1\"\n- Dataset & experiments: Table 3 and Table 3\n- Experiments: \"to see how our model performs qualitative\" -> \"to see how our model performs qualitatively\"\n- Related work: \"... of studying dialog system is to ...\" -> \"dialog systems\"\n- Conclusion: \"In those scenario\" -> \"In those scenarios\"", "This paper describes a method for improving a goal oriented dialogue system using selfplay. Using similar techniques to previous work, they pretrain the model using supervised learning, and then update it with selfplay reinforcement learning. The model is evaluated on a synthetic flight booking task, and selfplay training improves the results.\n\nI found it very difficult to work out what the contribution of this paper is over previous work, particularly the Lewis et al. (2017) paper that they cite. The approach to using selfplay RL seems almost identical in each case, so there doesn’t appear to be a technical contribution. The authors say that in contrast to Lewis et al. their “setting enforces the isolation of information between the models of the two agents” - I’m not sure what this means, but the agents in Lewis et al. each have information that the other does not. They also claim that Lewis et al.’s task “can easily degenerate into classification problems without the need to interact through dialogue”, but offer no justification for this claim.\n\nThe use of a synthetic dataset also weakens the paper, and means it is unclear if the results will generalize to real language. For example, Lewis et al. note challengings in avoiding divergence from human language during selfplay, which are likely to be less pronounced on a synthetic dataset. \n\nOverall, the paper needs to be much clearer about what its contributions are over previous work before it can be accepted.\n" ]
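The reviews above ask how the supervised and reinforcement updates are interleaved and whether the value network serves to reduce the variance of the reward signal. The sketch below illustrates that standard recipe, REINFORCE with a learned baseline alternating with a cross-entropy step on ground-truth dialogs; the toy softmax policy, the sizes, and the stand-in reward are illustrative assumptions, not the submission's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy softmax policy over K candidate utterances given a context vector x;
# all names and sizes here are illustrative stand-ins, not the paper's model.
K, D, lr = 4, 8, 0.1
theta = 0.01 * rng.standard_normal((K, D))   # policy parameters
w_value = np.zeros(D)                        # linear value baseline

def policy(x):
    logits = theta @ x
    p = np.exp(logits - logits.max())
    return p / p.sum()

def reinforce_update(x, a, reward):
    """Policy-gradient step; subtracting the value baseline reduces variance."""
    global theta, w_value
    p = policy(x)
    advantage = reward - w_value @ x
    grad_logp = -np.outer(p, x)
    grad_logp[a] += x                        # grad of log pi(a|x) for a softmax policy
    theta += lr * advantage * grad_logp
    w_value += lr * advantage * x            # regress the baseline toward the reward

def supervised_update(x, target_a):
    """Cross-entropy step on ground-truth dialog data."""
    global theta
    p = policy(x)
    grad = -np.outer(p, x)
    grad[target_a] += x
    theta += lr * grad

# Alternate one supervised update per reinforcement update during self-play.
for episode in range(200):
    x = rng.standard_normal(D)
    a = rng.choice(K, p=policy(x))           # agent's move in the self-play episode
    reward = float(a == 0)                   # stand-in for the end-of-dialog reward
    reinforce_update(x, a, reward)
    supervised_update(x, target_a=0)
```

The one-supervised-update-per-reinforcement-update schedule mirrors the sentence quoted in the review above; whether "1 supervised training" means one iteration or one epoch is exactly the open question.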
[ 6, 4, 3 ]
[ 3, 3, 4 ]
[ "iclr_2018_HJXyS7bRb", "iclr_2018_HJXyS7bRb", "iclr_2018_HJXyS7bRb" ]
iclr_2018_Syl3_2JCZ
A Self-Organizing Memory Network
Working memory requires information about external stimuli to be represented in the brain even after those stimuli go away. This information is encoded in the activities of neurons, and neural activities change over timescales of tens of milliseconds. Information in working memory, however, is retained for tens of seconds, suggesting the question of how time-varying neural activities maintain stable representations. Prior work shows that, if the neural dynamics are in the 'null space' of the representation - so that changes to neural activity do not affect the downstream read-out of stimulus information - then information can be retained for periods much longer than the time-scale of individual-neuronal activities. The prior work, however, requires precisely constructed synaptic connectivity matrices, without explaining how this would arise in a biological neural network. To identify mechanisms through which biological networks can self-organize to learn memory function, we derived biologically plausible synaptic plasticity rules that dynamically modify the connectivity matrix to enable information storing. Networks implementing this plasticity rule can successfully learn to form memory representations even if only 10% of the synapses are plastic, they are robust to synaptic noise, and they can represent information about multiple stimuli.
rejected-papers
This work extends Druckmann and Chklowskii, 2012 and demonstrates some interesting properties of the new model. This would be of interest to a neuroscience audience, but the focus is off for ICLR.
test
[ "S1VMieDlG", "SytEarFxz", "S1Vm7Z7GG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This is a great discussion on an interesting problem in computational neuroscience, that of holding an attractor memory stable even though individual neurons fluctuate. The previously published idea is that this is possible when the sum of all these memory neurons remain constant for the specific readout network, which is possible with the right dependency of the memory neurons. While this previous work relied on fine tuned weights to find such solutions, this work apparently shows that a gradient-based learning rule can finds easily robust solutions.\n\nUnfortunately, I am a bit puzzled about this paper in several ways. To start with I am not completely sure why this paper is submitted to ICLR. While it seems to address a technical issue in computational neuroscience, the discussion on the impact on machine learning is rather limited. Maybe some more discussion on that would be good.\n\nMy biggest concern is that I do not understand the model as presented. A recurrent network of the form in eq 1 is well known as an attractor network with symmetric weights. However, with the proposed learning rate this might be different as I guess this is not even symmetric. Shouldn’t dr_i/da_i = 1 for positive rates as a rectified linear function is used? I guess the derivation of the learning rule (eq.3) is not really clear to me. Does this not require the assumption of stationary single recurrent neuron activities to get a_i=\\sum L_ij r_j? How do the recurrent neurons fluctuate in the first experiments before noise is introduced? I see that the values change at the beginning as shown in Fig1a, but do they continue to evolve or do the asymptotic? I think a more careful presentation of this important part of the paper would be useful.\n\nAlso, I am puzzled about the readout, specifically when it comes to multiple memories. It seems that separate memories have different readout nodes as the readout weights has an additional index k in this case. I think the challenge should be to have multiple stable states that can be read out in the same pathway. I might miss here something, though I think that without a more clear explanation the paper is not clear for a novel reader. \n\nAnd how about Figure 3a? Why is the fever model suddenly shooting up? This looks rather than a numerical error.\nIn summary, while I might be missing some points here, I can not make sense of this paper at this point.", "A neural network model consisting of recurrently connected neurons and one or more readouts is introduced which aims to retain some output over time. A plasticity rule for this goal is derived. Experiments show the robustness of the network with respect to noisy weight updates, number of non-plastic connections, and sparse connectivity. Multiple consecutive runs increase the performance; furthermore, remembering multiple stimuli is possible. Finally, ideas for the biological implementation of the rule are suggested.\n\nWhile the presentation is generally comprehensible a number of errors and deficits exist (see below). In general, this paper addresses a question that seems only relevant from a neuroscience perspective. Therefore, I wonder whether it is relevant in terms of the scope of this conference. I also think that the model is rather speculative. The authors argue that the resulting learning rule is biologically plausible. But even if this is the case, it does not imply that it is implemented in neuronal circuits in the brain. As far as I can see, there exists no experimental evidence for this rule. 
\n\nThe paper shows the superiority of the proposed model over the approach of Druckmann & Chkolvskii (2012), however, it lacks in-depth analysis of the network behavior. Specifically, it is not clear how the information is stored. Do neurons show time-varying responses as in Druckmann & Chkolvskii (2012) or do all neuron stabilize within the first 50 ms (as in Fig. 2A, it is not detailed how the neurons shown there have been selected)? Do the weights change continuously within the delay period or do they also converge rapidly? This question is particularily important when considering multiple consecutive trials (cf. Fig. 5) as it seem that a specific but constant network architecture can retain the desired stimulus without further plasticity. Weight histograms should be presented for the different cases and network states. Also, since autapses are allowed, an analysis of their role should be performed. This information is vital to compare the model to findings from neuroscience and judge the biologic realism.\n\nThe target used is \\hat{s}(t) / \\hat{s}(t = 0), this is dubbed \"fraction of stimulus retained\". In most plots, the values for this measure are <= 1, but in Fig. 3A, the value (for the FEVER network) is > 1. Thus, the name is arguably not well-chosen: how can a fraction of remembrance be greater than one? Also, in a realistic environment, it is not clear that the neuronal activities decay to zero (resulting in \\hat{s}(t) also approaching zero). A squared distances measure should therefore be considered.\n\nIt is not clear from the paper when and how often weight updates are performed. Therefore, the biologic plausability cannot be assessed, since the learning rule might lead to much more rapid changes of weights than the known learning rules in biological neural networks. Since the goal seems to be biologic realism, generally, spiking neurons should be used for the model. This is important as spiking neural networks are much more fragile than artificial ones in terms of stability.\n\nFurther remarks:\n\n- In Sec. 3.2.1, noise is added to weight updates. The absolute values of alpha are hard to interpret since it is not clear in what range the weights, activities, and weight updates typically lie.\n\n- In Sec. 3.2.2 it is shown that 10% plastic synapses is enough for reasonable performance. In this case, it should be investigated whether the full network is essential for the memory task at all (especially since later, it is argued that 100 neurons can store up to 100 stimuli).\n\n- For biologic realism, just assuming that the readout value at t = 0 is the target seems a bit too simple. How does this output arise in the first place? At least, an argument for this choice should be presented.\n\n\nRemarks on writing:\n\n- Fig. 1A is too small to read.\n\n- The caption of Fig. 4C is missing.\n\n- In Fig. 7AB, q_i and q_j are swapped. Also, it is unclear in the figure to which connection the ds and qs belong.\n\n- In 3.6.1, Fig. 7 is referenced, but in the figure the terminology of Eq. 5 is used, which is only introduced in Sec. 3.6.2. This is confusing.\n\n- The beginning of Sec. 3.6 claims that all information is local except d\\hat{s}_k / dt, but this is not the case as d_i is not local (which is explained later).\n\n- The details of the \"stimulus presentation\" (i.e. it is not performed explicitly) should be emphasised in 2.1. 
Also, the description of the target \\hat{s} is much clearer in 3.4 than in 2.1 (where it should primarily be explained).\n\n- The title of the citation Cowan (2010) is missing.\n\n- In Fig. 2A, the formulas are too small too read in a printed version.\n\n- In Sec. 3.6.1 some sums are given over k, but k is also the index of a neuron in Fig. 7A (which is referenced there), this can be ambiguous and could be changed. ", "This paper presents a self-organizing (i.e. learned) memory mechanism in a neural model. The model is not so much an explicit mechanism, rather the paper introduces an objective function that minimizes changes in the signal to be memorized. \n\nThe model builds on the FEVER model (Druckmann and Chklowskii, 2012) and stays fairly close to the framework and goals laid out in this paper. The contribution offered in this paper is a gradient-based weight update (corresponding to the objective function being the square of the temporal derivative of the signal to memorize). This naturally extends the FEVER framework to non-linear dynamics and allows the model to learn to be more robust to weight noise than the original FEVER model. \n\nThe paper goes on to show a few properties of the new memory model including it's ability to remember multiple stimuli and sensitivity to various degrees of connectivity and plasticity. The authors also demonstrate that the update rule can be implemented with a certain degree of biological plausibility. In particular, they show that the need for the same weights to be used in the forward propagation of activity and the backward propagation of gradient can be relaxed. This result is consistent with similar findings in the deep learning literature (Lillicrap et al., 2016).\n\nFor the reader interested in models of working memory and how it can be implemented in dynamic neural hardware, I believe this paper does contribute something interesting to this field of study.\n\nI have two principle concerns with the paper. First, it seems that ICLR is not the right venue for this work. While ICLR has certainly published work with a strong neuro-scientific orientation, I believe this paper offers relatively little of interest to those that are not really invested in the models under consideration. The task that is considered is trivial - maintain a constant projection of the activity of the network activations. I would expect the model to attempt to do this in the service of another, more clearly useful task. In the RNN literature, there exists tasks such as the copy task that test memory, this model is really at a layer below this level of task sophistication. \n\nSecond, while the set of experiments are fair and appropriate, they also seem quite superficial. There is a lack of analysis of the consequences of the learning objective on the network dynamics. There just does not seem to be the same level of contribution as I would expect from an ICLR paper. " ]
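Several of the comments above turn on the model's core ingredients: rate dynamics with a rectified-linear transfer function, a linear readout through the weights d_i, the "fraction of stimulus retained" target ŝ(t)/ŝ(0), and a weight update obtained as a gradient step on the squared temporal derivative of the readout. The sketch below is one plausible discretization of that objective, not the paper's actual Eq. 3; the readout form ŝ = Σ_i d_i r_i, the constants, the Euler integration, and the learning-rate choice are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Rate dynamics tau * da/dt = -a + L r with r = max(a, 0), a fixed linear
# readout s_hat = d . r, and a plasticity step that descends the squared
# temporal derivative (d s_hat / dt)^2 with respect to the recurrent weights L.
# All constants here are illustrative and untuned.
N, tau, dt, eta = 100, 0.01, 0.001, 1e-3
L = rng.standard_normal((N, N)) / np.sqrt(N)   # recurrent weights
d = rng.random(N) / N                          # fixed positive readout weights
a = rng.random(N)                              # activity set by the initial stimulus

s0 = d @ np.maximum(a, 0.0)                    # readout right after the stimulus
for step in range(1000):                       # one simulated second
    r = np.maximum(a, 0.0)
    dadt = (-a + L @ r) / tau
    ds_dt = d @ ((a > 0) * dadt)               # chain rule through the rectification
    # d/dL_ij of (ds_dt)^2 is proportional to ds_dt * d_i * r'(a_i) * r_j.
    L -= eta * ds_dt * np.outer(d * (a > 0), r) / tau
    a += dt * dadt

print("fraction of stimulus retained:", (d @ np.maximum(a, 0.0)) / s0)
```

Whether this discretization matches the rule actually derived in the submission, and how often the update is applied relative to the neural dynamics, is exactly what the second review asks to have specified.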
[ 3, 4, 4 ]
[ 2, 4, 4 ]
[ "iclr_2018_Syl3_2JCZ", "iclr_2018_Syl3_2JCZ", "iclr_2018_Syl3_2JCZ" ]
iclr_2018_HJXOfZ-AZ
When and where do feed-forward neural networks learn localist representations?
According to parallel distributed processing (PDP) theory in psychology, neural networks (NN) learn distributed rather than interpretable localist representations. This view has been held so strongly that few researchers have analysed single units to determine if this assumption is correct. However, recent results from psychology, neuroscience and computer science have shown the occasional existence of local codes emerging in artificial and biological neural networks. In this paper, we undertake the first systematic survey of when local codes emerge in a feed-forward neural network, using generated input and output data with known qualities. We find that the number of local codes that emerge from a NN follows a well-defined distribution across the number of hidden layer neurons, with a peak determined by the size of input data, number of examples presented and the sparsity of input data. Using a 1-hot output code drastically decreases the number of local codes on the hidden layer. The number of emergent local codes increases with the percentage of dropout applied to the hidden layer, suggesting that the localist encoding may offer a resilience to noisy networks. This data suggests that localist coding can emerge from feed-forward PDP networks and suggests some of the conditions that may lead to interpretable localist representations in the cortex. The findings highlight how local codes should not be dismissed out of hand.
rejected-papers
This work looks at what factors can lead to the emergence of selectivity (to certain categories) in units of a neural network. While this is an intriguing area to explore, this work uses settings that are quite toy-ish, making it a very hard to see how the observations could generalize to more realistic architectures or tasks.
train
[ "ryBhOOXlM", "ryuoSrKxM", "Bko4LP9eM", "Hkpg-16mG", "BkhtWLYzG", "Bk65YSKff", "rJbgBrYGf", "SJoDNBYfz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "public", "public", "public", "public", "public" ]
[ "The authors ask when the hidden layer units of a multi-layer feed-forward neural network will display selectivity to object categories. They train 3-layer ANNs to categorize binary patterns, and find that typically at least some of the hidden layer units are category selective. The number of category selective (\"localist\") units varies depending on the size of the hidden layer, the structure of the outputs the network is trained to return (i.e., one-hot vs distributed), the neurons' activation functions, and the level of dropout-induced noise in the training procedure.\n\nOverall, I find the work to hint at an interesting phenomenon. However, the paper as presented uses an overly-simplistic task for the ANNs, and the work is sloppily presented. These factors detract from my enthusiasm. My specific criticisms are as follows:\n\n1) The binary pattern classification seems overly simplistic a task for this study. If you want to compare to the medial temporal lobe's Jennifer Aniston cells (i.e., the Quiroga result), then an object recognition task seems much more meaningful, as does a deeper network structure. Likewise, to inform the representations we see in deep object recognition networks, it is better to just study those networks, instead of simple shallow binary classification networks. Or, at least show that the findings apply to those richer settings, where the networks do \"real\" tasks.\n\n2) The paper is somewhat sloppy, and could use a thorough proofreading. For example, what are \"figures 3, ?? and 6\"? And which is Figure 3.3.1?\n\n3) What formula is used to quantify the selectivity? And do the results depend on the cut-off used to label units as \"selective\" or not (i.e., using a higher or lower cutoff than 0.05)? Given that the 0.05 number is somewhat arbitrary, this seems worth checking.\n\n4) I don't think that very many people would argue that the presence of distributed representations strictly excludes the possibility of some of the units having some category selectivity. Consequently, I find the abstract and introduction to be a bit off-putting, coming off almost as a rant against PDP. This is a minor stylistic thing, but I'd encourage the authors to tone it down a bit.\n\n5) The finding that more of the selective units arise in the hidden layer in the presence of higher levels of noise is interesting, and the authors provide some nice intuition for this phenomenon (i.e., getting redundant local representations makes the system robust to the dropout). This seems interesting in light of the Quiroga findings of Jennifer Aniston cells: the fact that the (small number of) units they happened to record from showed such selectivity suggests that many neurons in the brain would have this selectivity, so there must be a large number of category selective units. Does that finding, coupled with the result from Fig. 6, imply that those \"grandmother cell\" observations might reflect an adaptation to increase robustness to noise? \n", "Quality and Clarity\nThe neural networks and neural codes are studied in a concise way, most of the paper is clear. The section on data design, p3, could use some additional clarification wrt to how the data input is encoded (right now, it is hard to understand exactly what happens). \n\nOriginality\nI am not aware of other studies on this topic, the proposed approach seems original. 
\n\nSignificance\nThe biggest problem I have is with the significance: I don't see at all how finding somewhat localized responses in the hidden layer of an MLP with just one hidden layer has any bearing on deeper networks structured as CNNs: compared to MLPs, neurons in CNNs have much smaller receptive fields, and are known to be sensitive to selective and distinct features. \n\nOverall the results seem rather trivial without greater implications for modern deep neural networks: ie, of course adding dropout improves the degree of localist coding (sec 3.4). Similarly, for a larger network, you will find fewer localist codes (though this is hard to judge, as an exact definition of selectivity is missing). \n\nMinor issues: the \"selectivity\" p3 is not properly defined. On p3, a figure is undefined. \nTypo: p2: \"could as be\". \nMany of the references are ugly : p3, \"in kerasChollet (2015)\", this needs fixing. ", "This paper studies the development of localist representations in the hidden layers\nof feed-forward neural networks.\n\nThe idea is interesting and the findings are intriguing. Local codes\nincrease understandability and could be important for better\nunderstanding natural neural networks. Understanding how local codes\nform and the factors that increase their likelihood is critically\nimportant. This is a good start in that direction, but still leaves\nopen many questions. The issues raised in the Conclusions section are\nalso very interesting -- do the local codes increase with networks\nthat generalize better, or with overtrained networks?\n\nA weakness in this paper (admitted by the authors in the\nConclusions section) is the dependence of the results on the form of input\nrepresentation. If we consider the Jennifer Aniston cells, they do\nnot receive as input as well separated inputs as modeled in this\npaper. In fact the input representation used in this study is already\na fairly localist representation as each 1 unit is fairly selectively\non for its own class and mostly off for the other classes. It will be\nvery interesting to see the results of hidden layers in deep networks\noperating on natural images.\n\nPlease give your equation for selectivity. On Page 2 it is stated \"We\nuse the word ‘selectivity’ as a quantitative measure of the difference\nbetween activations for the two categories, A and not-A, where A is\nthe class a neuron is selective for (and not-A being all other\nclasses).\" However you state that neurons were counted as a local\ncode if the selectivity was above .05. A difference between\nactivations for the two categories of .05 does not seem very\nselective, so I'm thinking you used something other than the\nmathematical difference.\n\nWhat is the selectivity of units in the input codewords? With no\nperturbation, and S_x=.2, w_R=50, w_P=50, the units in the prototype\nblocks have a high selectivity responding with 1 for all patterns in\ntheir class and with 0 for 8/9 of the patterns in the other classes.\nCould this explain the much higher selectivity for this case in the\nhidden units? I would like to see the selectivity of the input units\nfor each of the plots/curves. This would be especially interesting\nfor Figure 5.\n\nIt is stated that LCs emerge with longer training and that ReLU\nneurons may produce more LCs because they train quicker and all\nexperiments were stopped at 45,000 epochs. Why not investigate this\nby changing learning rates for one of ReLu or sigmoidal units to more\nclosely match their training speed? 
It would be interesting to see if\nthe difference is simply due to learning rate, or something deeper\nabout the activation functions.\n\nYou found that very few local codes in the HLNs were found when a\n1-hot ouput encoding was used and suggest that this means that\nemergent local codes are highly unlikely to be found in the\npenultimate layer of deep networks. If your inputs are a local code\n(e.g. for low w_R), you found local codes above the layer of local\ncodes but in this result not below the layer of local codes which\nmight also imply (as you say in the Conclusions) that more local\ncoding neurons may be found in the higher layers (though not the\npenultimate one as you argue). Could you analyze how the selectivity\nof a hidden layer changes as a function of the selectivity in the\nlower and higher layers?\n\n\n\nMinor Note -- The Neural Network Design section looks like it still\nhas draft notes in it.\n\n\n", "What was done in the rewrite of the paper:\n\n1. Corrected typo’s, references, etc\n2. Rewrote introduction for clarity and tone, and out-lined the motivation more clearly (as I think we didn’t explain it well enough and one of the reviewers misunderstood our motivation)\n3. Added in the formula for selectivity (as suggested by the reviewers), and an argument for why the presence of a selective neuron is more important than the amount it is selectivie by (as the equation wasn’t originally included to avoid giving the misleading impression that the quantitative selectivity was more important than the emergence of selective codes). \n4. Added in the measure of selectivity in fig 1, to make the formula more clear\n5. Added in the explanation for why we used such a simple task at the end of the intro, as this point was questioned by a reviewer.\n6. Stated that local codes were highly unlikely to be present in the input code (as this question was asked by a reviewer)\n\nMost the changes are in in the rewrite of the introduction to increase clarity, the results and their write-up are unchanged from the first version.\n", "Thank you for your review and your comments that the work is interesting.\n\n1. The reviewer stated: ‘The binary pattern classification seems overly simplistic a task for this study. If you want to compare to the medial temporal lobe's Jennifer Aniston cells, then an object recognition task seems much more meaningful, as does a deeper network structure. Likewise, to inform the representations we see in deep object recognition networks, it is better to just study those networks, instead of simple shallow binary classification networks. Or, at least show that the findings apply to those richer settings, where the networks do \"real\" tasks.’\n\nThis is valid, and we are obviously also doing these analyses on deeper networks performing image classification tasks. However, there is value in doing these sorts of classification tasks. \n\nFirstly, if we find localist coding in a deeper and more complex network, how can we possibly know why those codes have appeared if we have not already analysed a much simpler version? This was the motivation behind testing the effects of drop-out and invariance and dataset size independently, so we could use these results to inform our understanding of the deep NN results. \n\nSecondly, our data-set classification task while simple, is not overly simplistic. Using precisely constructed input data with completely known qualities allows us to tease out interactions that would be harder to do. 
E.g., a criticism that is often applied to the grandmother cell theory and these results is that the patient who is tested is only shown a few hundred items, the number of items in the world that they could recognise is obviously much larger, so even if a cell is found that responds to only one item in the set (i.e. Jennifer Anniston) the argument can always be made that the cell might have responded to an item not shown (like Jeff Goldblum). NNs don’t get tired, so you could show them many different items, but you still cannot show them every item in the world. However, our data sets are designed so that there is a complete (and finite) set of items (i.e. every possible vector of the correct length and patten rules), so we can show the networks every possible item, and thus, find out if the neurons are truly selective. \n\nThirdly, we designed the input data so that there were invariant parts of the codewords in each category, so there was a short-cut for the NN to learn. This paper has shown how the relative size and structure of these invarients affects the likeliness of NN learning a local code, we can now investigate the structure of the invarients in the representation at the lower levels of the hippocampus to see if there is invarience in representations of category members at that level.\n\n2. (proof-reading): We will fix all proof-reading issues and typos in the revised version.\n\n3. (selectivity): The formula for the selectivity is is simply the difference between the highest value activation of one set and the lowest of the other, and the equation has been added to the paper. It was there in words, but perhaps not made clear enough. The reviewer writes: ‘And do the results depend on the cut-off used to label units as \"selective\" or not (i.e., using a higher or lower cutoff than 0.05)?’ No. I think the best way of thinking about this is the use the word ‘selective’ as a qualitative measure of whether a neuron is selective (to a category) or not, and use ‘selectivity’ for the quantity by which it is selective by, which is the difference in activation between A and not-A groups (An illustration of this was added to figure 1). To answer the question, we found that once the NN had learnt the solve the problem, there were selective neurons, further the training to reduce the error (and move the outputs values closer to 1 and 0) merely increased the selectivity and not the number of neurons. In terms of a direct comparison to single cell recording studies, it isn’t certain which level of selectivity is observable in experiments. The 0.05 cut-off amounts to 5%, which we feel is enough above zero that it is measurably above experimental error. As we could train for longer to increase the selectivity, any cut-off is somewhat arbitrary. However, the number of selective neurons did not change much after the start of training, so the results reported here are valid.\n\n4. (almost a rant): We shall rewrite the abstract and intro for tone. It wasn’t meant to be almost a rant against PDP, perhaps a little too much excitement slipped into the writing.\n\n5. (do the noise findings imply that grandmother cells observations might reflect an adaptation to increase the robustness to noise?) Yes! Or rather, we think so. Not being able to do very-long-term evolution experiments with human beings we cannot know for sure. 
But our approach of looking for general rules of how information is structured in noisy environments does suggest that local codes might be adaptive against excessive noise.", "Thank you for you review, and your comments that our work is original and important. \n\nRegarding your comments on significance, I think perhaps that we have failed to communicate the purpose of our research. Although we are interested in extending this work (in the future) to modern deep neural networks, that is not intended scope for this paper. We want to understand the results of single-cell recording studies in the hippocampus which found possible selective codes. As such, we are trying to elucidate the basic constraints on when such codes appear, and we are doing this by investigating when such codes appear in very simple neural networks with inputs and outputs of known (and easily modifiable) structure, in hope that we can provide hypotheses for when the brain might learn selective codes. Using a modern deep-NN with convolutions and huge number of layers is inappropriate for this task as the higher degree of complexity would obscure the basic constraints on the information representation.\n\nFurthermore, although modern deep neural networks are wonderful (and we are also investigating them), there is a lot of work that needs to be done on the underlying science of why neural networks work the way they do, which is significant for the field, as, although many engineering solutions are found by creatively playing around, there is room for finding engineering solutions by applying the results of basic investigative science (such as is presented basic science we are doing here). \n\nMinor issues and clarity: We fixed the stylistic and clarity concerns in the rewrite.", "You stated:\n'What is the selectivity of units in the input codewords? With no perturbation, and S_x=.2, w_R=50, w_P=50, the units in the prototype blocks have a high selectivity responding with 1 for all patterns in their class and with 0 for 8/9 of the patterns in the other classes. Could this explain the much higher selectivity for this case in the hidden units? I would like to see the selectivity of the input units for each of the plots/curves. This would be especially interesting for Figure 5.'\n\nWe have already started to investigate how changing the input patterns affects the number of local codes seen in the NN. It is an interesting idea to change the selectivity, by, say, using the numerical input of 0.3 to stand for 0 and 0.7 to stand for 1, I could then look and see if, after a required amount of training, if the size of the selectivity is smaller. If I have time to run this, I will add it to the revised paper. \n\nHowever, the input codes do not have very many selective units. For each of the 10 classes, we have a prototype part of the code which is 50 bits long, leaving 450 bits which are randomly assigned to be 1 with a probability of 1/3. So, for a unit to be selective in the unperturbed case, 450 input codes would have to all have a zero for that neuron, which is the chance of a zero coming up 450 times, and the chance of a single zero is 2/3. The overall chance of a unit being selective, then, is very small. To check, and because at first glance I thought that number of selective input units would vary for the data in figure 5, and this might explain some of the trend shown, I wrote some code to count the number of selective units. However, in my test case, I got 0 units (for the data with a sparseness of 0.4). 
As the reviewer specifically mentions the data with S_x of 0.2, I tested that as well, and also got zero selective codes. ", "Thank you very much for your detailed and helpful review, it is helpful and enthusiastic reviews like this which are the reason why peer-review is helpful for science. I’m going to go through the questions you raised point by point. \n\n‘A weakness in this paper (admitted by the authors in the Conclusions section) is the dependence of the results on the form of input representation’ ...’It will be very interesting to see the results of hidden layers in deep networks operating on natural images.’\n\nIn honesty, we agree that ‘ It will be very interesting to see the results of hidden layers in deep networks operating on natural images’. I doubt it will surprise you to know that we are also working on finding localist codes in deep neural networks. We have also started to investigate the effects of input representation (and depending on time, I might add in some of these results in the revised paper). However, I think it is worth finding the situations where local codes emerge so we can map when and where they do emerge, and thus, we can start to predict in which sorts of systems we should find them.\n\nPlease give your equation for selectivity. On Page 2 it is stated \"We use the word ‘selectivity’ as a quantitative measure of the difference between activations for the two categories, A and not-A, where A is the class a neuron is selective for (and not-A being all other classes).\" However you state that neurons were counted as a local code if the selectivity was above .05. A difference between activations for the two categories of .05 does not seem very selective, so I'm thinking you used something other than the mathematical difference.\n\nSorry for not putting in the equation for selectivity. I had written it in in words, but I guess it was not obvious that it was the whole equation. We use the word ‘selective’ as a qualitative measure of whether a neuron is selective (to a category) or not, and use ‘selectivity’ for the quantity by which it is selective by, which is the difference in activation between the disjoint sets of A and not-A, the mathematical difference, as you inferred. However, it is not the size of the difference (the selectivity) that is important. There is no way (that I can think of, anyway) to directly relate these selectivity values to outputs from (living) neurons, so the exact numerical value is largely irrelevant (and can be increased by further training, see my response to reviewer 2). A better metric is whether the neuron is selective or not. For example, we had 500 codewords separated into 10 classes. So if the activations were distributed randomly, the chances of all the members of one class ending up higher (or equivalently, lower) than the other classes is (50 choose 50) / (500 choose 50) which is tiny (4.32*10^-71). So if all the members of one class have activations which are disjoint from the set of all activations, then the neuron is selective. If the gap between the set of all A and the set of all not-A is a measure of how long the NN has been trained, then the numerical value of the selecitvity is less important than simply finding a selective unit.\n\nTo answer the question, we found that once the NN had learnt the solve the problem, there were selective neurons, further the training to reduce the error (and move the outputs values closer to 1 and 0) merely increased the selectivity and not the number of selective neurons. 
In terms of a direct comparison to single cell recording studies, it is not certain which level of selectivity is observable in experiments. The 0.05 cut-off amounts to 5%, and we felt was reasonably enough above zero that it could count as measurable above experimental error. However, as I said, we could train for longer to increase the selectivity, and thus, any cut-off is somewhat arbitrary. However, the number of selective neurons did not change much after a short amount of training, so the results reported here (i.e. the number of selective units) are valid. I have added further explanation of the measure of selectivity to the paper and figure 1, and added discussion of finding selective units.\n\n\n\nIt is stated that LCs emerge with longer training...’ ‘Why not investigate this by changing learning rates for one of ReLu or sigmoidal units to more closely match their training speed? It would be interesting to see if the difference is simply due to learning rate, or something deeper about the activation functions.\n\nThis is an interesting idea. We are going to look at stopping both the ReLu and sigmoidal neuron cases at the exact same loss and then comparing. But we could investigate the learning rate as well. \n\nI’ve posted this response up now, in case you would like further discussion. And I will try and run all the suggested experiments by the deadline, and I shall post the results up here as I get them. " ]
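The exchange above rests on two computations that are easy to check directly: the gap-based selectivity measure (under one reading of the authors' description, the lowest activation for the preferred class minus the highest activation for all other classes, with a 0.05 cutoff) and the chance-level probabilities quoted in the responses. The snippet below reproduces those numbers; the example activations are made up, while the counts (50 class members out of 500 codewords, 450 random input bits each set with probability 1/3) follow the figures given in the responses.

```python
import math
import numpy as np

def selectivity(acts_A, acts_notA):
    """Gap between the two activation sets; positive only if they are disjoint.

    One reading of the authors' description: the lowest activation for the
    preferred class A minus the highest activation for any not-A item.
    """
    return np.min(acts_A) - np.max(acts_notA)

# A unit counts as a local code if the gap exceeds the 0.05 cutoff used above.
rng = np.random.default_rng(0)
acts_A = 0.9 + 0.05 * rng.random(50)      # made-up activations for class A
acts_notA = 0.3 * rng.random(450)         # made-up activations for not-A
print("selective:", selectivity(acts_A, acts_notA) > 0.05)

# Chance that a unit with randomly ordered activations ranks all 50 class-A
# items above the other 450 (the "disjoint by accident" argument above).
print("chance of accidental selectivity:", 1 / math.comb(500, 50))

# Chance that an input unit is selective under the data design discussed above:
# all 450 random codewords would need a 0 at that position, each with prob. 2/3.
print("chance an input unit is selective:", (2 / 3) ** 450)
```

With these numbers, a unit whose class-A activations all sit above every not-A activation is essentially never a sampling accident, which is the argument the responses make for reporting the count of selective units rather than the size of the gap.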
[ 3, 3, 5, -1, -1, -1, -1, -1 ]
[ 3, 5, 4, -1, -1, -1, -1, -1 ]
[ "iclr_2018_HJXOfZ-AZ", "iclr_2018_HJXOfZ-AZ", "iclr_2018_HJXOfZ-AZ", "iclr_2018_HJXOfZ-AZ", "ryBhOOXlM", "ryuoSrKxM", "Bko4LP9eM", "Bko4LP9eM" ]
iclr_2018_SJiHOSeR-
Contextual memory bandit for pro-active dialog engagement
An objective of pro-activity in dialog systems is to enhance the usability of conversational agents by enabling them to initiate conversation on their own. While dialog systems have become increasingly popular during the last couple of years, current task oriented dialog systems are still mainly reactive and users tend to initiate conversations. In this paper, we propose to introduce the paradigm of contextual bandits as framework for pro-active dialog systems. Contextual bandits have been the model of choice for the problem of reward maximization with partial feedback since they fit well to the task description. As a second contribution, we introduce and explore the notion of memory into this paradigm. We propose two differentiable memory models that act as parts of the parametric reward estimation function. The first one, Convolutional Selective Memory Networks, uses a selection of past interactions as part of the decision support. The second model, called Contextual Attentive Memory Network, implements a differentiable attention mechanism over the past interactions of the agent. The goal is to generalize the classic model of contextual bandits to settings where temporal information needs to be incorporated and leveraged in a learnable manner. Finally, we illustrate the usability and performance of our model for building a pro-active mobile assistant through an extensive set of experiments.
rejected-papers
This paper is lacking in terms of clarity and experimentation, and would require a lot of additional work to bring it to the standards of any high quality venue.
train
[ "ByJXMOblf", "H10BwZ9xM", "Byjc8N5lz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This article propose to combine a form of contextual Thompson sampling policy with memory networks to handle dialog engagement in mobility interfaces.\n\nThe idea of using contextual bandits (especially Thompson sampling) instead of state of the art approximate RL algorithms (like DQN, AC3, or fittedQ) is surprising given the intrinsic Markovian nature of dialog. Another key difficulty of dialogs is delayed feedback. Contextual bandits are not designed to handle this delay. I saw no attempt to elaborate on that ground.\nOne possible justification, however, could be the recent theoretical works on Contextual Decision Processes (See for instance the OLIVE algorithm in \"Contextual Decision Processes with Low Bellman rank are PAC learnable\" by Jiang et al. @ NIPS 2016). A mix of OLIVE, memory networks and Thompson sampling could be worth studying.\n\nThe article is however poorly written and it reflects a severe lack of scientific methodology : the problem statement is too vague and the experimental protocol is dubious.\n\nThe dialog engagement problem considered is a special case of reinforcement learning problem where the agent is given the option of \"doing nothing\". The authors did not explain clearly the specificity of this dialog-engagement setting with regard to other RL settings. For instance why not evaluate their algorithm on video-games where doing nothing is an option?\n\nThe authors introduce the notion of regret on page 3 but it is never used after.\nIt is unclear whether the problem considered here is purely online (cold start) or off-line (repeated training on a simulator).\n\nIn order to obtain a decent experimental protocol for their future submissions I suggest that the authors provide:\n- both offline (repeated training on simulator) and online (cold-start) experiments;\n- at least more than one dialog simulation scenario (in order to convince you reader that your experimental result is more than a side-effect of your simulator bias);\n- at least a few baselines with state of the art deep and shallow RL algorithms to justify the use of a new method.\n", "The paper \"CONTEXTUAL MEMORY BANDIT FOR PRO-ACTIVE DIALOG ENGAGEMENT\" proposes to address the problem of pro-active dialog engagement by the mean of a bandit framework that selects dialog situations w.r.t. to the context of the system. Authors define a neural archiecture managing memory with the mean of a contextual attention mechanism.\n\nMy main concern about this paper is that the proposal is not enough well described. A very large amount of technical details are missing for allowing the reader to understand the model (and reproduce the experiments). The most important ones are about the exploration policies which are not described at all, while it is a very central point of the paper. The only discussion given w.r.t. the exploration policy is a very general overview about Thompson Sampling. But nothing is said about how it is implemented in the case of the proposed model. How is estimated p(\\Theta|D) ? Ok authors give p(\\Theta|D) as a product between prior and likelihood. But it is not sufficient to get p(\\Theta|D), the evidence should also been considered (for instance by using variational inference). Also, what is the prior of the parameters ? How is distributed r given a,x and \\Theta ? \n\nAlso, not enough justification is given about the general idea of the model. Authors should give more intuitions about the mechanism they propose. 
Figure 2 should be able to help, but no reference to this figure is given in the text, so it is very difficult to extract any information from it. The authors only (roughly) describe the architecture without justifying their choices.\n\nLastly, the experiments really fail to demonstrate the relevance of the approach, as only questionable artificial data is used. On the one hand, it appears mandatory to me to consider some (even minimal) experiments on real data for such a proposal. On the other hand, the simulated data used there cannot provide cues to validate the approach since they appear very far from real scenarios: the trajectories do not depend on what is recommended. OK, only the recommended places reveal some reward, but it does not appear to me to be a sufficiently realistic scenario. Also, far too few baselines are considered: only different versions of the proposal and a random baseline are considered. A classical contextual bandit instance (such as LinUCB) would have been a minimum.\n\nOther remarks:\n - the definition of q is not given\n - user is part of the context x in the bandit section but not after where it is denoted as u.\n - the notion of time window should be more formally described\n - How the context is built is not clear in the experiments section\n\n", "This paper attempts to use contextual bandits for a dialog system. The paper is not clear about how exactly the problem is being mapped to the contextual bandit framework. Similarly, the Thompson sampling algorithm is used, but there is no mention of a posterior or how to sample. Furthermore, the lack of systematic experiments and ablation studies casts doubts on the entire framework. Below is a detailed review and questions for the authors:\n1. Please motivate clearly the need for having a bandit framework. One would imagine that dialog systems have a huge amount of data that can be leveraged for a pro-active service. \n2. In the sentence \"the agent needs to leverage on the already gathered feedback to choose propositions that maximize the current expected reward\". The expected reward or the agent is undefined at this point in the paper. \n3. After introducing the contextual bandit problem, please give the explicit mapping of your system to this framework. What do the arms correspond to, how are they related, how is the expected reward computed at each round? Another thing which is not clear is what is the environment? It seems that a recommendation by the dialog system will cause the environment to change, in which case it's not a bandit problem? Isn't it more natural to model this problem as a reinforcement learning problem?\n4. In the sentence, \"a record of the last K successful engagements of the agent\". It is not clear what constitutes a successful engagement. Also, please justify why you are not keeping negative examples in order to learn. \n5. Section 5 describes the Thompson sampling algorithm. Again, there is no mapping between the problem at hand and the TS framework. For instance, what is the posterior in this case? How are you sampling it? Are we being Bayesian, if so what is the prior?\n6. In the sentence, \"longer patterns from series of events occurred through time can motive a suggestion\", it seems that the problem you are trying to solve involves delayed feedback. Can you use the strategies in [1] over here?\n7. For equation 8, please give some intuition. Why is it necessary?\n8. In the sentence, \"Regarding the learning algorithms, hyper-parameters have been determined by cross-validation.\". 
Isn't the data is being collected on the fly in a bandit framework? What is the cross-validation being done on?\n9.One experiment on one dataset does not imply that the method is effective. Similarly, the lack of ablation studies for the various components of the system is worrying. \n[1] Guha, Sudipto, Kamesh Munagala, and Martin Pal. \"Multiarmed bandit problems with delayed feedback.\" arXiv preprint arXiv:1011.1161 (2010)." ]
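The reviews repeatedly ask what the posterior p(Θ|D) is, how it is sampled, and how rewards are distributed given a, x and Θ. For reference, the sketch below shows the standard fully specified case: Thompson sampling with a Gaussian prior and a linear-Gaussian reward model per arm, so the posterior is available in closed form. The arm count, dimensions, noise level and all names are illustrative assumptions and nothing here is claimed to match the submission.

```python
import numpy as np

rng = np.random.default_rng(0)

# Thompson sampling with an explicit, tractable posterior: Gaussian prior
# N(0, (1/lam) I) on each arm's parameter vector and a linear-Gaussian reward
# model, so p(theta_a | D) is a closed-form Gaussian for every arm.
D, K, lam = 5, 3, 1.0
true_theta = rng.standard_normal((K, D))             # unknown environment (illustrative)

B = np.stack([lam * np.eye(D) for _ in range(K)])    # posterior precision per arm
f = np.zeros((K, D))                                  # sum of reward-weighted contexts

for t in range(2000):
    x = rng.standard_normal(D)                        # observed context
    sampled = []
    for a in range(K):
        cov = np.linalg.inv(B[a])                     # posterior covariance
        mu = cov @ f[a]                               # posterior mean
        sampled.append(rng.multivariate_normal(mu, cov) @ x)
    a = int(np.argmax(sampled))                       # act greedily w.r.t. the sample
    r = true_theta[a] @ x + 0.1 * rng.standard_normal()
    B[a] += np.outer(x, x)                            # conjugate posterior update
    f[a] += r * x

print("estimation error per arm:",
      [round(float(np.linalg.norm(np.linalg.inv(B[a]) @ f[a] - true_theta[a])), 2)
       for a in range(K)])
```

The reviewers' point stands: without naming a likelihood and prior such as the above (or an approximation such as variational inference), "Thompson sampling" underdetermines the algorithm.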
[ 2, 3, 3 ]
[ 5, 4, 4 ]
[ "iclr_2018_SJiHOSeR-", "iclr_2018_SJiHOSeR-", "iclr_2018_SJiHOSeR-" ]
iclr_2018_BkpXqwUTZ
Iterative temporal differencing with fixed random feedback alignment support spike-time dependent plasticity in vanilla backpropagation for deep learning
In vanilla backpropagation (VBP), activation function matters considerably in terms of non-linearity and differentiability. Vanishing gradient has been an important problem related to the bad choice of activation function in deep learning (DL). This work shows that a differentiable activation function is not necessary any more for error backpropagation. The derivative of the activation function can be replaced by an iterative temporal differencing (ITD) using fixed random feedback weight alignment (FBA). Using FBA with ITD, we can transform the VBP into a more biologically plausible approach for learning deep neural network architectures. We don't claim that ITD works completely the same as the spike-time dependent plasticity (STDP) in our brain but this work can be a step toward the integration of STDP-based error backpropagation in deep learning.
rejected-papers
This paper is nowhere near standards for publication anywhere.
train
[ "rJUnrQDlG", "rkFKYjdlz", "r18pMHFgM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "- This paper is not well written and incomplete. There is no clear explanation of what exactly the authors want to achieve in the paper, what exactly is their approach/contribution, experimental setup, and analysis of their results. \n\n- The paper is hard to read due to many abbreviations, e.g., the last paragraph in page 2. \n\n- The format is inconsistent. Section 1 is numbered, but not the other sections. \n\n- in page 2, what do the numbers mean at the end of each sentence? Probably the figures? \n\n- in page 2, \"in this figure\": which figure is this referring to?\n\n\nComments on prior work:\n\np 1: authors write: \"vanilla backpropagation (VBP)\" \"was proposed around 1987 Rumelhart et al. (1985).\" \n\nNot true. A main problem with the 1985 paper is that it does not cite the inventors of backpropagation. The VBP that everybody is using now is the one published by Linnainmaa in 1970, extending Kelley's work of 1960. The first to publish the application of VBP to NNs was Werbos in 1982. Please correct. \n\np 1: authors write: \"Almost at the same time, biologically inspired convolutional networks was also introduced as well using VBP LeCun et al. (1989).\"\n\nHere one must cite the person who really invented this biologically inspired convolutional architecture (but did not apply backprop to it): Fukushima (1979). He is cited later, but in a misleading way. Please correct.\n\np 1: authors write: \"Deep learning (DL) was introduced as an approach to learn deep neural network architecture using VBP LeCun et al. (1989; 2015); Krizhevsky et al. (2012).\" \n\nNot true. Deep Learning was introduced by Ivakhnenko and Lapa in 1965: the first working method for learning in multilayer perceptrons of arbitrary depth. Please correct. (The term \"deep learning\" was introduced to ML in 1986 by Dechter for something else.)\n\np1: authors write: \"Extremely deep networks learning reached 152 layers of representation with residual and highway networks He et al. (2016); Srivastava et al. (2015).\" \n\nHighway networks were published half a year earlier than resnets, and reached many hundreds of layers before resnets. Please correct.\n\n\nGeneral recommendation: Clear rejection for now. But perhaps the author want to resubmit this to another conference, taking into account the reviewer comments.\n\n", "The paper falls far short of the standard expected of an ICLR submission. \n\nThe paper has little to no content. There are large sections of blank page throughout. The algorithm, iterative temporal differencing, is introduced in a figure -- there is no formal description. The experiments are only performed on MNIST. The subfigures are not labeled. The paper over-uses acronyms; sentences like “In this figure, VBP, VBP with FBA, and ITD using FBA for VBP…” are painful to read. \n\n\n", "The paper is incomplete and nowhere near finished, it should have been withdrawn. \n\nThe theoretical results are presented in a bitmap figure and only referred to in the text (not explained), and the results on datasets are not explained either (and pretty bad). A waste of my time." ]
[ 3, 2, 2 ]
[ 4, 5, 5 ]
[ "iclr_2018_BkpXqwUTZ", "iclr_2018_BkpXqwUTZ", "iclr_2018_BkpXqwUTZ" ]
iclr_2018_BJ_QxP1AZ
Unleashing the Potential of CNNs for Interpretable Few-Shot Learning
Convolutional neural networks (CNNs) have been generally acknowledged as one of the driving forces for the advancement of computer vision. Despite their promising performances on many tasks, CNNs still face major obstacles on the road to achieving ideal machine intelligence. One is that CNNs are complex and hard to interpret. Another is that standard CNNs require large amounts of annotated data, which is sometimes very hard to obtain, and it is desirable to be able to learn them from few examples. In this work, we address these limitations of CNNs by developing novel, simple, and interpretable models for few-shot learning. Our models are based on the idea of encoding objects in terms of visual concepts, which are interpretable visual cues represented by the feature vectors within CNNs. We first adapt the learning of visual concepts to the few-shot setting, and then uncover two key properties of feature encoding using visual concepts, which we call category sensitivity and spatial pattern. Motivated by these properties, we present two intuitive models for the problem of few-shot learning. Experiments show that our models achieve competitive performances, while being much more flexible and interpretable than alternative state-of-the-art few-shot learning methods. We conclude that using visual concepts helps expose the natural capability of CNNs for few-shot learning.
rejected-papers
The paper builds on earlier work by Wang et al (2015) on Visual Concepts (VCs) and explores the use of VCs for few-shot learning setting for novel classes. The work, as pointed out by two reviewers is somewhat incremental in nature, with main novelty being the demonstration of utilities of VCs for few shot learning. This would not have been a big limitation if the paper had a carefully conducted empirical evaluation providing insights on the effect of various configuration settings/hyperparameters on the performance in few shot learning, which two of the reviewers (Anon3, Anon2) state are missing. The paper falls short of the acceptance threshold in its current form. PS: The authors posted a github link to the code on Jan 12 which may potentially compromise the anonymity of the submission (though it was after all the reviews were already in) https://openreview.net/forum?id=BJ_QxP1AZ&noteId=BJaIDpBEM
train
[ "By1Lg8Ygz", "BJaIDpBEM", "rJxG4bqlG", "rkj8j16lM", "S1SBDs27z", "SJ_tUo2QM", "HJ55NohXz", "S1WHVi3mG", "rktB7jnXf", "H1AGk5_Gz", "ry84z1fff" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "public" ]
[ "My main concern for this paper is that the description of the Visual Concepts is completely unclear for me. At some point I thought I did understand it, but then the next equation didnt make sense anymore... If I understand correctly, f_p is a representation of *all images* of a specific layer *k* at/around pixel \"p\", (According to last line of page 3). That would make sense, given that then the dimensions of the vector f_p is a scalar (activation value) per image for that image, in layer k, around pixel p. Then f_v is one of the centroids (named VCs). However, this doesnt seem to be the case, given that it is impossible to construct VC activations for specific images from this definition. So, it should be something else, but it does not become clear, what this f_p is. This is crucial in order to follow / judge the rest of the paper. Still I give it a try.\n\nSection 4.1 is the second most important section of the paper, where properties of VCs are discussed. It has a few shortcomings. First, iIt is unclear why coverage should be >=0.8 and firerate ~ 1, according to the motivation firerate should equal to coverage: that is each pixel f_p is assigned to a single VC centroid. Second, \"VCs tent to occur for a specific class\", that seems rather a bold statement from a 6 class, 3 VCs experiment, where the class sensitivity is in the order 40-77%. Also the second experiment, which shows the spatial clustering for the \"car wheel\" VC, is unclear, how is the name \"car wheel\" assigned to the VC? That has have to be named after the EM process, given that EM is unsupervised. Finally the cost effectiveness training (3c), how come that the same \"car wheel\" (as in 3b) is discovered by the EM clustering? Is that coincidence? Or is there some form of supervision involved? \n\n\nMinor remarks\n- Table 1: the reported results of the Matching Network are different from the results in the paper of Vinyals (2016).\n- It is unclear what the influence of the smoothing is, and how the smoothing parameter is estimated / set.\n- The VCs are introduced for few-shot classification, unclear how this is different from \"previous few-shot methods\" (sect 5). \n- 36x36 patches have a plausible size within a 84x84 image, this is rather large, do semantic parts really cover 20% of the image?\n- How are the networks trained, with what objective, how validated, which training images? What is the influence of the layer on the performance? \n- Influence of the clustering method on VCs, eg k-means, gaussian, von-mises (the last one is proposed)?\n\nOn a personal note, I've difficulties with part of the writing. For example, the introduction is written rather \"arrogant\" (not completely the right word, sorry for that), with a sentence, like \"we have only limited insights into why CNNs are effective\" seems overkill for the main research body. The used Visual Concepts (VCs) were already introduced by other works (Wangt'15), and is not a novelty. Also the authors refer to another paper (about using VCs for detection) which is also under submission (somewhere). Finally, the introduction paragraph of Section 5 is rather bold, \"resembles the learning process of human beings\"? Not so sure that is true, and it is not supported by a reference (or an experiment). \n\nIn conclusion:\nThis paper presents a method for creating features from a (pre-trained) ConvNet. \nIt clusters features from a specific pooling layer, and then creates a binary assignment between per image extracted feature vectors and the cluster centroids. 
These are used in a 1-NN classifier and a (smoothed) Naive Bayes classifier. The results are promising, yet lack exploration of the model, at least to draw conclusions like \"we address the challenge of understanding the internal visual cues of CNNs\". I believe this paper needs to focus on the working of the VCs for few-shot experiments, showing the influence of some of the choices (layer, network layout, smoothing, clustering, etc). Moreover, the introduction should be rewritten, and the background section on VCs (Sect 3) should be clarified. Therefore, I rate the current manuscript as a reject. \n\nAfter rebuttal:\nThe writing of the paper greatly improved, but it is still missing insights (see comments below). Therefore I've upgraded my rating, and due to better understanding now, also my confidence. ", "We have released a primary version of our source code. Please find it on https://github.com/Awcrr/FewshotVC. We will keep completing its documentation. ", "The paper adds a few operations after the pipeline for obtaining visual concepts from a CNN as proposed by Wang et al. (2015). This latter paper showed how to extract from a CNN some clustered representations of the features of the internal layers of the network, working on a large training dataset. The clustered representations are the visual concepts. This paper shows that these representations can be used as exemplars by test images, in the same vein as bag of words used word exemplars to create the bag of words of unseen images.\n\n A simple nearest neighbor method and a likelihood model are built to assign a picture to an object class.\n\nThe results are convincing, even if they are not state of the art in all the trials. \nThe paper is very easy to follow, and the results are explained in a very simple way.\n\n\nFew comments:\nThe authors should revise their claims in the abstract, which are too strong with respect to a literature field that has made many advances in CNN interpretation (see all the literature of Andrea Vedaldi) and the literature on zero-shot learning, transfer learning, domain adaptation and fine-tuning in general.", "The paper proposes a method for few-shot learning using a new image representation called visual concept embedding. Visual concepts, introduced in Wang et al. 2015, are clustering centers of feature vectors in a lattice of a CNN. For a given image, its visual concept embedding is computed by thresholding the distances from feature vectors in the lattice of the image to the visual concepts. Using the visual concept embedding, two simple methods are used for few-shot learning: a nearest neighbor method and a probabilistic model with Bernoulli distributions. Experiments are conducted on the Mini-ImageNet dataset and the PASCAL3D+ dataset for few-shot learning.\n\nPositives:\n- The three properties of visual concepts described in the paper are interesting.\n\nNegatives:\n- The novelty of the paper is limited. The idea of visual concepts has been proposed in Wang et al. 2015. Using an embedding representation based on visual concepts is straightforward. The two baseline methods for few-shot learning provide limited insight into solving the few-shot learning problem.\n\n- The paper uses a hard thresholding in the visual concept embedding. It would be interesting to see the performance of other strategies in computing the embedding, such as directly using the distances without thresholding.\n\n", " 3. 
Previous methods either learn a metric among categories or learn to learn a classifier from a few examples. Our method learns a composition of semantic concepts, which are extracted from few examples. We argue that our approach differs from previous ones in terms of methodology. Also, our models are flexible (shown in Table 2) and interpretable (shown in Figure 5), unlike alternative models.\n\n 4. We agree that semantic parts have various scales. However, for the simplicity of our method, we choose the layer with the most reasonable scale of VCs. In the appendix, we carried out experiments using features from different layers. We see from Appendix Table 3 that using features from the proposed Pool-3 layer achieves the best result. Meanwhile, we notice that compared with the original features, our VC-based methods are more robust to various scales.\n\n 5. We are sorry that we only described the details of network training for PASCAL3D+ in Section 5.2 of the original version. We have added more details to the description for Mini-ImageNet in Section 5.1. The network was trained on the training split of Mini-ImageNet with a cross-entropy objective. Specifically, we use the training split of the published Mini-ImageNet split from Ravi & Larochelle 2017. We reserve some images per category in the training split to validate our network. \n\n 6. VCs were proposed in Wang et al. (2015). We adopted the basic ideas of VCs from Wang et al. (2015) but adapted them to few-shot learning problems. In the appendix, we compared results using K-Means clustering and von Mises-Fisher clustering. While von Mises-Fisher clustering is mathematically more reasonable based on our assumptions, we find that empirically there are only minor differences between the clustering methods. This shows that our VC-based models are robust to different clustering methods.\n\n> Problems with the writing\nR: We have revised the writing of our paper thoroughly, removed many of the “bold” claims, and toned down the “arrogance”. Instead, we concentrated on giving a clearer description of our work, including clarifying the novelty and effectiveness of our models. Please refer to the update list to see the modifications.\n", "Thanks for your detailed comments!\n\n> Unclear description of VCs.\nR: We are sorry for the difficulties you encountered in reading this paper. We have thoroughly revised the whole paper and, in particular, improved the clarity of Section 3 and Section 4. For the specific issue you mentioned, L_k is the set of positions in the k-th layer of the CNN for an input image. That means an element p in this set is “a specific position” of the feature maps from “a specific image”. If we assume that our network has C_k feature channels at the k-th layer, then f_p will be a C_k-dimensional vector.\n\n> Problems with the properties of VCs\nR: \n 1. The validity of property 1: On the first property, we list the statistics of 6 categories in Figure 3(a) for a concise illustration of this property (we have added more examples and statistics on more VCs and object categories in the appendix of the updated paper). Our conclusions still hold for more categories. The occurrence percentage of the sensitive categories, though “only” on the order of 40-77%, substantially outnumbers the percentages of other categories and hence can provide useful information for classification. We also replaced the word “dominate” by “fire intensively” to make the meaning clearer.\n\n 2. 
Misunderstanding on the interpretation of VCs such as “car wheel”: On the second and the third property (removed in the updated version), there is a misunderstanding of the \"car wheel\" VC. The VCs are extracted in an unsupervised manner (e.g. no spatial and image identity information) and are indexed by an integer. We used the term \"car wheel\" to describe the VC after we visualized it and found that the image patches correspond to car wheels. This term was only used informally to give an intuition for the semantic information the VCs represent. It is not a supervision signal used by the model. In the updated paper, we replaced the informal names with VC indices in Figure 1 and Figure 3 to avoid further confusion.\n\n> Minor remarks\nR:\n 1. Please note that the results in the paper of Vinyals (2016) used a private split of Mini-ImageNet, so it's impossible to re-implement their settings. In this paper, we use a public split of Mini-ImageNet proposed by Ravi & Larochelle, 2017. The results for Matching Network are the same as Ravi & Larochelle, 2017 (we have added remarks on this in the caption of Table 1).\n\n 2. In the original paper, we stated the parameter of our smoothing filter in the second paragraph of Section 5.1 (\"For the Gaussian filter used to smooth the factorizable likelihood model, we use a \\sigma of 1.2\"). Also, as we stated in the last sentence of Section 4.4 (original Section 4.3), the use of smoothing is to overcome the sparseness of the per-pixel firing rate in our VC-likelihood model and helps to improve generalizability during testing. With this smoothing operation, our model can better handle small shifts and deformations in the images. In our experiments, without the smoothing, our VC-likelihood model scores 61.84% in the 5-category 5-shot setting (see Table 1 for comparison). It is just slightly behind the model with smoothing. We use the smoothed results throughout the paper since the smoothing is also part of our model.\n", "Thanks for your comments! We cited previous works on CNN internal representations in Section 2 in the original version, and we modified the paper to cite these works in the introduction as well. The revision will be reflected in our updated version.", "Thanks for your comments!\n\n> Lack of novelty in this work.\nR: The novelties of this work lie both in new results for VCs and in new methods for few-shot learning. Sorry that we did not make this clear enough in the original submission.\n\n 1. New results for VCs. It is not obvious that the original VCs described in Wang et al. 2015 can be applied to few-shot learning. We adapt VCs for few-shot learning, and our key findings, which are critical for few-shot learning, were not addressed in Wang et al. 2015.\n\n a. Extracting VCs from CNNs trained on different object categories: We learn VCs for objects from a CNN trained on a different set of object categories (e.g., we learn VCs for vehicles with a CNN trained on a non-vehicle dataset). By contrast, in Wang et al. 2015, the VCs were extracted from a subset of the object categories on which the CNN was trained. \n\n b. Extracting VCs from few examples: In this work, we extract VCs from very few examples per category, but Wang et al. 2015 used orders of magnitude more (around 1000 versus 25). Surprisingly, we find that these “few-shot” VCs, when used for VC-encoding, possess similar desirable properties to the traditional VCs and hence are suitable for the few-shot object classification task.\n\n c. 
Extracting VCs without knowing the object category: In this work, we extract VCs without knowing the object category (e.g., by pooling feature vectors from different categories together and clustering over them). By contrast, in Wang et al. 2015, VCs were extracted separately for each object category to obtain category-specific VCs. This modification provides sufficient samples to learn high-quality VCs in the few-shot setting and encourages VC sharing among different categories. This is useful for improving data efficiency and also makes it easier to apply our VC models to multiple novel categories directly.\n\n d. We found novel properties of VCs, i.e., category sensitivity and spatial pattern, which support the extended application of VCs. We describe these properties in detail in the second and the third paragraphs of Section 4.2.\n\n 2. A new approach to few-shot learning. Unlike previous few-shot learning methods, we formulate few-shot learning in terms of compositions of semantic visual cues (i.e., parts). This differs from standard approaches like metric-based methods (e.g., matching network) and meta-learning based methods (e.g., MAML or Meta-Learner). Moreover, as stated in Table 2, our proposed models are simple and very flexible (i.e., the same model can be applied with minimal changes to different problem settings, such as 5-category classification, 6-category classification, etc.). We argue that flexibility is an important characteristic of few-shot learning for real-world applications. \n\n> The hard threshold in the VC-Embedding.\nR: First, this threshold is for the binary encoding of VCs, which makes it possible to learn simple Bernoulli distributions (i.e., we only need to learn a distribution over binary variables, which needs very little data). As shown in our experiments, this model is slightly better than nearest neighbor. Second, we compared Nearest Neighbor using distances to VCs vs VC-Encoding. As shown in Appendix Table 1, the accuracy of using distances directly is slightly worse than using VC-Encoding, but is still better than using the original features from the CNN.\n", "We thank the assigned and the volunteer reviewers for their comments. We have made a major revision of the paper to tone down the writing and to make it easier to understand. We have also clarified how our work relates to earlier studies of visual concepts to make our contribution clearer. We can release the code by January 19 to address the reproducibility issue mentioned by the volunteer reviewer.\n\nBesides the modifications to the writing, we also updated our paper with an Appendix which consists of:\n\n(1) Ablation studies that include the comparison between the VC-distance-based model and the VC-Encoding-based model (which was mentioned by AnonReviewer3), the comparison between K-Means clustering and the vMFM-based method (which was mentioned by AnonReviewer2), and the study on VCs of different scales (which addresses the question of VCs from different layers raised by AnonReviewer2).\n\n(2) More visualization and statistics on VCs to better illustrate their properties.\n", "Thanks for your interest in our paper, and sorry to hear about the difficulty you’ve met in reproducing our results. We’d like to clarify the points of confusion about our proposed method.\n\n> Definition of VC & Not well self-contained.\nWhile we have cited Wang et al. 2015 to acknowledge the original idea of visual concepts, we offered a detailed review of this key background in Section 3. 
Please see the description and figures in that section for the detailed explanation of VCs. Moreover, we’d appreciate it a lot if you could share more specific problems you have had in understanding VCs to help us make the definition clearer.\n\n> Data Partition.\nPlease note that in the first paragraph of Section 5.1 we have stated that “we use the split proposed by Ravi & Larochelle (2017) consist of 64 training classes, 16 validation classes and 20 testing classes”, which is a broadly used public split in few-shot learning evaluations.\n\n> Source code\nWe planned to open-source our code after the paper gets accepted, but since you raised the reproducibility issue, we will publish it soon. Nevertheless, we think this work is fairly easy to implement even from scratch. Most components of our model can be implemented using standard ML toolkits, e.g. the clustering methods can be found in the sklearn package (http://scikit-learn.org/) and spherecluster (https://github.com/clara-labs/spherecluster). After VC extraction, both the nearest neighbor and likelihood methods can also be easily implemented by hand or using standard toolkits.\n\n> Hiccups in Methodology\nPlease note that in the first paragraph of Section 4.2 (right behind equation (4)), we have stated that “ K(b, b’) is the similarity between the binary VC-encodings b and b’ ”. That is to say, b’ is a VC-encoding, which we have defined in Section 4.1. Other hyperparameters are also stated in Section 5.1. We’d appreciate it a lot if you could share other specific hiccups to help us revise the writing.\n\n> Method to Obtain the Threshold\nPlease note that in the second paragraph of Section 4.1, we have stated that “the threshold will be found by a grid-search which outputs the smallest threshold ensuring that coverage >= 0.8 with step size 0.001”. This means the threshold is calculated in real time for each trial. Thus we can’t provide you with an actual number for it. Also, we have defined coverage and firerate in the first paragraph of this section, which we guess would be the “other values” you have mentioned. \n\n> Hardware Specification and Computational Details\nIn our experiment, we use 1 NVIDIA Titan X GPU. Regarding the computational cost, there are two core formulas in our model, equations (4) and (5), which both have complexity O(NV), where N is the number of pixels (h by w) and V is the number of visual concepts. In practice, the order of magnitude for both N and V is 2. If you vectorize all the operations, it will be quite fast. Thus, in our experiment, each trial of few-shot learning can be finished in less than 2 minutes. It would be good if you could share your implementation so that we could help to troubleshoot your efficiency problem.\n\nOverall, we thank you for your valuable feedback. Further specific suggestions will also be highly appreciated.\n
One explanation is the lack of source code. Without the source code, it becomes much more difficult to produce the same results. Thus our group had to implement the algorithm by hand. However, we noticed that the methodology contained a few hiccups. For one thing, there was a lack of specificity within the paper at times that put the burden of choosing many non-obvious design choices on us. For instance, many hyperparameters could use more explanation. One example occurs when the variable b’ is introduced in the function K(b, b’). It’s not very clear what b’ represents at first, and it took a bit of pondering to realize it is another input VC-Encoding argument. Variables such as b’ could be provided with more detail so as to ease any attempts at reproducibility.\n\nAnother issue is that the method used to derive some values, such as the threshold for creating the visual encodings, is missing. Though a general process is described to find the threshold value, the exact value is not given. Because the threshold is dependent on other values, which are also not provided, it is difficult to estimate a value that would give the same results as this paper. Finally, the formulas provided seem to be quite resource intensive; however, the hardware used and the amount of computing power required to train and use these models are not provided. It would be incredibly beneficial to know what their hardware specifications are, since our basic implementation of their similarity function ran with a running time that grew quadratically with the number of visual concepts, linearly with the number of pixels, and linearly with the number of neighbours for each pixel. This turned out to be prohibitively expensive to run off a simple laptop, taking over two hours to build and classify with only 15 training images and 15 validation images, and an implementation on a GPU with 80 images took over 30 hours to complete. Maybe our implementation is poorly optimized? Unfortunately we cannot know unless we have hardware or computational details to compare to.\n\nOverall, while this paper shows potential for an extremely interesting new learning algorithm, the results are difficult to duplicate. Because of the ambiguity of some of the terms and concepts, it was challenging to follow the protocol and replicate the values that were proposed by the authors. For these reasons, we feel this paper could be improved by providing more parameter values, and releasing the source code so that the results may be replicated." ]
[ 5, -1, 7, 4, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, -1, 4, 5, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_BJ_QxP1AZ", "rktB7jnXf", "iclr_2018_BJ_QxP1AZ", "iclr_2018_BJ_QxP1AZ", "SJ_tUo2QM", "By1Lg8Ygz", "rJxG4bqlG", "rkj8j16lM", "iclr_2018_BJ_QxP1AZ", "ry84z1fff", "iclr_2018_BJ_QxP1AZ" ]
iclr_2018_B1i7ezW0-
Semi-Supervised Learning via New Deep Network Inversion
We exploit a recently derived inversion scheme for arbitrary deep neural networks to develop a new semi-supervised learning framework that applies to a wide range of systems and problems. The approach reaches current state-of-the-art methods on MNIST and provides reasonable performances on SVHN and CIFAR10. Through the introduced method, residual networks are for the first time applied to semi-supervised tasks. Experiments with one-dimensional signals highlight the generality of the method. Importantly, our approach is simple, efficient, and requires no change in the deep network architecture.
rejected-papers
The paper proposes a novel approach for DNN inversion mainly targeted towards semi-supervised learning. However the semi-supervised learning results are not competitive enough. Although the authors mention in the author-response that semi-supervised learning is not the main goal of the paper, the experiments and claims of the paper are mainly targeted towards semi-supervised learning. As the approach for inversion is novel, the paper could be motivated from a different angle with appropriate supporting experiments. In its current form it's not suitable for publication.
val
[ "BJDhjYXlz", "HkBSVnBxG", "HyvZDmueM", "BJ2lgU6Xf", "H1qHBZOQM", "HkqVJzQff", "rJHWkfXzM", "r1wA0Zmff" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author" ]
[ "After reading the revision:\n\nThe authors addressed my detailed questions on experiments. It appears sometimes the entropy loss (which is not the main contribution of the paper) is essential to improve performance; this obscures the main contribution.\n\nOn the other hand, the theoretical part of the paper is not really improved in my opinion, I still can not see how previous work by Balestriero and Baraniuk 2017 motivates and backups the proposed method.\n\nMy rating of this paper would remain the same. \n\n============================================================================================\n\nThis paper propose to use the reconstruction loss, defined in a somewhat unusual way, as a regularizar for semi-supervised learning.\n\nPros:\n\nThe intuition is that the ReLU network output is locally linear for each input, and one can use the conjugate mapping (which is also linear) for reconstructing the inputs, as in PCA. Realizing that the linear mapping is the derivative of network output w.r.t. the input (the Jacobian), the authors proposed to use the reconstruction loss defined in (8). Different from typical auto-encoders, this work does not require another reconstruction network, but instead uses the \"derivative\". This observation is neat in my opinion, and does suggest a different use of the Jacobian in deep learning. The related work include auto-encoders where the weights of symmetric layers are tied. \n\nCons:\n\nThe motivation (Section 2) needs to be improved. In particular, the introduction/review of the work of Balestriero and Baraniuk 2017 not very useful to the readers. Notations in eqns (2) and (3) are not fully explained (e.g., boldface c). Intuition and implications of Theorem 1 is not sufficiently discussed: what do you mean by optimal DNN, what is the criteria for optimality? is there a generative assumption of the data underlying the theorem? and the assumption of all samples being norm 1 seems too strong and perhaps limits its application? As far as I see, section 2 is somewhat detached from the rest of the paper.\n\nThe main contribution of this paper is supposed to be the reconstruction mapping (6) and its effect in semi-supervised learning. The introduction of entropy regularization in sec 2.3 seems somewhat odd and obscures the contribution. It also bears the questions that how important is the entropy regularization vs. the reconstruction loss. In experiments, results with beta=1.0 need to be presented to assess the importance of network inversion and the reconstruction loss. Also, a comparison against typical auto-encoders (which uses another decoder networks, with weights possibly tied with the encoder networks) is missing.\n\n", "In summary, the paper is based on a recent work Balestriero & Baraniuk 2017 to do semi-supervised learning. In Balestriero & Baraniuk, it is shown that any DNN can be approximated via a linear spline and hence can be inverted to produce the \"reconstruction\" of the input, which can be naturally used to do unsupervised or semi-supervised learning. This paper proposes to use automatic differentiation to compute the inverse function efficiently. The idea seems interesting. However, I think there are several main drawbacks, detailed as follows:\n\n1. The paper lacks a coherent and complete review of the semi-supervised deep learning. Herewith some important missing papers, which are the previous or current state-of-the-art.\n\n[1] Laine S, Aila T. Temporal Ensembling for Semi-Supervised Learning[J]. 
arXiv preprint arXiv:1610.02242, ICLR 2017.\n[2] Li C, Xu K, Zhu J, et al. Triple Generative Adversarial Nets[J]. arXiv preprint arXiv:1703.02291, NIPS 2017.\n[3] Dai Z, Yang Z, Yang F, et al. Good Semi-supervised Learning that Requires a Bad GAN[J]. arXiv preprint arXiv:1705.09783, NIPS 2017.\n\nBesides, some papers should be mentioned in the related work, such as Kingma et al. 2014. I'm not an expert in network inversion and am not sure whether the related work of this part is sufficient or not.\n\n2. The motivation is not sufficient and not well supported. \n\nAs stated in the introduction, the authors think there are several drawbacks of existing methods including \"training instability, lack of topology generalization and computational complexity.\" Based on my knowledge, there are two main families of semi-supervised deep learning methods, classified by whether or not they depend on deep generative models. The generative approaches based on VAEs and GANs are time consuming, but according to my experience, the training of VAE-based methods is stable and the topology generalization ability of such methods is good. Besides, the feed-forward approaches including [1] mentioned above are efficient and not too sensitive with respect to the network architectures. Overall, I think the drawbacks mentioned in the paper are not common in existing methods and I do not see clear benefits of the proposed method. Again, I strongly suggest the authors provide a complete review of the literature.\n\nFurther, please explicitly support your claim via experiments. For instance, the proposed method should be compared with the discriminative approaches including VAT and [1] in terms of training efficiency. It's not fair to say GAN-based methods require more training time because these methods can do generation and style-class disentanglement while the proposed method cannot.\n\n3. The experimental results are not so convincing. \n\nFirst, please systematically compare your methods with existing methods on the widely adopted benchmarks including MNIST with 20, 100 labels, SVHN with 500, 1000 labels and CIFAR10 with 4000 labels. It is not safe to say the proposed method is the state-of-the-art by only showing the results in one setting.\n\nSecond, please report the results of the proposed method with architectures comparable to those used in previous methods and state clearly the number of parameters in each model. Resnet is powerful but previous methods did not use that.\n\nLast, show the sensitivity results of the proposed method by tuning alpha and beta. For instance, please show the actual contribution of the proposed reconstruction loss to the classification accuracy, with and without the other losses.\n\nI think the quality of the paper should be further improved by addressing these problems and currently it should be rejected.", "This paper proposed a new optimization framework for semi-supervised learning based on a derived inversion scheme for deep neural networks. The numerical experiments show a significant improvement in the accuracy of the approach.", "We thank you for your constructive comments and targeted concerns. We performed a clarification of the tables. Concerning the semi-supervised results, given the differences between our new approach (which uses a given ‘fixed’ training set and our new inversion formula that works for arbitrary deep networks) vs. GANs (which use an ‘unlimited’ training set), we feel that a difference of less than 3% on SVHN and 5% on CIFAR is reasonable. 
Indeed, the semi-supervised classification results are not the main goal of the paper. They are mainly to support our main goal of developing an inversion formula for arbitrary deep networks, which has many applications beyond semi-supervised learning. The paper also demonstrates how our inversion formula enables semi-supervised learning with Resnets, which is totally novel. To summarize, the goal of this paper is not to merely improve current approaches to semi-supervised learning but to open up a new way to work with deep nets of all kinds.\nRegards", "Thanks for the rebuttal and revision. Based on the complete survey and experiments, I think the contribution of this paper is not significant. Intuitively, I cannot see why this kind of reconstruction is more useful for deep learning, compared with existing network inversion methods. Practically, semi-supervised learning is the most important scenario in this paper while the results in this setting are not competitive to the state-of-the-art in two real datasets. Therefore, I stand by my reviews. BTW, it's better to improve the layout of the article, especially the tables.", "Thanks for your review and questions. Concerning the comparison with pure auto-encoders, we refer to the ladder network in our related work which is a special case of auto-encoder used for semi-supervised learning. \n\nWe have added additional motivation for our loss functions and their implications. In particular, we have provided two new experiments on CIFAR10 and SVHN with the hyper-parameters $\\beta=1$, which eliminates the entropy term from the loss function. As you can see from the new experiments, the entropy loss contributes significantly on SVHN but not on CIFAR10.\n", "Thanks for your insightful comments and suggestions. The paper has been updated with the following changes.\n\nThe related work section has been completed by adding recent published papers on semi-supervised learning. After a careful review of recent GAN literature, we think that our concerns about GANs for semi-supervised learning were incorrect; these comments have thus been removed. \n\nHowever, we emphasize that the primary aim of the paper is a new signal reconstruction (inversion) method for a broad range of deep nets. The application to semi-supervised learning (for which we reach very reasonable results with very few changes to the supervised learning ML pipeline) is secondary. \n\nConcerning the experiments, we have updated the paper with new experiments on SVHN and CIFAR10 to support our approach (we expect the new experiments to be fully completed in 20 days). We have limited our experiments to the CNN topology with ReLU and sigmoid nonlinearities in order to provide a more clear comparison with the literature (the literature has not yet applied Resnets to semi-supervised learning). We also added details on the number of parameters of each model in the appendix.\n", "Thanks for reviewing our work. We invite additional comments as the review period progresses; we will do our best to respond asap.\n" ]
[ 5, 4, 7, -1, -1, -1, -1, -1 ]
[ 4, 5, 2, -1, -1, -1, -1, -1 ]
[ "iclr_2018_B1i7ezW0-", "iclr_2018_B1i7ezW0-", "iclr_2018_B1i7ezW0-", "H1qHBZOQM", "rJHWkfXzM", "BJDhjYXlz", "HkBSVnBxG", "HyvZDmueM" ]
iclr_2018_ByzvHagA-
Disentangled activations in deep networks
Deep neural networks have been tremendously successful in a number of tasks. One of the main reasons for this is their capability to automatically learn representations of data in levels of abstraction, increasingly disentangling the data as the internal transformations are applied. In this paper we propose a novel regularization method that penalizes covariance between dimensions of the hidden layers in a network, something that benefits the disentanglement. This makes the network learn nonlinear representations that are linearly uncorrelated, yet allows the model to obtain good results on a number of tasks, as demonstrated by our experimental evaluation. The proposed technique can be used to find the dimensionality of the underlying data, because it effectively disables dimensions that aren't needed. Our approach is simple and computationally cheap, as it can be applied as a regularizer to any gradient-based learning model.
rejected-papers
The novelty of the paper is limited and it lacks comparisons with relevant baselines, as pointed out by the reviewers.
train
[ "H1dMyvFgG", "HJbgHsYlM", "rkhaGO9gG", "Hy0nCO6XG", "BkbMROTQG", "Sy5z-uTXf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "The authors propose a penalization term that enforces decorrelation between the dimensions of the representation \nThey show that it can be included as additional term in cost functions to train generic models.\nThe idea is simple and it seems to work for the presented examples.\n\nHowever, they talk about gradient descent using this extra term, but I'd like to see the derivatives of the \nproposed term depending on the parameters of the model (and this depends on the model!). On the other hand, \ngiven the expression of the proposed regulatization,\nit seems to lead to non-convex optimization problems which are hard to solve. Any comment on that?.\n\nMoreover, its results are not quantitatively compared to other Non-Linear generalizations of PCA/ICA designed for similar goals (e.g. those cited in the \"related work\" section or others which have been proved to be consistent non-linear generalizations of PCA such as: Principal Polynomial Analysis, Dimensionality Reduction via Regression that follow the family introduced in the book of Jolliffe, Principal Component Analysis).\n\nMinor points: Fig.1 conveys not that much information.", "\nI think the first intuition is interesting. However I think the benefits are not clear enough. Maybe finding better examples where the benefits of the proposed regularization are stressed could help. \n\nThere is a huge amount of literature about ICA, unmixing, PCA, infomax... based on this principle that go beyond of the proposal. I do not see a clear novelty in the proposal. \n\nFor instance the proposed regularization can be achieved by just adding a linear combination at the layer which based on PCA. As shown in [Szegedy et al 2014, \"Intriguing properties of neural networks\"] adding an extra linear transformation does not change the expressive power of the representation. \n\n\n- \"Inspired by this, we consider a simpler objective: a representation disentangles the data well when its components do not correlate...\"\n\nThe first paragraph is confusing since jumps from total correlation to correlation without making clear the differences.\nAlthough correlation is a second oder approach to total correlation are not the same. This is extremely important since the whole proposal is based on that.\n\n- Sec 2.1. What prevents the regularization to enforce the weights in the linear layers to be very small and thus minimize the covariance. I think the definition needs to enforce the out-diagonal terms in C to be small with respect to the terms in the diagonal. \n\n- All the evaluation measures are based on linear relations, some of them should take into account non-linear relations (i.e. total correlation, mutual information...) in order to show that the method gets something interesting.\n\n- The first experiment (dim red) is not clear to me. The original dimensionality of the data is 4, and only a linear relation is introduced. I do not understand the dimensionality reduction if the dimensionality of the transformed space is 10. Also the data problem is extremely simple, and it is not clear the didactic benefit of using it. I think a much more complicated data would be more interesting. Besides L_1 is not well defined. If it is L_1 norm on the output coefficients the comparison is misleading. \n\n- Sec 3.3. As in general the model needs to be compared with other regularization techniques to stress its benefits.\n\n- Sec 3.4. Here the comparison makes clear that not a real benefit is obtained with the proposal. 
The idea behind regularization is to help the model avoid overfitting and thus improve the quality of the prediction on future samples. However, the MSE obtained when not using regularization is the same as (or even smaller than) when using it. \n", "This paper presents a regularization mechanism which penalizes covariance between all dimensions in the latent representation of a neural network. This penalty is meant to disentangle the latent representation by removing shared covariance between each dimension. \n\nWhile the proposed penalty is described as a novel contribution, there are multiple instances of previous work which use the same type of penalty (Cheung et. al. 2014, Cogswell et. al. 2016). Like this work, Cheung et. al. 2014 propose the XCov penalty which penalizes cross-covariance to disentangle subsets of dimensions in the latent representation of autoencoder models. Cogswell et. al. 2016 also propose a penalty (DeCov) similar to this work for reducing overfitting in supervised learning.\n\nThe novel contribution of the regularizer proposed in this work is that it also penalizes the variance of individual dimensions along with the cross-covariance. Intuitively, this should lead to dimensionality reduction as the model will discard variance in dimensions which are unnecessary for reconstruction. But given the similarity to previous work, the authors need to quantitatively evaluate the value of additionally penalizing the variance of each dimension as compared with earlier work. Cogswell et. al. 2016 explicitly remove these terms from their regularizer to prevent the dynamic range of the activations from being unnecessarily rescaled. It would be helpful to understand how this approach avoids this issue - i.e., if you penalize all the variance terms then you could just be arbitrarily rescaling the activities, so what prevents this trivial solution?\n\nThere doesn't appear to be a definition of the L1 penalty this paper compares against and it's unclear why this is a reasonable baseline. The evaluation metrics this work uses (MAPC, CVR, TdV, UD) need to be justified more in the absence of their use in previous work. While they evaluate their method on non-toy datasets such as CIFAR, they do not show what actual utility their proposed regularizer serves for such a dataset beyond having no regularization at all. Again, the utility of the evaluation metrics proposed in this work is unclear.\n\nThe toy examples are kind of interesting but it would be more compelling if the dimensionality reduction aspect extended to real datasets.\n\n> Our method has no penalty on the performance on tasks evaluated in the experiments, while it does disentangle the data\n\nThis needs to be expanded in the results as all the results presented appear to show Mean Squared Error increasing when increasing the weight of the regularization penalty.\n", "Thank you for the comments, we will take them into account. There will be some more evaluations and comparisons added to the final version (see above about Cheung et.al. and Cogswell et.al.).", "Thank you for the insightful comments, we will take them into account when preparing a final version of our paper!", "Thank you for the constructive and well-researched comments. Please consider the following comments on the mentioned related work.\n\nCheung, et.al. - This paper describes a different type of regularizer that penalizes the correlation between hidden units and labels. 
In contrast, we aim to learn a hidden representation that disentangles unknown underlying factors by penalizing correlation between hidden units. Hence, our method uses no labels and can go much further in disentangling the signal.\n\nCogswell et.al. - As noted by reviewer one, what sets our work apart from Cogswell et.al. is that we penalize the full covariance matrix while they only penalize the off-diagonal elements. The reason that we included the diagonal is that this will lead to a lower-variance hypothesis, by removing information not needed to solve the task from the representation, which in turn yields guaranteed lower excess risk (Maurer 2009). Also, we got slightly better empirical results doing this and will include comparisons to Cogswell et.al. in the final version of the paper.\nAnother difference between our model and that of Cogswell et.al. is that we use the L1 matrix norm (in contrast to the Frobenius norm) in order to promote a sparse solution. \n" ]
[ 6, 5, 4, -1, -1, -1 ]
[ 3, 4, 3, -1, -1, -1 ]
[ "iclr_2018_ByzvHagA-", "iclr_2018_ByzvHagA-", "iclr_2018_ByzvHagA-", "H1dMyvFgG", "HJbgHsYlM", "rkhaGO9gG" ]
iclr_2018_SySpa-Z0Z
From Information Bottleneck To Activation Norm Penalty
Many regularization methods have been proposed to prevent overfitting in neural networks. Recently, a regularization method has been proposed to optimize the variational lower bound of the Information Bottleneck Lagrangian. However, this method cannot be generalized to regular neural network architectures. We present the activation norm penalty that is derived from the information bottleneck principle and is theoretically grounded in a variation dropout framework. Unlike in previous literature, it can be applied to any general neural network. We demonstrate that this penalty can give consistent improvements to different state of the art architectures both in language modeling and image classification. We present analyses on the properties of this penalty and compare it to other methods that also reduce mutual information.
rejected-papers
All reviewers have acknowledged that the proposed regularization is novel and also results in some empirical improvements on the reported language modeling and image classification tasks. However, there are serious concerns about the writing and rigor of the paper (reviewers Anon1 and Anon3). The authors have not uploaded any revision of the paper to address these concerns.
train
[ "rJgVu2HgM", "Hks-5ZwlG", "rJhMyTnlf", "r1w72HamG", "rkxNtHpXG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author" ]
[ "The paper puts forward Activation Norm Penalty (\"ANR\", an L_2 type regularization on the activations), deriving it from the Information Bottleneck principle. As usual with Information Bottleneck style constructions, the loss takes on a variational form.\n\nThe experiments demonstrate small but consistent gains with ANR across a number of domains (Language modelling on small datasets, plus image classification) and baseline models.\n\nA couple of things that could be improved:\n\n- The abstract claims to ground the ANR in the variational dropout framework. When it is applied without dropout to image classification, shouldn't that be explained?\n\n- Maybe dropping the determinant term also deserves some justification.\n\n- Very recently, Activation Regularization by Merity (https://arxiv.org/abs/1708.01009) proposed a similar thing without theoretical justification. Maybe discuss it and the differences (if any) in the related work section?\n\n- The Information Bottleneck section doesn't feel like an integral part of the paper.\n\nMy two cents: this work has both theoretical justification (a rare thing these days) and reasonable experimental results.\n\nThere are a number of typos and oversights:\n\n- Abstract: \"variation dropout\"\n- Section 2:\n - x is never used\n - m in b = m + \\sigma\\epsilon is never defined (is it the x above?)\n- In Section 3.2, equation 11 subscript of x_i is missing\n- Section 6, Ungrammatical sentence: \"Even though L_2 ...\"\n", "This paper tries to create a mapping between activation norm penalties and information bottleneck framework using variational dropout framework. While I find the path taken interesting, the paper itself is hard to follow, mostly due to constantly flipping notation (cons section below lists some of the issues) and other linguistic errors. In the current form, this work is somewhere between a theoretical paper and an empirical one, however for a theoretical one it lacks strictness, while for empirical one - novelty.\n\nFrom theoretical perspective:\nThe main claim in this paper seems to be (10), however it is not formalised in any form of theorem, and so -- lacks a lock of strictness. Even under the assumption that it is updated, and made more strict - what is a crucial problem is a claim, that after arriving at:\ntr[ K(X, X) ] - ln( det[ K(X, X) ] )\ndropping the log determinant is anyhow justified, to keep the reasoning/derivation of the whole method sound. Authors claim that quote \"As for the determinant of the covariance matrix of Gaussian Process, we cannot easily evaluate or\nderive its gradient, so we do not include it in our computation.\" Which is not a justification for treating the derivation as a proper connection between penalising activities norm and information bottleneck idea. Terms like this will emerge in many other models, where one assumes diagonal covariance Gaussians; in fact the easiest model to justify this penalty is just to say one introduces diagonal Gaussian prior over activations, and that's it. Well justified penalty, easy to connect to many generalisation bound claims. However in the current approach the connection is simply not proven in the paper. \n\nFrom practical perspective:\nActivation norm penalties are well known objects, used for many years (L1 activation penalty for at least 6 years now, see \"Deep Sparse Rectifier Neural Networks\"; various activation penalties, including L2, changes in L2, etc. in Krueger PhD dissertation). 
Consequently, for a strong empirical paper I would expect many more baselines, including those proposed by Krueger et al.\n\nPros:\n- empirical improvements shown on two different classes of problems.\n- interesting path through variational dropout is taken to show some equivalences\n\nCons:\n- there is no proper proof of the claimed connection between IB and ANP, as it would require the determinant of K(X, X) to be 1.\n- work is not strict enough for a theoretical paper, and does not include enough comparisons for an empirical one\n- paper is full of typing/formatting/math errors/not well explained objects, which make it hard to read; to name a few:\n * fonts of objects used in equations change through the text - there is a \\textbf{W} and normal W, \\textbf{I} and normal I, similarly with X, Ts etc. without any explanation. I am assuming fonts are assigned randomly and they represent the same object.\n * dydp in (5) is in the wrong integral\n* \\sigma switches meaning between non-linearity and variance\n* what does it mean to define a normal distribution with 0 variance (like in (1))? Did the authors mean an actual \"degenerate Gaussian\", which does not have a PDF? But then p(y|t) is used in (5), and under such a definition it does not exist, only the CDF does. \n\n* \\Sigma_1 in 3.2 is undefined; was r(t) supposed to follow N(0, \\Sigma_1) instead of the written N(0, I)?", "This paper proposes an L2 norm regularization of the output of the penultimate layer of the network. This regularizer is derived based on a variational approximation to the information bottleneck objective. \n\nI’m giving this paper a low rating for the following main reasons:\n\n1. The activation norm penalty is derived using an approximation of the information bottleneck Lagrangian term. However, the approximation in terms of a KL divergence itself contains two terms (Equation 10) and the authors ignore one of those (the log-determinant) since it is intractable. The regularizer is based only on the other term. Dropping the log-determinant term, and thus the quality of the resulting regularizer, is not justified at all. It is just stated that we cannot easily evaluate the log-determinant term or its gradient, hence it is being dropped.\n\n2. The paper is not very well written, contains errors, undefined symbols and loose sentences, which make it very hard to follow. For example:\ni) Eq. 1: 0 \\cdot I_N … what is meant by this operation is not stated, also c_n is not defined\nii) “The prior of weight vector … with probability p” … not sure what it means to have a Gaussian mixture with probability p.\niii) “q”, “m” not defined in Eq. 2\niv) Eq. 5 seems broken, the last integral has two dydt terms, also the inequalities in that equation seem incorrect.\n\nThe authors show good gains on two language modeling tasks, CIFAR-10, and CIFAR-100.", "Hi,\n\nThank you for pointing out the connection to Stephen Merity's paper (we have not read that paper). \n\nApplication of ANP without dropout is a concern we have also noticed. We chose to share that set of experiments because we think it is important to share with people that empirically we found that ANP works in all settings of the neural network. 
The theoretical framework we have chosen (variational dropout) cannot accommodate/explain this setting, but we hope other theoretical frameworks can offer insight.\n\nThe editing suggestions are very insightful, and we plan to fully investigate the effect of the log-determinant term (both computationally and theoretically) and offer a penalty that is closer to the true form of IB :)", "Hi,\n\nThank you for your detailed response and a thorough read of our paper. Thank you for recognizing our effort to tackle this problem. As you rightly pointed out, the lack of treatment of the second term (log-determinant) is unsatisfying. We cannot show whether ANP is an upper/lower bound of the IB objective, nor can we prove that ANP \\propto IB. \n\nThere are two approaches to integrate the IB objective into a neural network: \n1) Weaken the architecture: Alemi et al. weakened the architecture to a diagonal Gaussian distribution. Similarly, we could choose to output a triangular matrix as the output of the RNN. We chose not to pursue this route because we are aiming to get SOTA performance on all our tasks. Weakening the architecture capacity (or altering a SOTA architecture) will not give us comparable performance. In fact, we did get improvements on all SOTA architectures (in a similar setting).\n\n2) Weaken the objective: ignore the log-determinant term, resulting in a weaker theoretical claim, but one that is simpler to train and optimize and has wider applicability to all architectures. We chose this path instead.\n\nWe do plan to tackle the log-determinant estimation problem fully, and investigate its effect (both in terms of performance improvement and runtime increase). We appreciate your comments and editing suggestions. We do not think having more empirical results is entirely necessary, as language modeling and image classification are two important domains that are already heavily optimized by the community." ]
[ 7, 4, 4, -1, -1 ]
[ 3, 3, 4, -1, -1 ]
[ "iclr_2018_SySpa-Z0Z", "iclr_2018_SySpa-Z0Z", "iclr_2018_SySpa-Z0Z", "rJgVu2HgM", "Hks-5ZwlG" ]
iclr_2018_ryykVe-0W
Learning Independent Features with Adversarial Nets for Non-linear ICA
Reliable measures of statistical dependence could potentially be useful tools for learning independent features and performing tasks like source separation using Independent Component Analysis (ICA). Unfortunately, many such measures, like mutual information, are hard to estimate and optimize directly. We propose to learn independent features with adversarial objectives (Goodfellow et al. 2014, Arjovsky et al. 2017) which optimize such measures implicitly. These objectives compare samples from the joint distribution and the product of the marginals without the need to compute any probability densities. We also propose two methods for obtaining samples from the product of the marginals using either a simple resampling trick or a separate parametric distribution. Our experiments show that this strategy can easily be applied to different types of model architectures and solve both linear and non-linear ICA problems.
rejected-papers
The paper proposes the use of GANs to match the joint distribution of features to the product of their marginals for ICA. The approach is totally plausible, but reviewers have complaints about the lack of rigor and analysis in terms of (i) the mixing conditions under which the proposed GAN-based approach will work, given that ICA is ill-posed for general nonlinear mixing, and (ii) comparison with prior work on linear and PNL ICA. Further, in most scenarios where GANs are used, one of the distributions is fixed (say, the real distribution) and the other is dynamic (the fake distribution) trying to come close to the fixed distribution during optimization. In the proposed method, the discriminator encodes the distance between the joint and the product of marginals, which are both dynamic during learning. It might be useful to comment on whether or not this has any implications with respect to increased instability of training, etc.
train
[ "SybdzO8Nf", "H1hlWndxM", "HyoEDdvxG", "ry2lpp_ez", "SytMSVTmf", "Sk1uKXTXf", "BkIBV767G" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "Thank you for your response.\n\nMaximizing independence under general mixing conditions does not necessarily lead to the recovery of the underlying independent sources (even up to the standard ambiguities); this is one of the major motivations why the linear and post-nonlinear ICA (PNL-ICA) tasks have been considered in the literature.\n\nConstructing new general ICA 'solvers' can have certain impact, however the merits of the proposed heuristic are not illustrated/clear.\n1)In case of linear and post-nonlinear ICA: Available off-the-shelf methods can solve 1-2 orders-of-magnitude larger tasks than the ones studied with high accuracy in a numerically robust way.\n2)For general non-linear ICA tasks: \n-One should investigate whether techniques maximizing an independence measure lead to provable solution, find the hidden sources. \n-In fact, using approximate independence measures [such as (4)] raises further unhandled issues. \n\nTo sum up, it would be crucial to (i) understand the validity domain of the studied scheme, (ii) make it comparable to existing methods (in terms of scalability, precision and robustness; at least in the ICA and PNL-ICA settings), and (iii) construct new well-posed non-linear ICA tasks.\n\nMy opinion has not changed.", "The paper proposes a GAN variant for solving the nonlinear independent component analysis (ICA) problem. The method seems interesting, but the presentation has a severe lack of focus.\n\nFirst, the authors should focus their discussion instead of trying to address a broad range of ICA problems from linear to post-nonlinear (PNL) to nonlinear. I would highly recommend the authors to study the review \"Advances in Nonlinear Blind Source Separation\" by Jutten and Karhunen (2003/2004) to understand the problems they are trying to solve.\n\nLinear ICA is a solved problem and the authors do not seem to be able to add anything there, so I would recommend dropping that to save space for the more interesting material.\n\nPNL ICA is solvable and there are a number of algorithms proposed for it, some cited already in the above review, but also more recent ones. From this perspective, the presented comparison seems quite inadequate.\n\nFully general nonlinear ICA is ill-posed, as shown already by Darmois (1953, doi:10.2307/1401511). Given this, the authors should indicate more clearly what is their method expected to do. There are an infinite number of nonlinear ICA solutions - which one is the proposed method going to return and why is that relevant? There are fewer relevant comparisons here, but at least Lappalainen and Honkela (2000) seem to target the same problem as the proposed method.\n\nThe use of 6 dimensional example in the experiments is a very good start, as higher dimensions are quite different and much more interesting than very commonly used 2D examples.\n\nOne idea for evaluation: comparison with ground truth makes sense for PNL, but not so much for general nonlinear because of unidentifiability. For general nonlinear ICA you could consider evaluating the quality of the estimated low-dimensional data manifold or evaluating the mutual information of separated sources on new test data.\n\nUpdate after author feedback: thanks for the response and the revision. The revision seems more cosmetic and does not address the most significant issues so I do not see a need to change my evaluation.", "The focus of the paper is independent component analysis (ICA) and its nonlinear variants such as the post non-linear (PNL) ICA model. 
Motivated by the fact that estimating mutual information and similar dependency measures require density estimates and hard to optimize, the authors propose a Wasserstein GAN (generative adversarial network) based solution to tackle the problem, with illustrations on 6 (synthetic) and 3-dimemensional (audio) examples. The primary idea of the paper is to use the Wasserstein distance as an independence measure of the estimated source coordinates, and optimize it in a neural network (NN) framework.\n\nAlthough finding novel GAN applications is an exciting topic, I am not really convinced that ICA with the proposed Wasserstein GAN based technique fulfills this goal.\n \nBelow I detail my reasons:\n\n1)The ICA problem can be formulated as the minimization of pairwise mutual information [1] or one-dimensional entropy [2]. In other words, estimating the joint dependence of the source coordinates is not necessary; it is worthwhile to avoid it.\n\n2)The PNL ICA task can be efficiently tackled by first 'removing' the nonlinearity followed by classical linear ICA; see for example [3].\n\n3)Estimating information theoretic (IT) measures (mutual information, divergence) is a quite mature field with off-the-self techniques, see for example [4,5,6,8]. These methods do not estimate the underlying densities; it would be superfluous (and hard).\n\n4)Optimizing non-differentiable IT measures can computationally quite efficiently carried out in the ICA context by e.g., Givens rotations [7]; differentiable ICA cost functions can be robustly handled by Stiefel manifold methods; see for example [8,9].\n\n5)Section 3.1: This section is devoted to generating samples from the product of the marginals, even using separate generator networks. I do not see the necessity of these solutions; the subtask can be solved by independently shuffling all the coordinates of the sample.\n\n6)Experiments (Section 6): \ni) It seems to me that the proposed NN-based technique has some quite serious divergence issues: 'After discarding diverged models, ...' or 'Unfortunately, the model selection procedure also didn't identify good settings for the Anica-g model...'.\nii) The proposed method gives pretty comparable results to the chosen baselines (fastICA, PNLMISEP) on the selected small-dimensional tasks. In fact, [7,8,9] are likely to provide more accurate (fastICA is a simple kurtosis based method, which is \na somewhat crude 'estimate' of entropy) and faster estimates; see also 2).\n\nReferences:\n[1] Pierre Comon. Independent component analysis, a new concept? Signal Processing, 36:287-314, 1994.\n[2] Aapo Hyvarinen and Erkki Oja. Independent Component Analysis: Algorithms and Applications. Neural Networks, 13(4-5):411-30, 2000. \n[3] Andreas Ziehe, Motoaki Kawanabe, Stefan Harmeling, and Klaus-Robert Muller. Blind separation of postnonlinear mixtures using linearizing transformations and temporal decorrelation. Journal of Machine Learning Research, 4:1319-1338, 2003.\n[4] Barnabas Poczos, Liang Xiong, and Jeff Schneider. Nonparametric divergence: Estimation with applications to machine learning on distributions. In Conference on Uncertainty in Artificial Intelligence, pages 599-608, 2011.\n[5] Arthur Gretton, Karsten M. Borgwardt, Malte J. Rasch, Bernhard Scholkopf, Alexander Smola. A Kernel Two-Sample Test. Journal of Machine Learning Research, 13:723-773, 2012.\n[6] Alan Wisler, Visar Berisha, Andreas Spanias, Alfred O. Hero. A data-driven basis for direct estimation of functionals of distributions. TR, 2017. 
(https://arxiv.org/abs/1702.06516) \n[7] Erik G. Learned-Miller, John W. Fisher III. ICA using spacings estimates of entropy. Journal of Machine Learning Research, 4:1271-1295, 2003.\n[8] Francis R. Bach. Michael I. Jordan. Kernel Independent Component Analysis. Journal of Machine Learning Research 3: 1-48, 2002.\n[9] Hao Shen, Stefanie Jegelka and Arthur Gretton. Fast Kernel-Based Independent Component Analysis, IEEE Transactions on Signal Processing, 57:3498-3511, 2009.\n", "\nThe idea of ICA is constructing a mapping from dependent inputs to outputs (=the derived features) such that the outputs are as independent as possible. As the input/output densities are often not known and/or are intractable, natural independence measures such as mutual information are hard to estimate. In practice, the independence is characterized by certain functions of higher order moments -- leading to several alternatives in a zoo of independence objectives. \n\nThe current paper makes the iteresting observation that independent features can also be computed via adversarial objectives. The key idea of adversarial training is adapted in this context as comparing samples from the joint distribution and the product of the marginals. \n\nTwo methods are proposed for drawing samples from the products of marginals. \nOne method is generating samples but permuting randomly the sample indices for individual marginals - this resampling mechanism generates approximately independent samples from the product distribution. The second method is essentially samples each marginal separately. \n\nThe approach is demonstrated in the solution of both linear and non-linear ICA problems.\n\nPositive:\nThe paper is well written and easy to follow on a higher level. GAN's provide a fresh look at nonlinear ICA and the paper is certainly thought provoking. \n\n\nNegative:\nMost of the space is devoted for reviewing related work and motivations, while the specifics of the method are described relatively short in section 4. There is no analysis and the paper is \nsomewhat anecdotal. The simulation results section is limited in scope. The sampling from product distribution method is somewhat obvious.\n\n\nQuestions:\n\n- The overcomplete audio source separation case is well known for audio and I could not understand why a convincing baseline can not be found. Is this due to nonlinear mixing?\nAs 26 channels and 6 channels are given, a simple regularization based method can be easily developed to provide a baseline performance, \n\n\n- The need for normalization in section 4 is surprising, as it obviously renders the outputs dependent. \n\n- Figure 1 may be misleading as h are not defined \n", "Thanks for the feedback and interesting references.\n\nMany of the criticisms here seem to be based on notions which are specific to linear ICA. Unfortunately this seems to be attributable to a lack of clarity in the paper and we'd like to emphasize that we didn't try to provide an alternative to methods which have been specifically designed for that problem. We evaluated our methods on linear ICA and PNL ICA because solutions to these problems are known and comparisons were possible but the point is that the method we propose is less dependent on the specific mixing process.\n\n\"1)The ICA problem can be formulated as the minimization of pairwise mutual information [1] or one-dimensional entropy [2]. 
In other words, estimating the joint dependence of the source coordinates is not necessary; it is worthwhile to avoid it.\"\n\nThe first observation is specific to the linear case but interesting to know about. Working with the entropy seems to be based on the same ideas as infomax and introduces other limitations but we consider it complementary to our approach. \n\n\"2)The PNL ICA task can be efficiently tackled by first 'removing' the nonlinearity followed by classical linear ICA; see for example [3].\"\n\nWhile we didn't aim to be optimal for the PNL case either, we'd like to point out that the approach in [3] is still an iterative procedure.\n\n\"4)Optimizing non-differentiable IT measures can computationally quite efficiently carried out in the ICA context by e.g., Givens rotations [7]; differentiable ICA cost functions can be robustly handled by Stiefel manifold methods; see for example [8,9].\"\n\nThese points seem to be specific to the linear case again but are once again interesting.\n\n\"5)Section 3.1: This section is devoted to generating samples from the product of the marginals, even using separate generator networks. I do not see the necessity of these solutions; the subtask can be solved by independently shuffling all the coordinates of the sample.\"\n\nThe first solution is indeed basically shuffling the coordinates of the sample but we admit that the text was a bit overly didactic and we shortened it a bit. The separate generator networks could be interesting in a setup in which shuffling is not desirable because there are temporal dependencies, for example. We changed the text to make this more clear.\n\n\"6)Experiments (Section 6): \ni) It seems to me that the proposed NN-based technique has some quite serious divergence issues: 'After discarding diverged models, ...' or 'Unfortunately, the model selection procedure also didn't identify good settings for the Anica-g model...'.\nii) The proposed method gives pretty comparable results to the chosen baselines (fastICA, PNLMISEP) on the selected small-dimensional tasks. In fact, [7,8,9] are likely to provide more accurate (fastICA is a simple kurtosis based method, which is a somewhat crude 'estimate' of entropy) and faster estimates; see also 2).\"\n\nThe first point is fair in that our model selection heuristic wasn't always able to identify the best model and that GAN training can be unstable. That said, the discarding of models was mainly because we performed a random search with aggressive hyperparameter ranges which could select very high learning rates, for example. The second point is fair too in that the cited methods might prove to be stronger baselines. We don't think that obtaining comparable results with a more general method is a bad thing but that is of course somewhat subjective.\n\nWe'd finally like to point out that we don't propose the use of the Wasserstein GAN loss specifically but GAN type objectives in general for learning independent features. The WGAN example in the text was mainly there to illustrate how in some cases the objective can be seen as a proxy for the mutual information.\n\nThanks again.", "We first like to thank the reviewer for the valuable feedback and suggestions.\n\nWe acknowledge that the linear and PNL ICA problems are more or less solved. However, we respectfully disagree that we should drop the treatment of these problems because we still think it is interesting that they can be solved with a new approach which in our opinion is very different from previous methods. 
This was not obvious to us when we started our research. \n\nA better definition of the version of the non-linear problem would indeed have been desirable in the context of source separation. While we presented the overcomplete case as a first step for evaluating the method, we surely realize that it doesn't come with any theoretical guarantees and that the obtained correlation scores are limited in interpretability. We adjusted the text to make this more clear. \n\nThe alternative to use estimates of the mutual information is certainly something we considered for both evaluation and model selection but this proved to be difficult in general. We tried both Kraskov's nearest-neighbor estimator and the Hilbert Schmidt Independence Criterion but both these estimators typically seemed to consider the features fully independent during most stages of training and don't take into account how informative they are about the input. We still like to thank the reviewer for motivating us to pursue this direction further and like to hear more in detail what is meant with the \"quality of the low-dimensional data manifold\". If there is some principled way of measuring the latter we would certainly like to investigate it.\n\nThanks.\n\n", "First of all, thanks for the feedback and suggestions.\n\nWe removed some of the text which basically reiterated the sampling process as it seemed that multiple reviewers found it redundant. As you suggested, we made the definitions of the full system in Section 4 a bit more explicit. \n\n\"The overcomplete audio source separation case is well known for audio and I could not understand why a convincing baseline can not be found. Is this due to nonlinear mixing?\"\n\nGood question and something we certainly looked at. The most important reason for our lack of a baseline here is that the real-world audio separation setting is slightly more complicated due to the different arrival times of the source signals and reverberation. Most multi-channel audio separation methods we know of work in the frequency domain to alleviate some of these issues, introducing the necessity to predict phase information to reconstruct the raw signals. Another issue was that the evaluation criteria and benchmark data sets in this domain still seem to be under active development but of course we'd love to hear about good real-world benchmarks we might have overlooked. \n\n\"As 26 channels and 6 channels are given, a simple regularization based method can be easily developed to provide a baseline performance, \"\n\nThis sounds interesting. Do you mean an auto-encoder with more standard regularization? In that case the hidden units wouldn't be trained to become independent but perhaps we misunderstood the suggestion.\n\n\"The need for normalization in section 4 is surprising, as it obviously renders the outputs dependent.\"\n\nWe normalize over samples in a batch and not over the units in the layer but admit that this was not clear in the paper. We changed the text to address this issue.\n\nWe also removed the figure you referred to as it didn't add much and took up a lot of space.\n\nThanks again.\n" ]
[ -1, 5, 3, 6, -1, -1, -1 ]
[ -1, 5, 5, 3, -1, -1, -1 ]
[ "SytMSVTmf", "iclr_2018_ryykVe-0W", "iclr_2018_ryykVe-0W", "iclr_2018_ryykVe-0W", "HyoEDdvxG", "H1hlWndxM", "ry2lpp_ez" ]
iclr_2018_ry0WOxbRZ
IVE-GAN: Invariant Encoding Generative Adversarial Networks
Generative adversarial networks (GANs) are a powerful framework for generative tasks. However, they are difficult to train and tend to miss modes of the true data generation process. Although GANs can learn a rich representation of the covered modes of the data in their latent space, the framework lacks an inverse mapping from data to this latent space. We propose Invariant Encoding Generative Adversarial Networks (IVE-GANs), a novel GAN framework that introduces such a mapping for individual samples from the data by utilizing features in the data which are invariant to certain transformations. Since the model maps individual samples to the latent space, it naturally encourages the generator to cover all modes. We demonstrate the effectiveness of our approach in terms of generative performance and learning rich representations on several datasets, including common benchmark image generation tasks.
rejected-papers
Reviewers recognize that the proposed method is somewhat novel but have strong reservations about the experimental evaluation. Discussion of some relevant papers is also missing (e.g., Li et al., 2017: ALICE). The authors have not responded to the many concerns expressed by the reviewers.
train
[ "Sk2jcxFlM", "SJqfr3qlf", "rJgavPneM", "B1eAjclgG", "HyPVk6p1G" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "public" ]
[ "The paper proposes a modified GAN objective, summarized in Eq.(3). It consists of two parts:\n(A) Classic GAN term: \\E_{x ~ P_{data} } \\log D'(x) + \\E_{z ~ P_{Z}, z' ~ P_{Z'} } \\log D'( G(z',E(x)) )\n(B) Invariant Encoding term: \\E_{x ~ P_{data} } [ \\log D(T(x),x) + \\E_{z' ~ P_{Z'} } \\log D( G(z',E(x)), x ) ]\n\nTerm (A) is standard, except the latent space of original GAN is decomposed into (1) the feature, which should be invariant between x and T(x), and (2) the noise, which is for the diversity of generated x.\n\nTerm (B) is the proposed invariant-encoding scheme. It is essentially a conditional GAN, where the the generated sample G(z',E(x)) is conditioned on input sample x, which guarantees that the generated sample is T(x) of x. \nIn fact, this could be theoretically justified. Suggestion: the authors might want to follow the proofs of Proposition 1 or 2 in [*] to show similar conclusion, making the paper stronger.\n\n[*] ALICE: Towards Understanding Adversarial Learning for Joint Distribution Matching, NIPS 2017\n\nThe definition of feature-invariant is E(T(x))=E(x)=E(G(z',E(x))), while the objective of the paper achieves T(x)=G(z',E(x)). Applying E() to both side yields the invariant features. It might be better to make this point clear.\n\nOverall Comments:\n\nOriginality: the proposed IVE-GAN algorithm is quite novel.\nQuality: The paper could be stronger, if the theoretical justification has been provided. \nClarity: Overall clear, while important details are missing. Please see some points in Detailed Comments.\nSignificance: The idea is interesting, it would be better if the quantitative evidence has been provided to demonstrate the use of the learned invariant feature. For example, some classification task to demonstrate the learned rotation-invariant feature shows higher accuracy.\n\nDetailed Comments:\n\n-- In Figure 1, please explains that \"||\" is the concatenation operator for better illustration.\n\n-- When summarizing the contributions of the paper, it is mentioned that \"our GANs ... without mode collapsing issues\". This is a strong point to claim. While precisely identifying the \"mode collapsing issue\" itself is difficult, the authors only show that samples in all modes are generated on the toy datasets. Please consider to rephrase. \n\n-- In Section 2, y is first used to indicate true/false of x in Eq.(1), then y is used to indicate the associated information (e.g., class label) of x in Eq.(2). Please consider to avoid overloading notations.\n\n-- In Eq.(3), the expectation \\E_{z ~ P_Z} in the 3rd term is NOT clear, as z is not involved in the evaluation. I guess it may be implemented as z=E(x), where x ~ P_{data}. From the supplement tables, It seems that the novel sample G(z', E(x)) is implemented as G evaluated on the concatenation of noise sample z' ~ P_{Z'} and encoded feature z=E(x). \nI am wondering how to generate novel samples? Related to this, Please clarify how to implement: \"To generate novel samples, we can draw samples z ~ P_Z as latent space\".\n\n-- Section 5, \"include a encoding unit\" ---> \"an\"\n\n-- In Supplement, please revise G(z'E(x)) to G(z', E(x)) in every table.\n", "This paper presents the IVE-GAN, a model that introduces en encoder to the Generative Adversarial Network (GAN) framework. 
The model is evaluated qualitatively through samples and reconstructions on a synthetic dataset, MNIST and CelebA.\n\nSummary: \nThe evaluation is superficial, no quantitative evaluation is presented and key aspects of the model are not explored. Overall, there just is not enough innovation or substance to warrant publication at this point.\n\nImpact:\nThe motivation given throughout the introduction -- to add an encoder (inference) network to GANs -- is a bit odd in the light of the existing literature. In addition to the BiGAN/ALI models that were cited, there are a number of (not cited) papers with various ways of combining GANs with VAE encoders to accomplish exactly this. If your goal was to improve reconstructions in ALI, one could simply add an reconstruction (or cycle) penalty to the ALI objective as advocated in the (not cited) ALICE paper (Li et al., 2017 -- \"ALICE: Towards Understanding Adversarial Learning for Joint Distribution Matching\"). \n\nThe training architecture presented here is novel as far as I know, though I am unconvinced that it represents an optimum in model design space. The model presented in the ALICE paper would seem to be a more elegant solution to the motivation given in this paper.\n\nModel Feature:\nThe authors should discuss in detail the interaction between the regular GAN pipeline and the introduced variant (with the transformations). Why is the standard GAN objective thrown in? I assume it is to allow you to sample directly from the noise in z (as opposed to z' which is used for reconstruction), but this is not discussed in much detail. The GAN objective and the added IVE objective seem like they will interact in not all together beneficial ways, with the IVE component pushing to make the distribution in z complicated. This would result in a decrease in sample quality. Does it? Exploration of this aspect of the model should be included in the empirical evaluation.\n\nAlso the addition of the transformations added to the proposed IVE pipeline seem to cause the latent variations z' to encode these variations rather than the natural variations that exist in the dataset. They would seem to make it difficult to encode someone face and make some natural manipulates (such as adjusting the smile) that are not included in this transformations.\n\nEmpirical Evaluation:\nComparison to BiGAN/ALI: The authors motivate their work by drawing comparisons to BiGAN/ALI, showing CelebA reconstructions from the ALI paper in the appendix. The comparison is not fair for two reasons, (1) authors should state that their reconstructions are made at a higher resolution (seems like 128x128, which is now standard but was not so when the BiGAN/ALI papers came out, they were sampled at 64x64), also, unlike the ALI results, they authors cut the background away from the CelebA faces. This alone could account for the difference between the two models, as ICE-GAN only has to encode the variability extant in faces and hair, ALI had to additionally encode the much greater variability in the background. The failure to control the experimental conditions makes this comparison inappropriate. \n\nThere is no quantitative evaluations at all. While many GAN papers do not place an emphasis on quantitative evaluations, at this point, I consider the complete lack of such an evaluation as a weakness of the paper. 
\n\nFinally, based on just the samples offered in the paper, which is admittedly a fairly weak standard, the model does not seem to be among the state-of-the-art on CelebA that have been reported in the literature. Given the rapid progress that is being made, I do not feel this should be could against this particular paper, but the quality of samples cannot be considered a compelling reason to accept the paper.\n\nMinor comment:\nThe authors appear to be abusing the ICLR style file by not leaving a blank line between paragraphs. This is annoying and not at all necessary since ICLR does not have a strict page limit. \n\nFigure 1 is not consistent with the model equations (in Eqns. 3). In particular, Figure 1 is missing the standard GAN component of the model.\n\nI assume that the last term in Eqns 3 should have G(z) as opposed to G(z',E(x)). Is that right?", "This paper proposes a GAN-based approach to learning factored latent representations of images. The latent space is factored into a set of variables encoding image identity and another set of variables encoding deformations of the image. Variations on this theme have been presented previously (see, e.g. \"Learning to Generate Chairs with Convolutional Neural Networks\" by Dosvitskiy et al., CVPR 2015). There's also some resemblance to methods for learning unsupervised embeddings based on discrimination between \"invariance classes\" of images (see, e.g. \"Discriminative Unsupervised Feature Learning with Convolutional Neural Networks\" by Dosovitskiy et al., NIPS 2014). The submitted paper is the first GAN-based paper on this precise topic I've seen, but it's hard to keep up these days.\n\nThe method described in the paper applies existing ideas about learning factored image representations which split features into subsets describing image type and variation. The use of GANs is a somewhat novel extension of existing conditional GAN methods. The paper is reasonably written, though many parentheses are missing. The qualitative results seem alright, and generally comparable to concurrent work on GANs. No concrete tasks or quantitative metrics are provided. Perhaps the authors could design a simple classification-based metric using a k-NN type classifier on top of the learned representations for the MNIST or CelebA tasks. The t-SNE plots suggest a comparison with the raw data will be favourable (though stronger baselines would also be needed). \n\nI'm curious why no regularization (e.g., some sort of GAN cost) was placed on the marginal distribution of the \"image type\" encodings E(x). It seems like it would be useful to constrain these encodings to be distributionally similar to the prior from which free samples will be drawn.", "Thanks a lot for bringing this paper to our attention. It is interesting to see that now also another group comes up with a methodology we originally proposed. Indeed, Antoniou et al. basically exploit the same idea as we propose: utilizing different variations of the same entity to learn a representation of the entity itself. Apart from the motivation (they use the method for augmentation, we focus on unsupervised representation learning), the main difference is that we produce the variations by defining transformations under which the true data generating distribution is invariant (such as small rotations or shifts of an image). In contrast, the referenced paper utilizes different samples with the same class membership. 
The latter procedure is a subcase of a setwise invariant transformation we describe in our paper, where a set is explicitly defined by its class membership and the transformation is indirectly given by the different samples within the set. This is a subcase of our proposed (general) definition and can obviously only be utilized if class membership is given, thus being unsuitable for an unsupervised setting.", "Different motiviation, but it seems like https://arxiv.org/abs/1711.04340 follow a similar approach. Can the authors comment on this?" ]
[ 5, 4, 5, -1, -1 ]
[ 4, 5, 4, -1, -1 ]
[ "iclr_2018_ry0WOxbRZ", "iclr_2018_ry0WOxbRZ", "iclr_2018_ry0WOxbRZ", "HyPVk6p1G", "iclr_2018_ry0WOxbRZ" ]
iclr_2018_S17mtzbRb
Forced Apart: Discovering Disentangled Representations Without Exhaustive Labels
Learning a better representation with neural networks is a challenging problem, which has been tackled from different perspectives in the past few years. In this work, we focus on learning a representation that would be useful in a clustering task. We introduce two novel loss components that substantially improve the quality of produced clusters, are simple to apply to arbitrary models and cost functions, and do not require a complicated training procedure. We perform an extensive set of experiments, supervised and unsupervised, and evaluate the proposed loss components on the two most common types of models, Recurrent Neural Networks and Convolutional Neural Networks, showing that the approach we propose consistently improves the quality of KMeans clustering in terms of mutual information scores and outperforms previously proposed methods.
rejected-papers
The paper proposes two regularizers for encouraging "clustered feature embeddings" (the use of "disentangled" in the title is misleading). Reviewers have raised points about the lack of proper motivation and justification for the regularizers. There are also concerns about the experiments conducted to evaluate the method, including the hierarchical classification setting. The missing comparison with relevant baselines has also been pointed out as a weakness. I feel the work is not yet mature.
train
[ "BkH22ZFxG", "HyDv0CYgz", "r1qbYHogz", "BkUT6TW7G", "B1etRT-7f", "HyYLRp-7G", "BytXAaW7f" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "This paper proposes two regularization terms to encourage learning disentangled representations. One term is applied to weight parameters of a layer just like weight decay. The other is applied to the activations of the target layer (e.g., the penultimate layer). The core part of both regularization terms is a compound hinge loss of which the input is the KL divergence between two softmax-normalized input arguments. Experiments demonstrate the proposed regularization terms are helpful in learning representations which significantly facilitate clustering performance.\n\nPros:\n(1) This paper is clearly written and easy to follow.\n\n(2) Authors proposed multiple variants of the regularization term which cover both supervised and unsupervised settings.\n\n(3) Authors did a variety of classification experiments ranging from time serials, image and text data.\n\nCons:\n(1) The design choice of the compound hinge loss is a bit arbitrary. KL divergence is a natural similarity measure for probability distribution. However, it seems that authors use softmax to force the weights or the activations of neural networks to be probability distributions just for the purpose of using KL divergence. Have you compared with other choices of similarity measure, e.g., cosine similarity? I think the comparison as an additional experiment would help explain the design choice of the proposed function.\n\n(2) In the binary classification experiments, it is very strange to almost randomly group several different classes of images into the same category. I would suggest authors look into datasets where the class hierarchy is already provided, e.g., ImageNet or a combination of several fine-grained image classification datasets.\n\nAdditionally, I have the following questions:\n(1) I am curious how the proposed method compares to other competitors in terms of the original classification setting, e.g., 10-class classification accuracy on CIFAR10. \n(2) What will happen for the multi-layer loss if the network architecture is very large such that you can not use large batch size, e.g., less than 10? \n\n(3) In drawing figure 2 and 3, if the nonlinear activation function is not ReLU, how would you exam the same behavior? Have you tried multi-class classification for the case “without proposed loss component” and does the similar pattern still happen or not?\n\nSome typos:\n(1) In introduction, “when the cosine between the vectors 1” should be “when the cosine between the vectors is 1”.\n\n(2) In section 4.3, “we used the DBPedia ontology dataset dataset” should be “we used the DBPedia ontology dataset”. \n\nI would like to hear authors’ feedback on the issues I raised.\n", "Summary\nThis paper proposes two regularizers that are intended to make the\nrepresentations learned in the penultimate layer of a classifier more conforming\nto inherent structure in the data, rather than just the class structure enforced\nby the classifier. One regularizer encourages the weights feeding into the\npenultimate layer to be dissimilar and the other encourages the activations\nacross samples (even if they belong to the same class) to be dissimilar.\n\nPros\n- The proposed regularizers are able to separate out the classes inherent in the\n data, even if this information is not provided through class labels. 
This is\nvalidated on several datasets using visualizations as well as quantitative\nmetrics based on mutual information.\n\nCons\n- It is not explained why it makes sense to first convert the weight vectors\n into probability distributions by applying the softmax function, and then\nmeasuring distances using KL divergence between the probability distributions.\nIt should be explained more clearly if there is there a natural interpretation\nof the weight vectors as probability distributions. Otherwise it is not obvious\nwhy the distance between the weight vectors is measured the way it is.\n\n- Similarly, the ReLU activations are also first converted into probability\n distributions by applying a softmax. It should be explained why the model does\nthis, as opposed to simply using dot products to measure similarity.\n\n- The model is not compared to simpler alternatives such as adding an\n orthogonality regularization on the weights, i.e., computing W^TW and making\nthe diagonals close to 1 and all other terms 0. Similar regularizers can be\napplied for activation vectors as well.\n\n- The objective of this paper seems to be to produce representations that are\n easy to separate into clusters. This topic has a wealth of previous work. Of\nparticular relevance are methods such as t-SNE [1], parametric t-SNE [2], and\nDEC [3]. The losses introduced in this paper are fairly straight-forward.\nTherefore it would be good to compare to these baselines to show that a simple\nloss function is sufficient to achieve the objective.\n\n- Disentangling usually refers to disentangling factors of variation, for\n example, lighting, pose, and object identity which affect the appearance of a\ndata point. This is different from separability, which is the property of a\nrepresentation that makes the presence of clusters evident. This paper seems to\nbe about learning separable representations, whereas the title suggests that it\nis about disentangled ones. \n\nQuality\nThe design choices made in the paper (such as the choice of distance function)\nis not well explained. Also, given that the modifications introduced are quite\nsimple, it can be improved by doing more thorough comparisons to other\nbaselines.\n\nClarity\nThe paper is easy to follow.\n\nOriginality\nThe novel aspect of the paper is the way distance is measured by converting the\nweights (and activations) to probability distributions and using KL divergence\nto measure distance. However, it is not explained what motivated this choice.\n\nSignificance\nThe objective of this model is to produce representations that are separable, which\nis of general interest. However, given the wealth of previous work done in\nclustering, this paper would only be impactful if it compares to other hard\nbaselines and shows clear advantages.\n\n[1] van der Maaten, Laurens and Hinton, Geoffrey. Visualizing\ndata using t-SNE. JMLR, 2008.\n\n[2] van der Maaten, Laurens. Learning a parametric embedding\nby preserving local structure. In International Conference\non Artificial Intelligence and Statistics, 2009.\n\n[3] Junyuan Xie, Ross Girshick, and Ali Farhadi. Unsupervised deep embedding for\nclustering analysis. ICML 2016.", "The paper proposes techniques for encouraging neural network representations to be more useful for clustering tasks. 
The paper contains some interesting experimental results, but unfortunately lacks concise motivation and description of the method and quality of writing.\n\nIntroduction:\nThe introduction is supposed to present the problem and the 'chain of events' that led to this present work, but does not do that. The first paragraph contains a too length explanation that in a classification task, representations are only concerned about being helpful for this task, and not any other task. The paragraph starting with 'Consider the case...', describes in detailed some specific neural network architecture, and what will happen in this architecture during training. The main problem with this paragraph is that it does not belong in the introduction. Indeed, other parts of the introduction have no relation to this paragraph, and the first part of the text that related to this paragraph appears suddenly in Section 3. The fact that this paragraph is two thirds of the introduction text, this is very peculiar.\n\nFurthermore, the introduction does not present the problem well: \n1) What does is a better representation for a clustering task?\n2) Why is that important?\n\nMethod:\nThere are a few problematic statements in this part:\n\"The first loss component L_single works on a single layer and does not affect the other layers in the network\". This is not exactly true, because it affect the layer it's related to, which affect upper layers through their feedforward input or bottom layer through the backward pass. \n\"Recall from the example in the introduction that we want to force the model to produce divergent representations for the samples that belong to the same class, but are in fact substantively different from each other\". It is not clear why this is a corollary of the example in the introduction (that should be moved to the method part). \n\"this loss component may help to learn a better representation only if the input to the target layer still contains the information about latent characteristics of the input data\". What does this mean? The representation always contains such information, that is relevant to the task at hand...\nAnd others. The main problem is that the work is poorly explained: starting from the task at hand, through the intuition behind the idea how to solve it. \n\nThe experiments parts contains results that show that the proposed method is superior by a substantial margin over the baseline approaches. However, the evaluation metrics and procedure are poorly explained; What are Adjusted Mutual Information (AMI) and Normalized Mutual Information (NMI)? How are they calculated? Or at least, the mutual information between what and what are they measuring?\n\n\n", "We thank the reviewers, particularly reviewers #1 and #2, for their detailed and constructive comments.\n\nTo reiterate, in this paper, our goal was to learn a representation suitable for clustering in the absence of exhaustive labels that would allow the model to learn this representation explicitly from supervision while solving a classification task. 
We proposed two novel loss components that substantially improve the quality of produced clusters, are simple to apply to arbitrary models and cost functions, and do not require a complicated training procedure.\n\nThe main concerns raised by the reviewers related to the choice of the similarity measure in the proposed regularization method, and its relative value compared to simpler methods such as dot products or cosine similarity on weights or activations.\nWe performed additional experiments on MNIST strokes sequences and on DBPedia using additional baseline methods mentioned by the reviewers, including cosine similarity on weights, cosine similarity on activations, and orthogonality regularization on the weights. Please see the results in the table below. \n\nThere are two main takeaways from these experiments:\n 1. The proposed general approach of forcing apart weights/activation improves the quality of the clustering regardless of the particular similarity measure used.\n 2. While in all of these experiments, the quality of clusterization in terms of AMI and NMI was improved, the regularizations we proposed in the paper consistently performed the best. \n\n+--------------------------------------------------------------------+\n| | MNIST | DBPedia |\n+ +----------------------------------------+\n| | AMI | NMI | AMI | NMI |\n+--------------------------------------------------------------------+\n| W^TW | 0.523 | 0.530 | 0.350 | 0.366 |\n+--------------------------------------------------------------------+\n| Weights cosine | 0.520 | 0.527 | 0.385 | 0.412 |\n+--------------------------------------------------------------------+\n| Activations cosine | 0.516 | 0.550 | 0.384 | 0.449 |\n+--------------------------------------------------------------------+\n| Proposed | 0.544 | 0.553 | 0.529 | 0.533 |\n+--------------------------------------------------------------------+", "Q. Have you compared with other choices of similarity measure, e.g., cosine similarity?\nA. We performed additional experiments on MNIST strokes sequences dataset and on the DBPedia dataset using the cosine similarity on weights, cosine similarity on activations, and orthogonality regularization on the weights. All of these improved the quality of clusterization in terms of AMI and NMI, however, the regularizations we propose in this paper perform the best (see the results table in the general response section above). We hypothesize that the reason for this is that the proposed method involves non-linear relationships between the change in the weights and the corresponding loss, whereas, for example, cosine similarity does not.\n\nQ. In the binary classification experiments, it is very strange to almost randomly group several different classes of images into the same category...\nA. Our goal was to look at the case where labeled classes are composed from different “types” (sub-classes) of objects. In a sense, this is a hierarchical classification, where only the labels of the first level is accessible to the network. However, we agree that additional experiments using a proper hierarchical dataset would also be informative, and we will include them.\n\nQ. I am curious how the proposed method compares to other competitors in terms of the original classification setting.\nA. We compared the classification accuracy on the binary classification task and there was no effect (or a negligible effect) of the proposed method on the classification accuracy. \n\nQ. 
What will happen for the multi-layer loss if the network architecture is very large such that you can not use large batch size, e.g., less than 10?\nA: Since the batches are usually formed randomly, chances are there will be samples with the same label (this is the always the case in the batch of 3 or more) and different underlying groups. Since the training is repeated for many batches, this should not be an issue. \n\nQ. In drawing figure 2 and 3, if the nonlinear activation function is not ReLU, how would you exam the same behavior? \nA: Actually, we experimented with different activation functions (ReLU, sigmoid, tanh) and they all had the same behaviour. We will mention this in the paper.\n\nQ. Have you tried multi-class classification for the case “without proposed loss component” and does the similar pattern still happen or not?\nA. We have not tried this. We expect the same behaviour as it was general for such diverse architectures and tasks, and the binary classification is just a particular case of a multi-class classification. We will, however, perform additional experiments to verify it empirically. \n", "Q. Why is softmax applied to weight vectors and ReLU activations to convert them to probability distributions, as opposed to, for example, using dot products or other simpler alternatives to measure similarity?\n\nA. We chose the proposed measure based on an intuition that its non-linear nature would be better suitable in this case as opposed to, for example, dot product. In the general response section above, we report additional experiments on the MNIST strokes sequences and on DBPedia using the W^TW regularization, as well as cosine similarity on weights and cosine similarity on activations. All of these improved the quality of clusterization in terms of AMI and NMI, however, the regularizations we propose in the paper perform the best. Interestingly, the general approach of forcing apart weights/activation improves the quality of the clustering regardless of the particular similarity measure used.\n\nQ. The objective of this paper seems to be to produce representations that are\neasy to separate into clusters. This topic has a wealth of previous work. Of\nparticular relevance are methods such as t-SNE [1], parametric t-SNE [2], and\nDEC [3]. \nA. Note that the goal of the proposed method is quite different from the objective of t-SNE, which is pure dimensionality reduction. Our goal, in contrast, is to learn a representation suitable for clustering in the absence of exhaustive labels that would allow the model to learn this representation explicitly from supervision while solving a classification task. Essentially, we are trying to address a different task, even though it also relates to recovering latent structure in the data.\n\nQ. Disentangling usually refers to disentangling factors of variation. This paper seems to be about learning separable representations.\nA. Thank you for catching that, we were actually aware of this, and have planned to change the title to avoid the confusion. \n", "Most of the comments in this review seem to stem from the reviewer’s issues with presentation and writing clarity, rather than the substantive proposal in this paper. While it seems that the other reviewers found the presentation to be clear, we will make an earnest attempt to address the concerns with the presentation flow raised in this review.\n\nQ. What are Adjusted Mutual Information (AMI) and Normalized Mutual Information (NMI)? How are they calculated? 
Or at least, the mutual information between what and what are they measuring?\nA. MI-based measures we use to evaluate clustering solutions are quite standard, and we give a reference to the paper that explains them in detail. However, we would be happy to include definitions in the camera-ready version of the paper.\n\nQ. “this loss component may help to learn a better representation only if the input to the target layer still contains the information about latent characteristics of the input data\" What does this mean? The representation always contains such information, that is relevant to the task at hand…\nA. As we explain in the introduction, for example, if the target task is binary classification, the representation learned by the network may only contain the information relevant to that task. As a result, clustering the data according to other latent characteristics may be impossible, since they might not be captured in this representation.\n" ]
[ 5, 5, 4, -1, -1, -1, -1 ]
[ 5, 4, 4, -1, -1, -1, -1 ]
[ "iclr_2018_S17mtzbRb", "iclr_2018_S17mtzbRb", "iclr_2018_S17mtzbRb", "iclr_2018_S17mtzbRb", "BkH22ZFxG", "HyDv0CYgz", "r1qbYHogz" ]
iclr_2018_SyYYPdg0-
Counterfactual Image Networks
We capitalize on the natural compositional structure of images in order to learn object segmentation from weakly labeled images. The intuition behind our approach is that removing objects from images will yield natural images, whereas removing random patches will yield unnatural images. We leverage this signal to develop a generative model that decomposes an image into layers and, when all layers are combined, reconstructs the input image. However, when a layer is removed, the model learns to produce a different image that still looks natural to an adversary, which is possible by removing objects. Experiments and visualizations suggest that, on images labeled only by scene, this model automatically learns object segmentation better than baselines.
rejected-papers
All reviewers acknowledge that the idea of the paper is interesting but have expressed serious concerns about the empirical evaluations. The paper is not suitable for publication in its current form.
train
[ "HJSdXVqxG", "SyqB7xHlz", "rkFHVe5lf", "ry6MYLa7z" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author" ]
[ "This paper creates a layered representation in order to better learn segmentation from unlabeled images. It is well motivated, as Fig. 1 clearly shows the idea that if the segmentation was removed properly, the result would still be a natural image. However, the method itself as described in the paper leaves many questions about whether they can achieve the proposed goal.\n\nI cannot see from the formulation why would this model work as it is advertised. The formulation (3-4) looks like a standard GAN, with some twist about measuring the GAN loss in the z space (this has been used in e.g. PPGN and CVAE-GAN). I don't see any term that would guarantee:\n\n1) Each layer is a natural image. This was advertised in the paper, but the loss function is only on the final product G_K. The way it is written in the paper, the result of each layer does not need to go through a discriminator. Nothing seems to have been done to ensure that each layer outputs a natural image.\n\n2) None of the layers is degenerate. There does not seem to be any constraint either regularizing the content in each layer, or preventing any layer to be non-degenerate.\n\n3) The mask being contiguous. I don't see any term ensuring the mask being contiguous, I imagine normally without such terms doing such kinds of optimization would lead to a lot of fragmented small areas being considered as the mask.\n\nThe claim that this paper is for unsupervised semantic segmentation is overblown. A major problem is that when conducting experiments, all the images seem to be taken from a single category, this implicitly uses the label information of the category. In that regard, this cannot be viewed as an unsupervised algorithm.\n\nEven with that, the results definitely looked too good to be true. I have a really difficult time believing why such a standard GAN optimization would not generate any of the aforementioned artifacts and would perform exactly as the authors advertised. Even if it does work as advertised, the utilization of implicit labels would make it subject to comparisons with a lot of weakly-supervised learning papers with far better results than shown in this paper. Hence I am pretty sure that this is not up to the standards of ICLR.\n\nI have read the rebuttal and still not convinced. I don't think the authors managed to convince me that this method would work the way it's advertised. I also agree with Reviewer 2 that there is a lack of comparison against baselines.", "Paper summary: The paper proposes a generative model that decomposes images into multiple layers. The proposed approach is GAN-based, where the objective of the GAN is to distinguish real images from images formed by combining the layers. Some of the layers correspond to objects that are common in specific scene categories. The method has been tested on kitchen and bedroom scenes.\n\nPaper Strengths:\n+ The idea of the paper is interesting.\n+ The learned masks for objects are neat.\n+ The proposed method outperforms a number of simple baselines.\n\nPaper Weaknesses:\n\n- The evaluation of the model is not great: (1) It would be interesting to combine bedroom and kitchen images and train jointly to see what it learns. (2) It would be good to see how the performance changes for different number of layers. 
(3) Regarding the fine-tuning baselines, the comparison is a bit unfair since the proposed method performs pooling over images, while the baseline (average mask) is not translation invariant.\n\n- It is unclear why \"contiguous\" masks are generated (e.g., in figure 4). Is there any constraint in the optimization? This should be explained in the rebuttal. \n\n- The method should not be called \"unsupervised\" since it knows the label for the scene category. Also, it should not be called \"semantic segmentation\" since there is no semantics associated to the object. It is just a binary foreground/background mask.\n\n- The plots in Figure 5 are a bit strange. The precision increases uniformly as the recall goes up, which is weird. It should be explained in the rebuttal why that happens.\n\n- Similar to most GAN-based models, the generated images are not that appealing.\n\n- The claim about object removal should be toned down. The method is not able to remove any object from a scene. Only, the learned layers can be removed.\n", "This paper proposes a neural network architecture around the idea of layered scene composition. Training is cast in the generative adversarial framework; a subnetwork is reused to generate and compose (via an output mask) multiple image layers; the resulting image is fed to a discriminator. An encoder is later trained to map real images into the space of latent codes for the generator, allowing the system to be applied to real image segmentation tasks.\n\nThe idea is interesting and different from established approaches to segmentation. Visualization of learned layers for several scene types (Figures 3, 7) shows that the network does learn a reasonable compositional scene model.\n\nExperiments evaluate the ability to port the model learned in an unsupervised manner to semantic segmentation tasks, using a limited amount of supervision for the end task. However, the included experiments are not nearly sufficient to establish the effectiveness of the proposed method. Only two scene types (bedroom, kitchen) and four object classes (bed, window, appliance, counter) are used for evaluation. This is far below the norm for semantic segmentation work in computer vision. How does the method work on established semantic segmentation datasets with many classes, such as PASCAL? Even the ADE20K dataset, from which this paper samples, is substantially larger and has an established benchmarking methodology (see http://placeschallenge.csail.mit.edu/).\n\nAn additional problem is that performance is not compared to any external prior work. Only simple baselines (eg autoencoder, kmeans) implemented by this paper are included. The range of prior work on semantic segmentation is extensive. How well does the approach compare to supervised CNNs on an established segmentation task? Note that the proposed method need not necessarily outperform supervised approaches, but the reader should be provided with some idea of the size of the gap between this unsupervised method and the state-of-the-art supervised approach.\n\nIn summary, the proposed method may be promising, but far more experiments are needed.\n", "Thank you for reading our paper! We are glad reviewers found our paper to be interesting. \n\nSemantic segmentation models typically require large amounts of manually labeled images, which is expensive to collect. We instead develop a new method for learning to segment images without dense supervision. 
We propose a principle of “object removability” that we capitalize on to learn to segment images with unlabeled data. We believe this paper will have wide interest at the conference because it proposes a new signal for learning with weakly labeled data. \n\nAnonReviewer1\n\n“Each layer is a natural image.” We wish to clarify a misunderstanding. The loss function is on the final product G_K, but the final product at each iteration is formed by a randomly selected and permuted subset of the foreground layers. This is how we encode the idea of object removability; the generator must learn a set of foreground layers such that even with only a random subset of the layers included, G_K looks natural. Motivated by the example in Figure 1, we hypothesize that this will constrain the layers to learn a natural semantic representation (which our experiments suggest happens).\n\n“None of the layers is degenerate.” Our model learns layers that generally represent an object category because objects can be removed from images. However, other features besides objects can be removed (such as lighting) and still produce a natural looking image. This happens in our model as well. For example, Figure 7 (bottom right image) shows that layer 2 has learned a blue lighting. We modified the text to discuss this. \n\n“The mask being contiguous.” We do not require that the mask is contiguous, which is a strength of our method. For example, in Figure 7: Layer 2 on the bottom right and Layer 3 on the other three learn a non-contiguous mask because windows are not contiguous. On Layer 3 of the top right image, the mask learns several window-like objects, which it would not be able to do if we enforced a continuous constraint. \n\n“The paper is not unsupervised.” We have modified the paper to tone down this claim. \n\nAnonReviewer2\n\nEvaluating on many scene classes: A limitation of GANs is that the generated images have limited variability. Our method uses a similar number of categories as state-of-the-art GANs. Scaling up GANs is orthogonal to this paper and out-of-scope. We believe the evaluation is suitable to show the efficacy of our approach. \n\nComparison to prior work: Thank you for this suggestion. We do train and show how we compare to a vanilla supervised CNN in Table 1 under the Random Init row.\n\nAnonReviewer3\n\nEvaluation of model. We could combine both the kitchen and bedroom images and jointly train, however current GAN objectives have a problem of mode collapse and are not able to capture the full variability of the dataset. Correcting mode collapse is out-of-scope.\n\nThere is no constraint in the model to enforce contiguous masks. See above in the response to AnonReviewer1.\n\nWe have modified the paper to tone down the claim that we can do unsupervised segmentation, and clarified that we do not attach semantics to the object. We also have modified the paper to tone down the object removal claim\n\nIn Figure 5, we plot how the precision-recall curves for each layer compare to randomly permuting pixels. We see that, as an example in (a), one layer has high precision in the low recall regions, suggesting that the pixels that were given the highest weights capture the “window” object well. When all the other layers are evaluated on the “window” object, they do poorly, with some masks doing worse than random on the low recall regions, suggesting that the pixels activated in the window layer tend not to be activated in the other layers." ]
[ 4, 5, 4, -1 ]
[ 4, 4, 4, -1 ]
[ "iclr_2018_SyYYPdg0-", "iclr_2018_SyYYPdg0-", "iclr_2018_SyYYPdg0-", "iclr_2018_SyYYPdg0-" ]
iclr_2018_rJTGkKxAZ
Learning Generative Models with Locally Disentangled Latent Factors
One of the most successful techniques in generative models has been decomposing a complicated generation task into a series of simpler generation tasks. For example, generating an image at a low resolution and then learning to refine that into a high resolution image often improves results substantially. Here we explore a novel strategy for decomposing generation for complicated objects in which we first generate latent variables which describe a subset of the observed variables, and then map from these latent variables to the observed space. We show that this allows us to achieve decoupled training of complicated generative models and present both theoretical and experimental results supporting the benefit of such an approach.
rejected-papers
Reviewers recognize the proposed hierarchical extension to ALI as potentially novel and interesting, but have expressed strong concerns about the experiments section. The paper also needs comparisons with relevant hierarchical generative model baselines. Not suitable for publication in its current form.
val
[ "H13up9Klf", "ByKf-8jlG", "HypMNiy-G", "Sy_g2vaQG", "HkTqLPaQz", "r1F-NvpQM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "This paper proposed a method called Locally Disentangled Factors for hierarchical latent variable generative model, which can be seen as a hierarchical variant of Adversarially Learned Inference (Dumoulin el atl. 2017). The idea seems to be a valid variant, however, the quality of the paper is not good. The introduction and related works sections read well, but the rest of the paper has not been written well. More specifically, the content in section 3 and experiment section is messy. Also the experiments have not been conducted thoroughly, and the results and the interpretation of the results are not complete.\n\nIntroduction:\nAlthough in introduction the author discussed a lot of works on hierarchical latent variable model and some motivating examples, after reading it the reviewer has absolutely no idea what the paper is about (except hierarchical latent variable model), what is the motivation, what is the general idea, what is the contribution of the paper. Only after carefully reading the detailed implementation in section 3.1 and section 5, did I realize that what the authors are actually doing is to use N variables to model N different parts of the observation, and one higher level variable to model the N variables. The paper should really more precisly state what the idea is throughout the paper, instead of causing confusion and ambiguity.\n\nSection 3:\n1. The concepts of \"disentanglment\" and \"local connectivity\" are really unnecessary and confusing. First, the whole paper and experiments has nothing to do with \"local connectivity\". Even though you might have the intention to propose the idea, you didn't show any support for the idea. Second, what you actually did is to use top level variable to generate N latent variables. That could hardly called \"disentanglement\". The mean field factorization in (Kingma & Welling 2013) is on the inference side (Q not P), and as found out in literature, it could not achieve disentanglement.\n\n2. In section 3.2, I understand that you want to say the hierarchical model may require less data sample. But, here you are really off-topic. It would be much better if you can relate to the proposed method, and state how it may require less data.\n\n3. Section 3.1 is more important, and is really major part of your method. Therefore, it need more extensive discussion and emphasis.\n\nExperiment:\nThis section is really bad.\n1. Since in the introduction and related works, there are already so many hierarchical latent variable model listed, the baseline methods should really not just vanilla GAN, but hierarchical latent variable models, such as the Hierachical VAE, Variational Ladder Autoencoder in (Zhao et al. 2017), ALI (not hierarchical, but should be a baseline) in (Dumoulin et al. 2017), etc.\n\n2. Since currently there is still no standard way to evaluate the quality of image generation, by giving only inception score, we can really not judge whether it is good or not. You need to give more metrics, or generation examples, recontruction examples, and so on. And equally importantly, compare and discuss about the results. Not just leave it there.\n\n3. For section 5.2, similar problems as above exist. Baseline methods might be insufficient. The paper only shows several examples, and the reviewer cannot draw any conclusion about it. Nor does the paper discuss any of the results.\n\n4. Section 5.3, second to the last line, typo: \"This is shown in 7\". Also this result is not available. \n\n5. 
More importantly, some experiments should be conducted to explicitly show the validity of the proposed hierarchical latent model idea. Show that it exists and works by some experiment explicitly.\n\nAnother suggestion the review would like to make is that, instead of proposing the general framework in section 2, it would be better to propose the hierarchical model in the context of section 3.1. That is, instead of saying z_0 -> z_1 ->... ->x, what the paper and experiment is really about is z_0 -> z_{1,1}, z_{1,2} ... z_{1,N} -> x_{1}, x_{2},...,x_{N}, where z_{1,1...N} are distinct variables. How section 2 is related to the learning of this might be concatenating these N distinct variables into one (if that's what you mean). Talking about the joint distribution and inference process in this way might more align with your idea. Also, the paper actually only deals with 2 level. It seems to me that it's meaningless to generalize to n levels in section 2, since you do not have any support of it.\n\nIn conclusion, the reviewer thinks that this work is incomplete and does not worth publishing with its current quality.\n==============================================================\nThe reviewer read the response from the authors. However, I do not think the authors resolved the issues I mentioned. And I am still not convinced by the quality of the paper. I would say the idea is not bad, but the paper is still not well-prepared. So I do not change my decision.\n", "The paper investigates the potential of hierarchical latent variable models for generating images and image sequences. The paper relies on the ALI model from [Dumoulin et al, ICLR'16] as the main building block. The main innovation in the paper is to propose to train several ALI models stacked on top of each other to create a hierarchical representation of the data. The proposed hierarchical model is trained in stages. First stage is an original ALI model as in [Dumoulin et al]. Each subsequent stage is constructed by using the Z variables from the previous stage as the target data to be generated.\n\nThe paper constructs models for generatation of images and image sequences. The model for images is a 2-level ALI. The first level is similar to PatchGAN from [1] but is trained as an ALI model. The second layer is another ALI that is trained to generate latent variables from the first layer. \n\n[1] Isola et al. Image-to-Image Translation with Conditional Adversarial Networks, CVPR'17 \n\nIn the the model for image sequences the hierarchy is somewhat different. The top layer is directly generating images and not patches as in the image-generating model.\n\nSummary: I think this paper presents a direct and somewhat straightforward extension of ALI. Therefore the novelty is limited. I think the paper would be stronger if it (1) demonstrated improvements when compared to ALI and (2) showed advantages of hierarchical training on other datasets, not just the somewhat simple datasets like CIFAR and Pacman. \n\nOther comments / questions: \n\n- baseline should probably be 1-level ALI from [Dumoulin et al.]. I believe in the moment the baseline is a standard GAN.\n\n- I think the paper would be stronger if it directly reproduced the experiments from [Dumoulin et al.] and showed how hierarchy compares to standard ALI without hierarchy. \n\n- the reference Isola et al. [1] should ideally be cited since the model for image genration is similar to PatchGAN in [1]\n\n- Why is the video model in this paper not directly extending the image model? 
Is it due to limitation of the implementation or direclty extending the iamge model didn't work? \n", "Training GAN in a hierarchical optimization schedule shows promising performance recently (e.g. Zhao et al., 2016). However, these works utilize the prior knowledge of the data (e.g. image) and it's hard to generalize it to other data types (e.g. text). The paper aims to learn these hierarchies directly instead of designing by human. However, several parts are missing and not well-explained. Also, many claims in paper are not proved properly by theory results or empirical results. \n\n(1) It is not clear to me how to train the proposed algorithm. My understanding is train a simple ALI, then using the learned latent as the input and train the new layer. Do the authors use a separate training ? or a joint training algorithms. The authors should provide a more clear and rigorous objective function. It would be even better to have a pseudo code. \n\n(2) In abstract, the authors claim the theoretical results are provided. I am not sure whether it is sec 3.2 The claims is not clear and limited. For example, what's the theory statement of [Johnsone 200; Baik 2005]. What is the error measure used in the paper? For different error, the matrix concentration bound might be different. Also, the union bound discussed in sec 3.2 is also problematic. Lats, for using simple standard GAN to learn mixture of Gaussian, the rigorous theory result doesn't seem easy (e.g. [1]) The author should strive for this results if they want to claim any theory guarantee. \n\n(3) The experiments part is not complete. The experiment settings are not described clearly. Therefore, it is hard to justify whether the proposed algorithm is really useful based on Fig 3. Also, the authors claims it is applicable to text data in Section 1, this part is missing in the experiment. Also, the idea of \"local\" disentangled LV is not well justified to be useful.\n\n[1] On the limitations of first order approximation in GAN dynamics, ICLR 2018 under review\n", "Thank you for your interesting response. \n\n\"1. The concepts of \"disentanglment\" and \"local connectivity\" are really unnecessary and confusing. First, the whole paper and experiments has nothing to do with \"local connectivity\". Even though you might have the intention to propose the idea, you didn't show any support for the idea. Second, what you actually did is to use top level variable to generate N latent variables. That could hardly called \"disentanglement\". The mean field factorization in (Kingma & Welling 2013) is on the inference side (Q not P), and as found out in literature, it could not achieve disentanglement.\"\n\nSo ultimately our goal was to learn a local representation for a part of the example which simplifies its structure as much as possible while having a 1:1 mapping with raw data for that part of the example. One can imagine specific types of data for which this should be possible. I think that if the disentanglement isn't perfect, it just lowers the potential benefit of our model, but it could still help. \n\n\"2. Since currently there is still no standard way to evaluate the quality of image generation, by giving only inception score, we can really not judge whether it is good or not. You need to give more metrics, or generation examples, recontruction examples, and so on. And equally importantly, compare and discuss about the results. Not just leave it there.\"\n\nI think that Inception scores, perhaps along with FID, are reasonable to use. 
However we agree that we definitely need to have a stronger baseline. \n\nHowever I do think that showing faster convergence here is a compelling result. ", "\"- baseline should probably be 1-level ALI from [Dumoulin et al.]. I believe in the moment the baseline is a standard GAN.\"\n\nThis is a fair point, although ALI did not dramatically outperform the standard GAN in terms of generation quality, for example, in terms of inception score. \n\n\"- the reference Isola et al. [1] should ideally be cited since the model for image genration is similar to PatchGAN in [1]\"\n\nThat's a fair point. PatchGAN is different from our approach, but would serve as a reasonable baseline. \n\n\"Summary: I think this paper presents a direct and somewhat straightforward extension of ALI. Therefore the novelty is limited.\"\n\nI don't agree with this. Learning generative models which learn joints over larger and more complex objects is an important direction. For example, learning a joint distribution over a complete day of video or audio data. With standard approaches, this quickly becomes computationally intractable. Only a few approaches have been proposed to deal with this issue. To our knowledge, synthetic gradients and UORO are the most prominent. The Locally Disentangled Factors approach, while still in its infancy, could be an important method in this area. ", "\"(1) It is not clear to me how to train the proposed algorithm. My understanding is train a simple ALI, then using the learned latent as the input and train the new layer. Do the authors use a separate training ? or a joint training algorithms. The authors should provide a more clear and rigorous objective function. It would be even better to have a pseudo code. \"\n\nOur method uses a separate decoupled training objective which trains the higher level module after the lower level has finished training. We agree that having pseudocode could make this clearer. \n\n\"(3) The experiments part is not complete. The experiment settings are not described clearly. Therefore, it is hard to justify whether the proposed algorithm is really useful based on Fig 3. \"\n\nThe main goal of our experiments is to show that exploiting the decoupling from locally disentangled factors can allow for faster training and higher capacity models. Our inception scores on MNIST provide some evidence for the former and our video generation results provide some evidence for the latter. \n\n\"Also, the idea of \"local\" disentangled LV is not well justified to be useful.\"\n\nIf the data generating process actually uses locally disentangled factors, then I think the benefit is fairly apparent, in that the complexity of the learning task is greatly simplified. Whether this actually occurs in practice is an interesting open question. " ]
[ 4, 6, 3, -1, -1, -1 ]
[ 4, 3, 4, -1, -1, -1 ]
[ "iclr_2018_rJTGkKxAZ", "iclr_2018_rJTGkKxAZ", "iclr_2018_rJTGkKxAZ", "H13up9Klf", "ByKf-8jlG", "HypMNiy-G" ]
iclr_2018_rJL6pz-CZ
Transfer Learning on Manifolds via Learned Transport Operators
Within-class variation in a high-dimensional dataset can be modeled as being on a low-dimensional manifold due to the constraints of the physical processes producing that variation (e.g., translation, illumination, etc.). We desire a method for learning a representation of the manifolds induced by identity-preserving transformations that can be used to increase robustness, reduce the training burden, and encourage interpretability in machine learning tasks. In particular, what is needed is a representation of the transformation manifold that can robustly capture the shape of the manifold from the input data, generate new points on the manifold, and extend transformations outside of the training domain without significantly increasing the error. Previous work has proposed algorithms to efficiently learn analytic operators (called transport operators) that define the process of transporting one data point on a manifold to another. The main contribution of this paper is to define two transfer learning methods that use this generative manifold representation to learn natural transformations and incorporate them into new data. The first method uses this representation in a novel randomized approach to transfer learning that employs the learned generative model to map out unseen regions of the data space. The second method uses transport operators to inject specific transformations into new data examples, which allows for realistic image animation and informed data augmentation. These results are shown on stylized constructions using the classic swiss roll data structure and in demonstrations of transfer learning in a data augmentation task for few-shot image classification.
rejected-papers
Learning identity-preserving transformations from unlabeled data is definitely an important and useful direction. However, the paper does not have convincing experiments establishing the effectiveness of the proposed method on real datasets, which is a crucial limitation in my view, given that the paper is largely based on earlier published work by Culpepper and Olshausen (2009).
train
[ "rypjqWBlf", "ryjTQZ9xz", "HklkGPeeG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper propose to learn manifold transport operators via a dictionary learning framework that alternatively optimize a dictionary of transformations and coefficients defining the transformation between random pairs of data points. Experiments on the swiss roll and synthetic rotated images on USPS digits show that the proposed method could learn useful transformations on the data manifold.\n\nHowever, the experiments in the paper is weak. As the paper mentioned, manifold learning algorithms tend to be quite sensitive to the quality of data, usually requiring dense data at each local neighborhood to successfully learn the manifold well. However, this paper, claiming to be learn more rubust representations, lacks solid supporting experiments. The swiss roll is a very simple synthetic dataset. The USPS is also simple, and the manifold learning is performed on synthetic (rotated) USPS digits with only 1 manifold dimension. I would recommend testing the proposed algorithm on more complicated datasets (e.g. Imagenet or even CIFAR images) to see how well it performs in practice, in order to provide stronger empirical supports for the proposed method. At the current state, I don't think it is good for publishing at ICLR.\n\n=========================\nPost-rebuttal comments\n\nThanks for the updates of the paper and added experiments. I think the paper has improved over the previous version and I have updated my score.", "Summary:\n\nThe paper considers the framework of manifold transport operator learning of Culpepper and Olshausen (2009), and interpret it as obtaining a MAP estimate under a probabilistic generative model. Motivated by this interpretation, the authors propose a new similarity metric between data points, which leads to a new manifold embedding method. This also leads the authors to propose a new transfer learning mechanism that can lead to improvements in classification accuracy.\nSome representative simulation results are included to demonstrate the efficacy of the proposed methods.\n\nMain comments:\n\nThis direction is interesting. But unfortunately the paper is confusingly written and several points are never made clear. The conveyed impression is that the proposed methods are mainly incremental additions to the framework of Culpepper and Olshausen.\n\nIt would be far more helpful if the authors would have clearly described the following in more detail:\n- The new manifold embedding algorithm in Section 2 -- a proper explanation of the similarity measure, what the role of the MSE is in this algorithm, how to choose the parameters gamma and zeta etc.\n- Why the authors claim that this method is more robust than other classical manifold learning methods. There certainly seems to be some robustness improvement over Isomap -- but this is a somewhat weak strawman since Isomap is notoriously prone to improper neighborhood selection.\n- Why the transport operator viewpoint is an improvement over other out-of-sample approaches in manifold learning.\n- Why the data augmentation using learned transport operators would be more beneficial than augmentation using other mechanisms (manual rotations, other generative models).\n\netc.\n\n\nOther comments/questions:\n\n- Bit confused about the experiment for Figure 1. Why set gamma = 0? Also, you seem to be fixing the number of dictionary elements to two (suggesting an ell-0 constraint), but also impose an ell-1 constraint. Why both?\n- From what distribution are the random coefficients governing the transport operators drawn (uniform? 
gaussian?) how to choose the anchor points?\n- The experiment in USPS digits is somewhat confusing. Rotations are easy to generate, so the \"true rotation\" curve is probably the easiest to implement and also the best performing -- so why go through the transport operator training process at all? In any case, I would be careful to not draw too many conclusions from a single experiment on MNIST.\n\n================\n\nPost-rebuttal comments:\n\nThanks for the response. Still not convinced, unfortunately. I would go back to the classification example: it is unclear what the benefits of the transport operator viewpoint is over simply augmenting the dataset using rotations (or \"true rotations\" as you call them), or translations, or some other well-known parametric family. Even for the faces dataset, it seems that the transformations to model \"happiness\" or \"sadness\" are fairly simple to model and one does not need to solve a complicated sparse regression problem to guess the basis elements. Consider fleshing this angle out a bit more in detail with some more compelling evidence (perhaps test on a bigger/more complex dataset?). \n", "Overview:\nThe paper aim to model non-linear, intrinsically low-dimensional structure, in data by estimating \"transport operators\" that predict how points move along the manifold. This is an old idea, and the stated contribution of the paper is:\n\"The main contribution of this paper is to show that the manifold representation learned in the transport operators is valuable both as a probabilistic model to improve general machine learning tasks as well as for performing transfer learning in classification tasks.\" \nThe paper provide nice illustrative experiments arguing why transport operators may be a useful modeling tool, but does not go beyond illustrative experiments.\nWhile I follow the intuitions behind transport operators I am doubtful if they will generalize beyond very simple manifold structures (see detailed comments below).\n\nQuality:\nThe paper is well-written and fairly easy to follow. In particular, I appreciate that the authors make no attempt to overclaim contributions. From a methodology point-of-view, the paper has limited novelty (transport operators, and learning thereof has been studied elsewhere), but there are some technical insights (likelihood model, use in data augmentation). Since the provided experiments are mostly illustrations, I would argue that the significance of the paper is limited. I'd say that to really convince a broader audience that this old idea is worth revisiting, the work must go beyond illustrations and apply to a real data problem.\n\nDetailed Comments and Questions:\n*) Equation 1 of the paper describe the key dynamics of the applied transport operators. Basically, the paper assume that the underlying data manifold is locally governed by a linear differential equation. This is a very suitable assumption, e.g., for the swiss roll data set, but it is unclear to this reader why it is a suitable assumption beyond such toy data. I would very much appreciate a detailed discussion of when this is a suitable modeling choice, and when it is not. My intuition is that this is mostly a suitable model when the data manifold appears due to simple transformations (e.g. rotations) of data. This is also exactly the type of data considered in the paper.\n*) In Eq. 3, should it be \"expm\" instead of \"exp\" ?\n*) The first two paragraphs of Sec. 
2 are background material, whereas paragraph 3 and beyond describe material that is key to the paper. I would recommend introducing a \\subsection (or something like it) to make this clearer.\n*) The idea of working with transformations of data rather than the actual data is the cornerstone of Ulf Grenander's renowned \"Pattern Theory\". A citation to this seminal work would be appropriate.\n*) In the first paragraph of the introduction, links are drawn to the neuroscience literature; it would be appropriate to cite a suitable publication.\n\nPros(+) & Cons(-):\n+ Well-written.\n+ Good illustrative experiments.\n- Real-life experiments are lacking.\n- Limited methodology contribution.\n- The assumed dynamics might be too simplistic (at least a discussion of this is missing).\n\nFor the AC:\nThe submitted paper acknowledges several grants (including grant numbers), which can directly be tied to the authors' identity. This may be a violation of the double blind review policy. I did not use this information to determine the authors' identity, though, so this review is still double blind.\n\nPost-rebuttal comments:\nThe paper has improved with the incorporated revisions, but my main concerns remain. I find the Swiss Roll / rotated-USPS examples to be too contrived as the dynamics are exactly tailored to the linear ODE assumption. These are examples where the model assumptions are perfect. What is unclear is how the model behaves when the linear ODE assumption is not-quite-correct-but-also-not-totally-incorrect, i.e. how the model behaves in real life. I didn't get that from the newly added experiment. So, I'll keep my rating as is. " ]
[ 5, 4, 4 ]
[ 4, 4, 4 ]
[ "iclr_2018_rJL6pz-CZ", "iclr_2018_rJL6pz-CZ", "iclr_2018_rJL6pz-CZ" ]
iclr_2018_HJaDJZ-0W
Block-Sparse Recurrent Neural Networks
Recurrent Neural Networks (RNNs) are used in state-of-the-art models in domains such as speech recognition, machine translation, and language modelling. Sparsity is a technique to reduce compute and memory requirements of deep learning models. Sparse RNNs are easier to deploy on devices and high-end server processors. Even though sparse operations need less compute and memory relative to their dense counterparts, the speed-up observed by using sparse operations is less than expected on different hardware platforms. In order to address this issue, we investigate two different approaches to induce block sparsity in RNNs: pruning blocks of weights in a layer and using group lasso regularization with pruning to create blocks of weights with zeros. Using these techniques, we can create block-sparse RNNs with sparsity ranging from 80% to 90% with a small loss in accuracy. This technique allows us to reduce the model size by roughly 10x. Additionally, we can prune a larger dense network to recover this loss in accuracy while maintaining high block sparsity and reducing the overall parameter count. Our technique works with a variety of block sizes up to 32x32. Block-sparse RNNs eliminate overheads related to data storage and irregular memory accesses while increasing hardware efficiency compared to unstructured sparsity.
rejected-papers
Pros -- Interesting approach to induce sparsity, trains faster than alternative approaches Cons -- Fairly complex set of heuristics for pruning weights -- Han et al. works well, although the authors claim it takes more time to train, which may not hold for all training sets and doesn’t seem like a strong enough reason to choose an alternative approach. Given these comments, the AC recommends that the paper be rejected.
train
[ "ByH5FWfgf", "rJWbr5zxM", "Hk8Ah7_eG", "Hkid-pzNf", "HJCr-6z4z", "rkyLQgkNM", "HktlUv6XM", "B19rTJ8GM", "S1QJz0Wfz", "rkegZRbMz", "r18hlCZzG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author" ]
[ "The authors propose a block sparsity pruning approach to compress RNNs. There are several ways. One is using group LASSO to promote sparsity. The other is to prune, but with a very specialized schedule as to the pruning and pruning weight, motivated by the work of Narang et al 2017 for non-group sparsity. The block sizes used in experiments are about 4x4, 8x8, up to 32 x 32. The relative performance degradation ranges between 10% to 96%, depending on the method, severity of compression, and task. The speedup for a matrix multiply is between 1.5x to 4x, and varies according to batch size.\n\nThis is certainly a well-motivated problem, and the procedure is simple but makes sense. Also, the paper contains a good overview of related work in compression, and is not hiding anything. One paper that is missing is\n\nScardapane, S., Comminiello, D., Hussain, A., & Uncini, A. (2017). Group sparse regularization for deep neural networks. Neurocomputing, 241, 81-89.\n\nA major complaint is the lack of comparison of results against other compression techniques. Since it is a block sparsity approach, and the caching / fetching overhead is reduced, one does not need to have competitively superior results to basic pruning approaches, but one should come close on the same types of problems. This is not well presented. Additionally, the speedup should be superior to the sparse methods, which is also not shown (against previously published results, not personally run experiments.) \n\nAnother issue I find is the general writing, especially for the results section, is not entirely clear. For example, when showing a relative performance degradation of 96%, why is that happening? Is that significant? What should an implementer be aware of in order to avoid that? \n\nFinally, a meta issue to address is, if the block size is small (realistically, less than 64 x 64) usually I doubt there will be significant speedup. (4x is not considered terribly significant.) What we need to see is what happens when, say, block size is 256 x 256? What is the performance degradation? If you can give 10x speedup in the feedforward part (testing only) then if you have a 10% degradation in performance that might be acceptable in certain applications. \n\nOverall, I believe this is a very promising and well-motivated work, but needs to be \"marinated\" further to be publishable. Actually, with the inclusion of 2-3 tables against known, previously published results, and clearly stated benefits, I would change my review to accept. \n\nMinor complaints:\n\nThe axis labels/numbers in figure 2 are too small. \n\nAlso, please reread for some grammar / writing issues (last paragraph of 1st page, caption of figure 2, for example)\n\nI think also figure 2 should be rerun with more trials. The noise in the curves are not showing a highly interpretable trend. (Though, actually, batch size = 8 being super low might have a significance; can this be explained?)\n\n", "Compressing/pruning of neural networks is required to enable running on devices with limited compute resources. While previous works have proposed to 0 out weights, especially for the case of RNNs, in an unstructured way, the current paper proposes to 0 out weights blocks at a time via thresholding. The process is further aided by utilizing group lasso regularization. The resulting networks are sparse, memory efficient and can be run more efficiently while resulting in minimal loss in accuracy when compared to networks learned with full density. 
The proposed techniques are evaluated on RNNs for speech recognition and benefits clearly spelled out. Further experiments thresh out how much benefit is provided by thresholding (block sparsity) and regularizing via group lasso.\n\nThe paper quality seems high, presentation clarity sufficient, the ideas presented (especially the use of group lasso) well thought out and original, and the work seems significant. If I were to nitpick then I would suggest trying out these techniques on RNNs meant for something other than speech recognition (machine translation perhaps?).", "Thanks to the authors for their response.\n\nThough the paper presents an interesting approach, but it relies heavily on heuristics (such as those mentioned in the initial review) without a thorough investigation of scenarios in which this might not work. Also, it might be helpful to investigate if there ways to better group the variables for group lasso regularization. The paper therefore needs further improvements towards following a more principled approach.\n\n=====================================\nThis paper presents methods for inducing sparsity in terms of blocks of weights in neural networks which aims to combine benefits of sparsity and faster access based on computing architectures. This is achieved by pruning blocks of weights and group lasso regularization. It is demonstrated empirically that model size can be reduced by upto 10 times with some loss in prediction accuracy.\n\nThough the paper presents some interesting evaluations on the impact of block based sparsity in RNNs, some of the shortcomings of the paper seem to be :\n\n- The approach taken consists of several heuristics rather than following a more principled approach such as taking the maximum of the weights in a block to represent that block and stop pruning till 40% training has been achieved. Also, the algorithm for computing the pruning threshold is based on a new set of hyper-parameters. It is not clear under what conditions the above settings will (not) work.\n\n - For the group lasso method, since there are many ways to group the variable, it is not clear how the variables are grouped. Is there a reasoning behind a particular grouping of the variables. Individually, group lasso does not seem to work, and gives much worse results. The reasons for worse performance could be investigated. It is possible that important weights are in different groups, and group sparsity is forcing some of them to be zero, and hence leading to worse results. It would be insightful to explain the kind of solver used for group lasso regularization, and if that works for large-scale problems.\n\n - The results for various kinds of sparsity are unclear in the sense that it is not clear how to set the block size a-priori for having minimum reduction in accuracy and still significant sparsity without having to repeat the process for various choices.\n\nOverall, the paper does not seem to present novel ideas, and is mainly focused on evaluating the impact of block-based sparsity instead of weight pruning by Han etal. As mentioned in Section 2, regularization has been used earlier to achieve sparsity in deep networks. In this view the significance over existing work is relatively narrow, and no explicit comparison with existing methods is provided. It is possible that an existing method leads to pruning method such as by Han etal. 
leads to 8x decrease in model size while retaining the accuracy, while the proposed method leads to 10x decrease while also decreasing the accuracy by 10%. Scenarios like these need to be evaluated to understand the impact of the method proposed in this paper.", "Continuing the response from the previous comment.\n\n3. Accuracy of Han et al. at 25 epochs\n\nThe work Han et. al involves pruning a pre-trained model and re-training it 2 or 3 times to achieve accuracy close to the original baseline. In our work, we prune a pre-trained speech recognition model (trained for 20 epochs) with varying sparsity (75%, 85% and 90%) and retrain it. The accuracy of these pruned models after 25 epochs are 10%, 23% and 30% respectively worse than the dense models. However, training the model for longer allows the model to recover the loss in accuracy. We decided to report the 60 epoch accuracy since this closely resembles the original work published in Han et al.  \n\nIn addition, the work by Yu et. al involves pruning the smallest k weights at a specific epoch during training. This is similar to the work by Han et. al. but it doesn't increase model training by using a hard threshold during the first training iteration. We demonstrate that our gradual block pruning approach outperforms this hard thresholding approach for fixed number of training epochs.\n\n4. Speedup figures\n\nThe speedup figures are run for 100,000 trials which is 10x more than the trials with the previous version of the paper. This should be sufficient to eliminate any noise in the performance benchmarking.\n\n5. Group lasso discussion\n\nWe don't broadly claim that block-pruning outperforms group lasso regularization in general. But we are confident in our results for these specific speech recognition RNN models. These baseline models were SOTA at the time of their publication and they don't employ any L1, L2, dropout, or dropconnect. These models don't benefit from such regularization strategies.\n \n\nSome of our analysis of this issue was omitted from the paper for brevity. For example, we analyzed the distribution of nonzero weight magnitudes in our baseline, block pruning (BP), group lasso (GL), and group-lasso-with-pruning (GLP) runs. Across baseline, BP, and GLP runs, the distributions looked similar. GL runs' distributions, conversely, were more concentrated near zero. This seem to indicate that group lasso reduces the weight magnitude of all the groups of weights in the network including non-zero ones. This led to our underfitting hypothesis for the group lasso runs. On the other hand, the block pruning approach allows the other blocks to remain unconstrained and weights can have large magnitude. This may be the reason why block pruning performs better than group lasso. Future work of the paper involves diagnosing the exact cause of poor accuracy with group lasso regularization. \n\n \nThanks again for taking the time to review our paper. ", "Thank you for the quick response to the new revision. Here are some thoughts and responses to your questions.\n\n1. Comparisons to other baselines\n \nHan et al.\n\nWe cite https://arxiv.org/abs/1510.00149. Both Han's approach and ours look to reduce neural network compute and memory requirements by producing sparsity in weight matrices, so we feel comparison is relevant. Han's approach offers greater accuracy at the expense of longer training time and other compute/memory drawbacks, as detailed in Table 6 in our paper. \n\nYu et al.\n\nThe work by Yu et al. 
focusses on pruning fully connected layers during training. It's similar to Han et. al. since they keep the top k weights at given point training. However, it doesn't increase the training time and uses a hard threshold instead of a gradual pruning approach (Narang et. al). Therefore, we added this comparison to a paper to give another datapoint with fixed number of epochs. We demonstrate that the our block pruning approach achieves better accuracy with fewer parameters than Yu et al.\n\n>>>>After skimming the Yu paper, it seems that they are also using block sparsity, but any comparison between this paper and Yu is limited to a performance metric in which Yu outperforms this paper, and no runtime or memory usage comparison is given\n\nIn their paper, Yu et al. propose a different data structure to store sparse weights, however, the sparsity enforced is still unstructured and random. Their new data structure (described in Section 3.3 of the paper), achieves slightly more memory savings (6.6x v/s 5x for 90% sparsity for the RNN layer) than the CSR matrix format. However, since an implementation of this format isn't available, we don't think it's fair to add this to the paper. \n\nIn Table 6, we picked the unstructured sparsity model with the best accuracy and computed memory and runtime savings for it. We didn't include results from Yu et al. or Narang et al. in Table 6 since they induce random sparsity and have worse accuracy than Han et al. Please let us know if we have missed something.\n\nMao et al.\n\nWe cite https://arxiv.org/pdf/1705.08922. We agree that comparison to Mao et al would be valuable. Their paper analyzes various approaches to structured sparsity, and their vector pruning approach in particular could be applied to speech recognition RNN models. \n\nWen et al.\n\nWe cite https://arxiv.org/abs/1608.03665. They focus on sparsity in convolutional layers, so their approach isn't directly applicable to RNN models. We also cite https://arxiv.org/abs/1709.05027. This paper is fairly new (mid-Sep 2017 as compared to our late Oct submission date). We agree that a comparison of this approach applied to speech recognition RNN models would be valuable. In addition, for this approach, we can directly compare weight count and accuracy for neural language modeling results on the Penn Tree Bank dataset:\n\n# Parameters Perplexity (% Loss) Algorithm\n\n66.0M 78.29 N/A\n\n25.2M 76.03 (+2.9%) Wen2017              \n\n21.8M 78.65 (-0.5%) Wen2017\n\n23.1M 77.04 (+1.6%) Ours (BP)\n\n11.6M 80.25 (-2.5%) Ours (BP)\n\n7.95M 82.72 (-5.7%) Ours (BP)\n\nWe didn't include this comparison in our paper because it's not entirely fair: in Wen et al's experiments, they only induce sparsity in the RNN and softmax layers, whereas we also induce sparsity in the embedding layer.\n\n\n2. Memory & Compute savings for different block sizes\n \nTable 6 highlights the speedup, memory compression and accuracy for different models with different block sizes. For brevity, we omitted the sparsity of the model from the table. This information is available in Tables 2 and 4. Since the 16x16 model is about 5% more sparse than the 32x32 model, it achieves superior speedup and compression. Additionally, we're hoping that our work will motivate people to develop efficient block-sparse kernels for modern hardware like Volta with array datapaths. 
\n\n> the claim that this paper’s algorithm is agnostic to block size is not convincing.\n\nWe can modify this claim in the paper to state that our approach works for block sizes up to 32x32.", "After reading the revision I am still unable to change to accept.\n\n1) The main thing, although there are added comparisons (against Han and Yu, etc) none of them seem particularly close to what the paper is trying to do. On the other hand, Mao and Wen, which seem to have more block-like structured sparsity, is not compared. After skimming the Yu paper, it seems that they are also using block sparsity, but any comparison between this paper and Yu is limited to a performance metric in which Yu outperforms this paper, and no runtime or memory usage comparison is given. So though there is more comparison, the comparisons added are still not entirely fair.\n\n2) It is still not clear to me that 32 x 32 block size is big enough. Actually, table 6 is a bit strange, as there is no clear correlation between speedup and blocksize. I accept that there exist hardware that can exploit 16 x 16 block sparsity, but the claim that this paper’s algorithm is agnostic to block size is not convincing.\n\n3) Table 2: the comparison against Han, what is the performance of Han at 25 epochs? The claim that Han is better but requires more epochs isn’t really solid without that datapoint.\n\n4) Figures 2,3 still seem a bit messy. Are they done over many trials? \n\n5) The explanation that group LASSO is underperforming because it is underfitting is not very satisfying. Both penalty and thresholding form of model complexity reduction have the same overfitting and underfitting tradeoffs, so it is not convincing that there does not exist a scheme under which group penalty regularization cannot do as well as thresholding. Of course the exact mechanism isn’t the same, so there may actually be a performance superiority for thresholding (even if all lambda choices are swept) but here should be a different explanation for this.\n\n6) (minor) lasso —> LASSO", "We have submitted a new revision to the paper. The changes (as requested by the reviewers) include:\n\n- Comparison to other baselines (including Han et. al.)\n- Results on Neural Language Modeling with the Penn Tree Bank dataset\n- Inference performance section to highlight the benefits of block sparsity over unstructured sparsity\n- Speedup with more repeats and batch sizes. Also, added results from NVIDIA's block - CSR format for block-sparse layers\n- Add motivation for the choice of blocks for group lasso\n- Other grammatical and stylistic fixes\n- Moved some of the discussion section to the appendix\n\nWe thank the reviewers for their consideration and helpful feedback. ", "I guess what I'm looking for is a performance vs speedup plot comparison, parametrized by different block sizes. If the plot looks \"knee-like\", e.g. you can get a lot of speedup for not that much performance hit, then the experiments will be consistent with the motivation. But I think it will be hard to motivate this method if your block size is limited to 8x8; in general people are fine with waiting 2x for an experiment if it means not rewriting an entire numerical library. ", "We thank the reviewer for the helpful feedback and comments. Thank you for pointing out the missing reference. We have added it to the paper and will update it along with rest of the changes.  \n\nWe agree that we should compare our results to more prior approaches. 
In the paper, we have compared the block pruning approaches to prior pruning approaches like Narang et. al. In our paper, Group Lasso only experiments are also another baseline. However, this isn't clear in the paper. We are working on adding comparisons to more prior work including Han et al. However, all prior work induces random unstructured sparsity in the network and therefore the speedup is limited. We will add these baselines in our paper and clearly state the benefits of our approach. \n\nGroup Lasso alone doesn't work for our models. Without regularization, the baseline dense model doesn't overfit the dataset. Therefore, adding regularization to generate sparse blocks hurts the accuracy of the model. We suspect that group lasso regularization is resulting in underfitting. Therefore, we don't advocate using Group Lasso alone to introduce block sparsity when training on large datasets. Instead, we recommend using block pruning or group lasso combined with block pruning which produces good accuracy results. \n\nIt is true that for certain processors (like a Tensor Processing Unit) larger block sizes like 256x256 would realize higher speedup. However, for GPUs and processors with SIMD units, smaller block sizes can result in higher speedup with efficient kernels. For example, ARM CPUs support 16x1 vectors and the NVIDIA Volta TensorCores support 16x16 blocks. For 4x4 and 16x16 blocks, we show that the speedup ranges from 4.5x to 1.5x depending on the matrix size and minibatch size using libraries that were not tuned explicitly for block sparse RNNs. Recently, new block sparse kernel libraries have been released (https://blog.openai.com/block-sparse-gpu-kernels/) which demonstrate that it is possible to achieve speedup close to the theoretical maximum of 1(1-s/100) where s is the sparsity of the network. The work shows that smaller blocks of 32x32 even with 80% sparsity can achieve about 4x speedup. \n\nWhile benchmarking speedups, we run each GEMM 10,000 times. We will run the benchmark with more iterations to ensure that the speedup holds true. Additionally, we will run with more minibatch sizes to confirm the trend. Also, at batch size of 8, the dense kernels are very fast and therefore speedup observed with sparse kernels is small. For batch sizes larger than 8, the dense time increases significantly resulting in improved speedup. \n\nWe will update the paper fixing axis labels and reread for grammar issues. Thanks again for the review and feedback. ", "Thank you for your review and feedback. We are working on extending this approach to Language Modelling and will hopefully have results on small dataset soon. We will update the paper with the results if they are available before the rebuttal deadline.", "We thank the reviewer for their comments and helpful feedback.\n\nWe present several heuristics related to hyperparameters, and we regard these heuristics as an aid for practitioners, to narrow the range of their hyperparameter search. We agree with the reviewer that it's not clear under what conditions these heuristics might break down. The requirement for hyperparameter tuning is a drawback of our approach, but other pruning approaches within the field share this drawback.\n \nFor our group lasso experiments, we pick groups to exactly match the blocks in our block pruning experiments. This is a regular tiling of 2D blocks across an individual 2D weight matrix. 
Unlike some prior work using group lasso, our reasoning for this grouping is not based on any expectation about grouping or correlation in the input features. Instead, we choose this grouping purely to induce a block sparsity format in the weight matrix that is efficient for hardware implementation. We'll update the paper to clarify these points.\n\nGroup Lasso alone doesn't work for our models due to underfitting. Without regularization, the baseline dense model doesn't overfit the dataset. Therefore, adding regularization to generate sparse blocks hurts the accuracy of the model. Group Lasso could be more effective in inducing block sparsity in a data-limited problem.\n\nIn section 4.3, we demonstrate that our block pruning approach works for block sizes up to 32x32. The section demonstrates a tradeoff between block size and sparsity. The exact choice of block sizes will depend on the underlying hardware. E.g. The best block size for the Nvidia Tesla V100 is 16x16 due to the array data paths used by the TensorCores. We will add some notes to the paper to aid a practitioner in making this choice. \n\nFinally, we are working on adding comparisons to previous work including Han et. al. and will update the paper with these results including the pros and cons of each approach.  \n\nOur work is novel because this is first approach to introduce block sparsity for Recurrent Neural Networks with vanilla RNN and GRU cells. To the best of our knowledge, no prior work has applied Group Lasso Regularization to large RNN models to induce block sparsity. Additionally, our block pruning algorithm does not increase training time unlike some prior work. " ]
[ 5, 7, 5, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 3, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_HJaDJZ-0W", "iclr_2018_HJaDJZ-0W", "iclr_2018_HJaDJZ-0W", "rkyLQgkNM", "rkyLQgkNM", "B19rTJ8GM", "iclr_2018_HJaDJZ-0W", "S1QJz0Wfz", "ByH5FWfgf", "rJWbr5zxM", "Hk8Ah7_eG" ]
iclr_2018_B1NGT8xCZ
Principled Hybrids of Generative and Discriminative Domain Adaptation
We propose a probabilistic framework for domain adaptation that blends both generative and discriminative modeling in a principled way. Under this framework, generative and discriminative models correspond to specific choices of the prior over parameters. This provides us a very general way to interpolate between generative and discriminative extremes through different choices of priors. By maximizing both the marginal and the conditional log-likelihoods, models derived from this framework can use both labeled instances from the source domain as well as unlabeled instances from \emph{both} source and target domains. Under this framework, we show that the popular reconstruction loss of autoencoder corresponds to an upper bound of the negative marginal log-likelihoods of unlabeled instances, where marginal distributions are given by proper kernel density estimations. This provides a way to interpret the empirical success of autoencoders in domain adaptation and semi-supervised learning. We instantiate our framework using neural networks, and build a concrete model, \emph{DAuto}. Empirically, we demonstrate the effectiveness of DAuto on text, image and speech datasets, showing that it outperforms related competitors when domain adaptation is possible.
rejected-papers
Pros -- Nice way to formulate domain adaptation in a Bayesian framework that explains why autoencoder and domain difference losses are useful. Cons -- The model closely follows the framework, but the overall strategy is similar to previous models (albeit with improved rationale). -- The experimental section can be improved. It would be interesting to explore and develop the relationship between the proposed technique and Tzeng et al. Given the aforementioned cons, the AC is recommending that the paper be rejected.
test
[ "SyeuDWqef", "HJ6gA_L4G", "HkTrtoKlf", "SySd0xngf", "B14UUUxzM", "SJ9QI8ezf", "BJne88xff", "rkAhH8eff" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "This is a very well-written paper that shows how to successfully use (generative) autoencoders together with the (discriminative) domain adversarial neural network (DANN) of Ganin et al.\nThe construction is simple but nicely backed by a probabilistic analysis of the domain adaptation problem.\n\nThe only criticism that I have towards this analysis is that the concept of shared parameter between the discriminative and predictive model (denoted by zeta in the paper) disappear when it comes to designing the learning model. \n\nThe authors perform numerous empirical experiments on several types of problems. They successfully show that using autoencoder can help to learn a good representation for discriminative domain adaptation tasks. On the downside, all these experiments concern predictive (discriminative) problems. Given the paper title, I would have expected some experiments in a generative context. Also, a comparison with the Generative Adversarial Networks of Goodfellow et al. (2014) would be a plus.\nI would also like to see the results obtained using DANN stacked on mSDA representations, as it is done in Ganin et al. (2016).\n\nMinor comments:\n- Paragraph below Equation 6: The meaning of $\\phi(\\psi)$ is unclear \n- Equation (7): phi and psi seems inverted \n- Section 4: The acronym MLP is used but never defined.\n\n=== update ===\nI lowered my score and confidence, see my new post below.", "The other reviews made me realize that the contribution can be improved. Consequently, I lowered my score from 7 to 6.\n\nI honestly don't remember what I was thinking when I wrote my comment about the zeta term. The authors made it clear in their rebuttal that there is a direct correspondence between this and the shared parameters of the network. However, there is clearly room to analyze more the influence of the proportion of shared parameters in the model. Currently, the probabilistic framework says that we need those to some extent, and the experiments use a fix model without much explanation.\n", "This paper proposed a probabilistic framework for domain adaptation that properly explains why maximizing both the marginal and the conditional log-likelihoods can achieve desirable performances. \nHowever, I have the following concerns on novelty. \n\n1. Although the paper gives some justiification why auto-encoder can work for domain adaptation from perspective of probalistics model, it does not give new formulation or algorithm to handle domain adaptation. At this point, the novelty is weaken.\n2. In the introduction, the authors mentioned “limitations of mSDA is that it needs to explicitly form the covariance matrix of input features and then solves a linear system, which can be computationally expensive in high dimensional settings.” However, mSDA cannot handle high dimension setting by performing the reconstruction with a number of random non-overlapping sub-sets of input features. It is not clear why mSDA cannot handle time-series data but DAuto can. DAuto does not consider the sequence/ordering of data either. \n3. If my understanding is not wrong, the proposed DAuto is just a simple combination of three losses (i.e. prediction loss, reconstruction loss, domain difference loss). As far as I know, this kind of loss is commonly used in most existing methods. ", "The authors propose a probabilistic framework for semi-supervised learning and domain adaptation. By varying the prior distribution, the framework can incorporate both generative and discriminative modeling. 
The authors emphasize on one particular form of constraint on the prior distribution, that is weight (parameter) sharing, and come up with a concrete model named Dauto for domain adaptation. A domain confusion loss is added to learn domain-invariant feature representations. The authors compared Dauto with several baseline methods on several datasets and showed improvement. \n\nThe paper is well-organized and easy to follow. The probabilistic framework itself is quite straight-forward. The paper will be more interesting if the authors are able to extend the discussion on different forms of prior instead of the simple parameter sharing scheme. \n\nThe proposed DAuto is essentially DANN+autoencoder. The minimax loss employed in DANN and DAuto is known to be prone to degenerated gradient for the generator. It would be interesting to see if the additional auto-encoder part help address the issue. \n\nThe experiments miss some of the more recent baseline in domain adaptation, such as Adversarial Discriminative Domain Adaptation (Tzeng, Eric, et al. 2017). \n\nIt could be more meaningful to organize the pairs in table by target domain instead of source, for example, grouping 9->9, 8->9, 7->9 and 3->9 in the same block. DAuto does seem to offer more boost in domain pairs that are less similar. ", "We'd like to thank the reviewer for the thoughtful questions, and we want to address them here.\n\nQ: Although the paper gives some justiification why auto-encoder can work for domain adaptation from perspective of probabilistic model, it does not give new formulation or algorithm to handle domain adaptation. At this point, the novelty is weaken.\n\nBesides providing a probabilistic justification for autoencoders, we also propose DAuto as a new model to handle domain adaptation. At the end DAuto boils down to DANN+autoencoders, and to the best of our knowledge we didn't see any existing work using this combination of structures for domain adaptation, which we treat as a novel contribution.\n\n\nQ: However, mSDA cannot handle high dimension setting by performing the reconstruction with a number of random non-overlapping sub-sets of input features. It is not clear why mSDA cannot handle time-series data but DAuto can. DAuto does not consider the sequence/ordering of data either. \n\nYes, we agree with the reviewer that performing the reconstruction by random non-overlapping subsets of features could serve as an approximation to the solution of the original linear system that mSDA attempts to solve. Perhaps we could phrase it as \"One of the limitations of mSDA is that it needs to explicitly form the covariance matrix of input features and then solves a linear system, which can be computationally expensive to solve exactly in high dimensional settings, but approximate scheme exists.\"\n\nWhat we would like to emphasize is the fact that mSDA is more like a two-stage algorithm, where feature learning and discriminative training are separated. On the other hand, DAuto is more end-to-end, which combines feature learning, discriminative training and domain adaptation in a unified model. In time-series modeling the latter might be more favorable because it has the power to adapt to the change in distribution along time. We've removed the sentence \"On the other hand, it is not clear how to extend mSDA so that it can also be applied for time-series modeling.\" from related work.\n", "Thanks for the accurate summarization and all the comments! 
We attempt to clarify several points below.\n\nQ: The only criticism that I have towards this analysis is that the concept of shared parameter between the discriminative and predictive model (denoted by zeta in the paper) disappear when it comes to designing the learning model. \n\nzeta does not disappear in DAuto, and in fact DAuto is precisely designed to follow the probabilistic principle discussed in Section 3.1 to 3.3. In Fig. 2, the zeta corresponds to the shared component f: f and g together instantiate p(x), while f and h together instantiate p(y|x). In other words, phi = parameters in f union parameters in g, and psi = parameters in f union parameters in h. So zeta = phi intersects psi = parameters in f.\n\n\nQ: Also, a comparison with the Generative Adversarial Networks of Goodfellow et al. (2014) would be a plus.\n\nThis is indeed a great question, and we'd like to clarify here. Although DAuto is designed to consider the marginal distribution over x, and we use kernel density estimation to construct this density explicitly, DAuto does not have efficient sampling schemes to actually generate samples from the distribution. On the other hand, from GAN we cannot compute the density of a given instance explicitly, but it allows us to draw samples from it in a straightforward way. \n\n\nQ: I would also like to see the results obtained using DANN stacked on mSDA representations, as it is done in Ganin et al. (2016).\n\nStacking with mSDA representations can indeed boost the performances of all the models, including DANN, Ladder and DAuto uniformly on the Amazon dataset. On average we see around 2 percent improvements in classification accuracy on Amazon.\n\n\nThanks again for all the detailed comments as well! We've updated them in the revised version.\n", "Thank you for providing thoughtful comments and suggestions. We attempt to answer the questions below. \n\nQ: The paper will be more interesting if the authors are able to extend the discussion on different forms of prior instead of the simple parameter sharing scheme. \n\nAs we show in section 3.1, one necessary condition under the probabilistic framework is that phi and psi cannot be independent, otherwise the maximization of marginal distribution over x would be independent of the discriminative task, hence phi and psi need to be correlated. While there are many ways that phi and psi could be correlated, one sufficient and frequently used assumption in practice is that they share some common parameters, under which we show that this corresponds to minimizing reconstruction loss using autoencoders. Other possible choices include phi being a function of psi, etc., but such choice has not been frequently used in practice, so we focus on the parameter sharing scheme in this work. \n\n\nQ: The minimax loss employed in DANN and DAuto is known to be prone to degenerated gradient for the generator. It would be interesting to see if the additional auto-encoder part help address the issue. \n\nYes, that's right. As we observed empirically in our experiments the reconstruction loss from autoencoders do indeed tend to stablize the joint optimization, but theoretically we don't know how to formally prove this yet. \n\n\nQ: The experiments miss some of the more recent baseline in domain adaptation, such as Adversarial Discriminative Domain Adaptation (Tzeng, Eric, et al. 2017). \n\nThanks for pointing out this work! We've incorporated a comparison to ADDA as well, please check the revised submission. 
As a high-level summary, given that all the methods use exactly the same network structures and training protocols, ADDA does perform very well on several adaptation tasks, while DAuto excels on other tasks. This shows that using asymmetric encoders and using reconstruction loss as regularization can both contribute to adaptation, perhaps in an orthogonal way. \n", "We thank all the reviewers for the time devoted to providing thoughtful comments and suggestions. We attempt to answer the questions separately below. " ]
[ 6, -1, 5, 5, -1, -1, -1, -1 ]
[ 3, -1, 4, 4, -1, -1, -1, -1 ]
[ "iclr_2018_B1NGT8xCZ", "SJ9QI8ezf", "iclr_2018_B1NGT8xCZ", "iclr_2018_B1NGT8xCZ", "HkTrtoKlf", "SyeuDWqef", "SySd0xngf", "iclr_2018_B1NGT8xCZ" ]
iclr_2018_B1tC-LT6W
Trace norm regularization and faster inference for embedded speech recognition RNNs
We propose and evaluate new techniques for compressing and speeding up dense matrix multiplications as found in the fully connected and recurrent layers of neural networks for embedded large vocabulary continuous speech recognition (LVCSR). For compression, we introduce and study a trace norm regularization technique for training low rank factored versions of matrix multiplications. Compared to standard low rank training, we show that our method leads to good accuracy versus number of parameter trade-offs and can be used to speed up training of large models. For speedup, we enable faster inference on ARM processors through new open sourced kernels optimized for small batch sizes, resulting in 3x to 7x speed ups over the widely used gemmlowp library. Beyond LVCSR, we expect our techniques and kernels to be more generally applicable to embedded neural networks with large fully connected or recurrent layers.
rejected-papers
Pros -- Shows alternative strategies to train low-rank factored weight matrices for recurrent nets. Cons -- Minor modifications (and gains) over other forms of regularization like L2. -- Results are only on an ASR task, so it's not entirely clear how they'll work on other tasks. As pointed out by the reviewers, unless the authors show that the techniques generalize well to other tasks and larger datasets, it is hard to accept the paper to the main conference. The AC, therefore, recommends that the paper be rejected.
train
[ "Hk1q_liEG", "Bk-k0Ctgz", "BJgpEMDEM", "HJcmDCFgz", "By29r_AeG", "HkhMCaWQM", "BJ_PnTW7M", "S1UpspbmG" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "Indeed, at this point, it seems hard to escape the conclusion that trace norm regularization is not substantially or at all superior to L2 regularization with respect to the number of parameters versus CER trade-off. In retrospect, we should have written the paper quite differently to highlight the comparison of regularization techniques as well as some of our other main contributions.\n\nFor the record and for the benefit of possible future viewers of this page, we would like to list one more time some of these contributions:\n\n1. The initial baseline we faced, based on the literature, is actually the unregularized green points in Figure 4. Producing the strong L2 regularized baseline is itself a contribution of this paper. We found that for both trace norm and L2 regularization, getting such strong results requires separate regularization strengths for the hidden-to-hidden and input-to-hidden weights of recurrent layers. See Figure 1.\n\n2. We showed that, whether using L2 or trace norm regularization, it is not necessary to train \"stage 1\" models fully. A few epochs should suffice. This could substantially speed up training of large models. (See Figure 5.)\n\n3. We created and made publicly available efficient GEMM kernels for small batch sizes on the ARM platform.\n\nFinally, as a pointer for possible future readers, we would like to mention that we do not think the approximate doubling of parameters in stage 1 using trace norm regularization is a serious obstacle. We suspect using rank r = min(m,n)/2 instead of r = min(m, n) in stage 1 would not impact results much for most problems. However, to be clear, we have not tested this and do not report this in the paper.", "The problem considered in the paper is of compressing large networks (GRUs) for faster inference at test time. \n\nThe proposed algorithm uses a two step approach: 1) use trace norm regularization (expressed in variational form) on dense parameter matrices at training time without constraining the number of parameters, b) initializing from the SVD of parameters trained in stage 1, learn a new network with reduced number of parameters.\n\nThe experiments on WSJ dataset are promising towards achieving a trade-off between number of parameters and accuracy. \n\nI have the following questions regarding the experiments:\n1. Could the authors confirm that the reported CERS are on validation/test dataset and not on train/dev data? It is not explicitly stated. I hope it is indeed the former, else I have a major concern with the efficacy of the algorithm as ultimately, we care about the test performance of the compressed models in comparison to uncompressed model. \n\n2. In B.1 the authors use an increasing number units in the hidden layers of the GRUs as opposed to a fixed size like in Deep Speech 2, an obvious baseline that is missing from the experiments is the comparison with *exact* same GRU (with 768, 1024, 1280, 1536 hidden units) *without any compression*. \n\n3. What do different points in Fig 3 and 4 represent. What are the values of lamdas that were used to train (the l2 and trace norm regularization) the Stage 1 of models shown in Fig 4. I want to understand what is the difference in the two types of behavior of orange points (some of them seem to have good compression while other do not - it the difference arising from initialization or different choice of lambdas in stage 1. 
\n\nIt is interesting that although L2 regularization does not lead to low \\nu parameters in Stage 1, the compression stage does have comparable performance to that of trace norm minimization. The authors point it out, but a further investigation might be interesting. \n\nWriting:\n1. The GRU model for which the algorithm is proposed is not introduced until the appendix. While it is a standard network, I think the details should still be included in the main text to understand some of the notation referenced in the text like “\\lambda_rec” and “\\lambda_norec”", "After the correction in Figure 4, for final compression performance, trace norm regularization proposed by this paper has performance comparable to more standard L2 performance. In the light of this new experiment, there is not enough evidence to prefer using trace norm regularization and factorized weights in stage 1. In fact, the factorized representation doubles the number of parameters to be learned in stage 1. \n\nThe experiments do not seem to validate the significance of the main contribution of paper - namely using a trace norm regularization in stage 1 for better performance after compression with low rank factorization. Am I missing something here?\n\n", "The authors propose a strategy for compressing RNN acoustic models in order to deploy them for embedded applications. The technique consists of first training a model by constraining its trace norm, which allows it to be well-approximated by a truncated SVD in a second fine-tuning stage. Overall, I think this is interesting work, but I have a few concerns which I’ve listed below:\n\n1. Section 4, which describes the experiments of compressing server sized acoustic models for embedded recognition seems a bit “disjoint” from the rest of the paper. I had a number of clarification questions spefically on this section:\n- Am I correct that the results in this section do not use the trace-norm regularization at all? It would strengthen the paper significantly if the experiments presented on WSJ in the first section were also conducted on the “internal” task with more data.\n- How large are the training/test sets used in these experiments (for test sets, number of words, for training sets, amount of data in hours (is this ~10,000hrs), whether any data augmentation such as multi-style training was done, etc.)\n- What are the “tier-1” and “tier-2” models in this section? It would also aid readability if the various models were described more clearly in this section, with an emphasis on structure, output targets, what LMs are used, how are the LMs pruned for the embedded-size models, etc. Also, particularly given that the focus is on embedded speech recognition, of which the acoustic model is one part, I would like a few more details on how decoding was done, etc.\n- The details in appendix B are interesting, and I think they should really be a part of the main paper. That being said, the results in Section B.5, as the authors mention, are somewhat preliminary, and I think the paper would be much stronger if the authors can re-run these experiments were models are trained to convergence.\n- The paper focuses fairly heavily on speech recognition tasks, and I wonder if it would be more suited to a conference on speech recognition. \n\n2. Could the authors comment on the relative training time of the models with the trace-norm regularizer, L2-regularizer and the unconstrained model in terms of convergence time.\n\n3. 
Clarification question: For the WSJ experiments was the model decoded without an LM? If no LM was used, then the choice of reporting results in terms of only CER is reasonable, but I think it would be good to also report WERs on the WSJ set in either case.\n\n4. Could the authors indicate the range of values of \\lambda_{rec} and \\lambda_{nonrec} that were examined in the work? Also, on a related note, in Figure 2, does each point correspond to a specific choice of these regularization parameters?\n\n5. Figure 4: For the models in Figure 4, it would be useful to indicate the starting CER of the stage-1 model before stage-2 training to get a sense of how stage-2 training impacts performance.\n\n6. Although the results on the WSJ set are interesting, I would be curious if the same trends and conclusions can be drawn from a larger dataset -- e.g., the internal dataset that results are reported on later in the paper, or on a set like Switchboard. I think these experiments would strengthen the paper.\n\n7. The experiments in Section 3.2.3 were interesting, since they demonstrate that the model can be warm-started from a model that hasn’t fully converged. Could the authors also indicate the CER of the model used for initialization in addition to the final CER after stage-2 training in Figure 5.\n\n8. In Section 4, the authors mention that quantization could be used to compress models further although this is usually degrades WER by 2--4% relative. I think the authors should consider citing previous works which have examined quantization for embedded speech recognition [1], [2]. In particular, note that [2] describes a technique for training with quantized forward passes which results in models that have smaller performance degradation relative to quantization after training.\nReferences:\n[1] Vincent Vanhoucke, Andrew Senior, and Mark Mao, “Improving the speed of neural networks on cpus,” in Deep Learning and Unsupervised Feature Learning Workshop, NIPS, 2011.\n[2] Raziel Alvarez, Rohit Prabhavalkar, Anton Bakhtin, “On the efficient representation and execution of deep acoustic models,” Proc. of Interspeech, pp. 2746 -- 2750, 2016.\n\n9. Minor comment: The authors use the term “warmstarting” to refer to the process of training NNs by initializing from a previous model. It would be good to clarify this in the text.", "Paper is well written and clearly explained. The paper is a experimental paper as it has more content on the experimentation and less content on problem definition and formulation. The experimental section is strong and it has evaluated across different datasets and various scenarios. However, I feel the contribution of the paper toward the topic is incremental and not significant enough to be accepted in this venue. It only considers a slight modification into the loss function by adding a trace norm regularization.", "Thank you for your very thorough review and detailed feedback.\n\nBefore addressing specific questions and comments, we would like to elaborate more on why we chose this venue to present our work and on the principles behind our paper’s organization. We agree that a lot of attention is being paid to the speech-specific aspects in our paper. This is natural given the title and since all of the experimental results we report are for speech recognition. However, it is our firm conviction that the technique we present as well as the techniques of Prabhavalkar et al. are much more broadly applicable.\n\nConsequently, we had originally somewhat broader ambitions. 
We hoped to compare sparsity and low-rank factorization in more detail, on both speech recognition and other tasks like language modelling (on, say, Penn Treebank or the billion words corpus). Due to resource constraints, we could not run all experiments we hoped to and had to prioritize. Our organizing principle was to put what we thought was of more general interest in the main text, and shift the more preliminary work and the work that we felt was very speech-specific to appendices. We think that material is nonetheless valuable and we hope our inclusion of it can invite further research from the community to expand upon the issues raised and the solutions offered.\n\nWhat is in the main text of the paper, we believe could in principle be applied just as well to any other deep models involving dense or recurrent layers.\n\nRegarding your specific points:\n\n1. As described above, it was a conscious decision to not focus too much on the speech-specific aspects in the main text of the paper. We plan to add some more details to the appendix later on, but detailed experiments we will need to relegate, as you suggested, to a possible future speech conference submission. Regarding section 4: We do wish we could have reported trace-norm-regularized results here, but in the end that would have cannibalized too many resources from other higher-priority experiments. (A single training run on our large 10,000+ hour speech datasets may occupy 16 GPU’s and take, with interruptions, around 3 to 4 weeks to complete.) As a result we introduced the newly developed kernels with only a few models we had already started training before the techniques in Section 3 were developed. \n\n2. Unfortunately, due to training on different types of hardware with various interruptions, we could not compile meaningful wall-clock training time comparisons.\n\n3. Correct, for WSJ we did not use a language model. As we were interested in relative performance of different compression techniques for the acoustic model only, we decided to keep the WSJ experiments as simple as possible.\n\n4. Figure 1 shows the lambda values examined for stage 1. Yes, for Figure 2 we show the variation with respect to one of the lambda's when the other lambda is fixed at 0.\n\n5. Thank you for the suggestion. We have fixed Figure 4 (please see our response to Reviewer 2 for further details) and the behavior for all points is more consistent now. All the stage 1 models used for warm-starting the points in this figure had below 6.8 final CER. We look at stage 1 CER vs. stage 2 CER in response to your point 7 below, where the effect is more interesting.\n\n6. This is a great suggestion for follow-up work. Unfortunately, due to resource constraints, we could not pursue this for the present paper.\n\n7. Great suggestion. We have updated Figure 5 to include this information. As is clearer now, the stage 1 models trained for only a few epochs are really very far from being fully converged and yet are still good enough to be used for warm-starting successful stage 2 models.\n\n8. Good point. As the relative WER losses we saw from compressing the language and acoustic models were much larger the relative loss from quantization, we chose to not pursue quantization further for this particular study. However, as you suggest, we should at least point to the relevant literature. We have added these citations and clarified this in the text.\n\n9. Thank you for pointing this out. We have clarified this in the text.\n", "Thank you for your thorough review. 
Thanks to your comment 3 in particular, we found an error in our preparation of Figure 4. We have remedied this situation and the figure looks more reasonable now. Unfortunately, our claim of more “consistent” good results appears weakened through this finding. However, the rest of the paper is not affected by this mistake.\n\nTo respond to your points in detail:\n\n1.Yes, we did the hyperparameter comparisons on a validation set that is separate from the train set. We have clarified this in the text.\n\n2. Figure 3 shows fully converged models that are uncompressed (as we only do the compression for stage 2). The green points correspond to the baseline model mentioned in B.1, trained without any regularization. Without regularization, this baseline model is seen to perform quite poorly in terms of final CER. For WSJ, we found that models benefit greatly from regularization. Therefore, to have fair baselines to compare trace norm regularization against, we tuned L2 regularized models just as extensively as we tuned trace norm regularized models. The L2 regularized models are the orange points. We have clarified this in the text.\n\n3. Thank you for drawing our attention again to the orange points in Figure 4. It turns out we made an error: the stage 1 models used for Figure 4 (for both trace norm and L2 regularization) were actually selected to another criterion regarding CER vs. rank at 90% trade-off we considered earlier on, rather than just best CER as we indicated in the text. After fixing the criterion to “best CER” as we had intended, there is no longer such drastically different behavior between the orange points. We have corrected the figure and updated the claims about more consistent training.\nThe corrected lambda values and the CER values of the models that were used as starting points for the stage 2 experiments are as follows:\n\tL2 models, CER, 𝜆nonrec,𝜆rec \n\t1, 6.6963, 0.05, 0.01\n\t2, 6.7536, 0.05, 0.005\n\t3, 6.7577, 0.05, 0.0025\n\tTrnorm models, CER, 𝜆nonrec,𝜆rec \n\t1, 6.6471, 0.02, 0.001\n\t2, 6.7475, 0.02, 0.005\n\t3, 6.7823, 0.02, 0.0005\n\n\nWriting: We have clarified the meaning of “rec” and “nonrec” in the main body of the text. We did not want to go into the full details of the Deep Speech 2 architecture in the main text, as we feel the details are not very pertinent to our present study and may distract the reader from the generality of the ideas. However, we have tried to provide more detail on those parts that are relevant. We hope the balance we struck now improves the exposition.", "We would like to thank you for taking the time to review our paper. Although these particular results for speech may appear incremental, we believe the methodologies and insights go far beyond speech recognition and should be of interest to researchers working on low rank methods and compressing large non-convolutional neural networks. In addition to the systematic modification to the loss function that we propose, we also present a methodology for training models with this modified loss function and also a methodology for studying the effectiveness and making fair comparisons of such techniques. 
There are also other critical insights that were necessary to make this work, such as the need to treat the recurrent (hidden-to-hidden) and non-recurrent (input-to-hidden) weights of recurrent layers separately, and regularize them with different strengths.\n\nOur goal in this paper is to lay the groundwork to attract more attention from the community and invite further study of the technique we present as well as the techniques of Prabhavalkar et al. and others that we build upon. The combination of these techniques, as we have shown, could also be potentially useful for speeding up the training of large networks. We hope that our work will promote the use of factorized matrices in the research community, resulting in a more compact representation of neural networks. \n" ]
[ -1, 4, -1, 5, 5, -1, -1, -1 ]
[ -1, 3, -1, 5, 3, -1, -1, -1 ]
[ "BJgpEMDEM", "iclr_2018_B1tC-LT6W", "BJ_PnTW7M", "iclr_2018_B1tC-LT6W", "iclr_2018_B1tC-LT6W", "HJcmDCFgz", "Bk-k0Ctgz", "By29r_AeG" ]
iclr_2018_SkJd_y-Cb
Word2net: Deep Representations of Language
Word embeddings extract semantic features of words from large datasets of text. Most embedding methods rely on a log-bilinear model to predict the occurrence of a word in a context of other words. Here we propose word2net, a method that replaces their linear parametrization with neural networks. For each term in the vocabulary, word2net posits a neural network that takes the context as input and outputs a probability of occurrence. Further, word2net can use the hierarchical organization of its word networks to incorporate additional meta-data, such as syntactic features, into the embedding model. For example, we show how to share parameters across word networks to develop an embedding model that includes part-of-speech information. We study word2net with two datasets, a collection of Wikipedia articles and a corpus of U.S. Senate speeches. Quantitatively, we found that word2net outperforms popular embedding methods on predicting held-out words and that sharing parameters based on part of speech further boosts performance. Qualitatively, word2net learns interpretable semantic representations and, compared to vector-based methods, better incorporates syntactic information.
rejected-papers
Pros -- Extends embeddings to use a richer representation; simple yet interesting improvement on Mikolov et al.'s work. Cons -- All of the reviewers pointed out that the experimental evaluation needs improvement. The authors should find better ways to improve both quantitative (e.g., accuracy in analogies as in Mikolov et al., or by using the model for an external task if that's the end goal) and qualitative (using functional similarity for the baseline) evaluations. Given these comments, the AC recommends that the paper be rejected.
train
[ "rkSCZ0uxG", "SkH7fQYez", "rJADQOqlG", "H1a4LD6mz", "BkmVBP6mM", "H1z6NDTmz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "The paper extends SGNS as follows. In SGNS, each word x is associated with vectors a_x and r_x. Given a set of context words C, the model calculates the probability that the target word is x by a dot product between a_x and the average of {r_c: c in C}. The paper generalizes this computation to an arbitrary network: now each word x is associated with some network N_x whose input is a set of context words C and the output is the aforementioned probability. This is essentially an architectural change: from a bag-of-words model to a (3-layer) feedforward model. \n\nAnother contribution of the paper is a new form of regularization by tying a subset of layers between different N_x. In particular, the paper considers incorporating POS tags by tying within each POS group. For instance, the parameters of the first layer are shared across all noun words. (This assumes that POS tags are given.)\n\nWhile this is a natural extension to word2vec, the reviewer has some reservations about the execution of this work. Word embeddings are useful in large part because they can be used to initialize the parameters of a network. None of the chosen experiments shows this. Improvement in the log likelihood over SGNS is somewhat obvious because there are more parameters. The similarity between \"words\" now requires a selection of context vectors (7) which is awkward/arbitrary. The use of POS tags is not very compelling (though harmless). It's not necessary: contrary to the claim in the paper, word embeddings captures syntactic information if the context width is small and/or context information is provided. A more sensible experiment would be to actually plug in the entire pretrained word nets into an external model and see how much they help. \n\nEDIT: It's usually the case that even if the number of parameters is the same, extra nonlinearity results in better data fitting (e.g., Berg-Kirkpatrick et al, 2010), it's still not unexpected. \n\nAll of this is closely addressed in the following prior work: \n\nLearning to Embed Words in Context for Syntactic Tasks (Tu et al., 2017)\n\nQuality: Natural but questionable extension, see above. \n\nClarity: Clear. \n\nOriginality: Acceptable, but a very similar idea of embedding contexts is presented in Tu et al. (2017) which is not cited. \n\nSignificance: Minor/moderate, see above. \n\n", "The paper presents a method to use non-linear combination of context vectors for learning vector representation of words. The main idea is to replace each word embedding by a neural network, which scores how likely is the current word given the context words. This also allowed them to use other context information (like POS tags) for word vector learning. I like the approach, although not being an expert in the area, cannot comment on whether there are existing approaches for similar objectives.\n\nI think the experimental section is weak. Most work on word vectors are evaluated on several word similarity and analogy tasks (See the Glove paper). However, this paper only reports numbers on the task of predicting next word.\n\nResponse to rebuttal:\n\nI am still not confident about the evaluation. I feel word vectors should definitely be tested on similarity tasks (if not analogy). As a result, I am keeping my score the same. ", "This paper presents another variant on neural language models used to learn word embeddings. In keeping with the formulation of Mikolov et al, the model learned is a set of independent binary classifiers, one per word. 
As opposed to other work, each classifier is not based on the dot product between an embedding vector and a context vector but instead is a per-word neural network which takes the context as input and produces a score for each term. An interesting consequence of using networks instead of vectors to parametrize the embeddings is that it's easy to see many ways to let the model use side information such as part-of-speech tags. The paper explores one such way, by sharing parameters across networks of all words which have the same POS tag (effectively having different parameterizations for words which occur with multiple POS tags).\n\nThe idea is interesting but the evaluation leaves doubts. Here are my main problems:\n 1. The quantitative likelihood-based evaluation can easily be gamed by making all classifiers output numbers which are close to 1. This is because the model is not normalized, and no attempt at normalization is claimed to be made during the likelihood evaluation. This means it's likely hyperparameter tuning (of, say, how many negative examples to use per positive example) is likely to bias this evaluation to look more positive than it should.\n 2. The qualitative similarity-based evaluation notes, correctly, that the standard metric of dot product / cosine between word embeddings does not work in the case of networks, and instead measures similarity by looking at the similarity of the predictions of the networks. Then all networks are ranked by similarity to a query network to make the now-standard similar word lists. While this approach is interesting, the baseline models were evaluated using the plain dot product. It's unclear whether this new evaluation methodology would have also produced nicer word lists for the baseline methods.\n\nIn the light that the evaluation has these two issues I do not recommend accepting this paper.", "Reviewer 3 gives a good summary of the paper and we thank them for useful references to related work.\n\n>> Improvement in the log likelihood over SGNS is somewhat obvious because there are more parameters.\n\nIn our experiments we also compare models with the *same* number of parameters, e.g., by making the word vectors longer (see Tables 2, 5, and 6, the number of parameters per word p/V is in the third column). Our experimental study shows that word2net outperforms existing methods both with the same context dimension (K) and the same number of parameters (p/V), especially when we use POS information.\n\n>> The similarity between \"words\" (7) is awkward/arbitrary.\n\nWe use Equation (7) to compute functional similarities between networks. It captures the idea that similar networks map similar inputs to similar outputs. The reason we use the context vectors as input is because they span typical inputs to the networks; linear combinations of context vectors have been used as input to the networks during training.\n\n>> The use of POS tags is not necessary: contrary to the claim in the paper, word embeddings capture syntactic information if the context width is small and/or context information is provided.\n\nAs Reviewer 3 notes correctly, when we train a word embedding method such as word2vec, the embeddings of some words might capture some syntactic information, especially when the context size is small. However, work by Andreas and Klein, 2014, shows that over the entire vocabulary, the embeddings do not encode much syntactic information. 
One contribution of our work is to develop a method that allows us to incorporate syntactic information into the task of learning semantic representations.\n\n>> A more sensible experiment would be to actually plug in the entire pretrained word nets into an external model and see how much they help.\n\nWe agree that evaluating word2net on downstream tasks is an excellent idea for future work. We also suspect that there are applications where existing embedding methods are not applicable since they only learn vectors, and the asymmetric compositionality of neural networks is necessary to solve the task.\n\n>> This is closely addressed in the following prior work: Learning to Embed Words in Context for Syntactic Tasks (Tu et al., 2017)\n\nThe workshop paper of Tu et al. is very interesting. However, it addresses the entirely different task of learning a separate embedding for each occurrence of a word (i.e., token embeddings). The token embeddings are then used as features for downstream tasks such as POS tagging.\nIn contrast, our aim is to learn a representation for each word depending on its POS tag, which means we use the POS tags during training as additional information we want the embedding to capture.\n\n>> Originality: Acceptable, but a very similar idea of embedding contexts is presented in Tu et al. (2017).\n\nWe will include the citation to this interesting workshop paper (thank you for pointing us to it), but we also think that the main contribution of our work of learning a neural network for each word rather than a vector is complementary to the ideas in Tu et al.\n", "We thank Reviewer 2 for the thoughtful comments.\n\n>> I like the approach, although not being an expert in the area, cannot comment on whether there are existing approaches for similar objectives.\n\nWhile there is existing work to learn a vector for each word or a Gaussian distribution for each word, this is the first work to learn a neural network for each word.\n\n>> Most work on word vectors are evaluated on several word similarity and analogy tasks. However, this paper only reports numbers on the task of predicting next word.\n\nThank you for the comment. In unsupervised learning methods, held-out predictions are a standard evaluation of model fitness. Our evaluation shows that due to word2net's capacity to learn nonlinear relationships between contexts and word occurrence, it fits the data better than existing methods.\n\nThe focus of our paper is not to obtain better analogies. While there are numerous papers on embeddings that are not evaluated on analogies, we agree that this is an interesting point to explore in future work.\n", "We thank Reviewer 1 for the excellent feedback. The summary shows a firm understanding of our work and we appreciate the constructive feedback for the experimental section.\n\n>> Like Mikolov et al, the model learned is a set of independent binary classifiers, one per word. \n\nCorrect. One contribution of this work is to demonstrate that the skipgram formulation of Mikolov et al., originally derived as an approximation to multiclass classification, can be viewed as a binary classification, one for each word. The binary classifiers are not fully independent; rather, they are coupled through the context vectors, which are the input to the classifiers.\n\n>> An interesting consequence of using networks instead of vectors is that it's easy to use side information such as part-of-speech tags. \n\nCorrect. 
Incorporating side information into embeddings is a difficult task and has only been partially addressed in previous work. In particular, syntactic information has been difficult to incorporate.\n\n>> The paper explores one such way, by sharing parameters across networks of all words which have the same POS tag.\n\nYes, we effectively learn a different representation for each word - POS tag combination.\n \n>> The quantitative likelihood-based evaluation can easily be gamed. \n\nIt is true that the quantitative evaluation can be gamed without proper normalization. To avoid that, we were careful to make the comparison as fair as possible. During training, all methods used the same number of negative samples, and the reported held-out likelihood values correspond to the average over both positive and negative samples, giving the same weights to positive and negative samples.\n \n>> In the qualitative similarity-based evaluation the vector representations are ranked by cosine similarity while the word networks are ranked by functional similarity (cosine similarity does not apply). While this approach is interesting, it is unclear if it would produce nicer lists for the baseline methods as well.\n\nThank you. Evaluating the word vectors on \"functional similarity\" is an excellent idea. We reran the similarity queries and found that some queries changed slightly while others did not change. For example, in the 4 queries in Table 3, when evaluating the CBOW baseline with the functional similarity, only 2 queries changed by one word, while the other two queries remained unchanged. We leave it for future work to study the difference between functional and cosine similarity. If the functional similarity we developed for word2net is indeed better for vector-based embeddings this might be a worthwhile contribution by itself." ]
[ 5, 4, 4, -1, -1, -1 ]
[ 5, 4, 4, -1, -1, -1 ]
[ "iclr_2018_SkJd_y-Cb", "iclr_2018_SkJd_y-Cb", "iclr_2018_SkJd_y-Cb", "rkSCZ0uxG", "SkH7fQYez", "rJADQOqlG" ]
iclr_2018_Hyig0zb0Z
Gated ConvNets for Letter-Based ASR
In this paper we introduce a new speech recognition system, leveraging a simple letter-based ConvNet acoustic model. The acoustic model requires only audio transcription for training -- no alignment annotations, nor any forced alignment step is needed. At inference, our decoder takes only a word list and a language model, and is fed with letter scores from the acoustic model -- no phonetic word lexicon is needed. Key ingredients for the acoustic model are Gated Linear Units and high dropout. We show near state-of-the-art results in word error rate on the LibriSpeech corpus with MFSC features, both on the clean and other configurations.
rejected-papers
Pros -- Competitive results on LibriSpeech. Cons -- Limited novelty and insufficient comparisons. -- Comparisons with other end-to-end approaches, and on other commonly used datasets such as WSJ, are missing. -- Gated convnets have already been proposed. -- Letter-based systems have been shown to be competitive with phone-based systems. -- The optimization criterion is quite similar to lattice-free MMI proposed by Povey et al., but with a letter-based LM and a slightly different HMM topology. Given the cons pointed out by the reviewers, the AC recommends that the paper be rejected.
train
[ "SkEUwDtkG", "Hyp5jKfxf", "Byw1O6Fgz", "Byv3IsSzG", "Bkdm2tXMG", "S1R0OMGGG", "Sk7YcE1fG", "rkU2e9DJM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "public", "official_reviewer", "author", "public" ]
[ "This paper applies gated convolutional neural networks [1] to speech recognition, using the training criterion ASG [2]. It is fair to say that this paper contains almost no novelty.\n\nThis paper starts by bashing the complexity of conventional HMM systems, and states the benefits of their approach. However, all of the other grapheme-based end-to-end systems enjoy the same benefit as CTC and ASG. Prior work along this line includes [3, 4, 5, 6, 7].\n\nUsing MFSC, or more commonly known as log mel filter bank outputs, has been pretty common since [8]. Having a separate subsection (2.1) discussing this seems unnecessary.\n\nArguments in section 2.3 are weak because, again, all other grapheme-based end-to-end systems have the same benefit as CTC and ASG. It is unclear why discriminative training, such as MMI, sMBR, and lattice-free MMI, is mentioned in section 2.3. Discriminative training is not invented to overcome the lack of manual segmentations, and is equally applicable to the case where we have manual segmentations.\n\nThe authors argue that ASG is better than CTC in section 2.3.1 because it does not use the blank symbol and can be faster during decoding. However, once the transition scores are introduced in ASG, the search space becomes quadratic in the number of characters, while CTC is still linear in the number characters. In addition, ASG requires additional forward-backward computation for computing the partition function (second term in eq 3). There is no reason to believe that ASG can be faster than CTC in both training and decoding.\n\nThe connection between ASG, CTC, and marginal log loss has been addressed in [9], and it does make sense to train ASG with the partition function. Otherwise, the objective won't be a proper probability distribution.\n\nThe citation style in section 2.4 seems off. Also see [4] for a great description of how beam search is done in CTC.\n\nDetails about training, such as the optimizer, step size, and batch size, are missing. Does no batching (in section 3.2) means a batch size of one utterance?\n\nIn the last paragraph of section 3.2, why is there a huge difference in real-time factors between the clean and other set? Something is wrong unless the authors are using different beam widths in the two settings.\n\nThe paper can be significantly improved if the authors compare the performance and decoding speed against CTC with the same gated convnet. It would be even better to compare CTC and ASG to seq2seq-based models with the same gated convnet. Similar experiments should be conducted on switchboard and wsj because librespeech is several times larger than switchboard and wsj. None of the comparison in table 4 is really meaningful, because none of the other systems have parameters as many as 19 layers of convolution. Why does CTC fail when trained without the blanks? Is there a way to fix it besides using ASG? It is also unclear why speaker-adaptive training is not needed. At which layer do the features become speaker invariant? Can the system improve further if speaker-adaptive features are used instead of log mels? 
This paper would be much stronger if the authors can include these experiments and analyses.\n\n[1] R Collobert, C Puhrsch, G Synnaeve, Wav2letter: an end-to-end convnet-based speech recognition system, 2016\n\n[2] Y Dauphin, A Fan, M Auli, D Grangier, Language modeling with gated convolutional nets, 2017\n\n[3] A Graves and N Jaitly, Towards End-to-End Speech Recognition with Recurrent Neural Networks, 2014\n\n[4] A Maas, Z Xie, D Jurafsky, A Ng, Lexicon-Free Conversational Speech Recognition with Neural Networks, 2015\n\n[5] Y Miao, M Gowayyed, F Metze, EESEN: End-to-end speech recognition using deep RNN models and WFST-based decoding, 2015\n\n[6] D Bahdanau, J Chorowski, D Serdyuk, P Brakel, Y Bengio, End-to-end attention-based large vocabulary speech recognition, 2016\n\n[7] W Chan, N Jaitly, Q Le, O Vinyals, Listen, attend and spell, 2015\n\n[8] A Graves, A Mohamed, G Hinton, Speech recognition with deep recurrent neural networks, 2013\n\n[9] H Tang, L Lu, L Kong, K Gimpel, K Livescu, C Dyer, N Smith, S Renals, End-to-End Neural Segmental Models for Speech Recognition, 2017", "The paper describes some interesting work but for a combination of reasons I think it's more like a workshop-track paper.\nThere is not much that's technically new in the paper-- at least not much that's really understandable. There is some text about a variant of CTC, but it does not explain very clearly what was done or what the motivation was.\nThere are also quite a few misspellings. \nSince the system is presented without any comparisons to alternatives for any of the individual components, it doesn't really shed any light on the significance of the various modeling decisions that were made. That limits the value.\nIf rejected from here, it could perhaps be submitted as an ICASSP or Interspeech paper.", "The paper is interesting, but needs more work, and should provide clear and fair comparisons. Per se, the model is incrementally new, but it is not clear what the strengths are, and the presentations needs to be done more carefully.\n\nIn detail:\n- please fix several typos throughout the manuscript, and have a native speaker (and preferably an ASR expert) proofread the paper\n\nIntroduction\n- please define HMM/GMM model (and other abbreviations that will be introduced later), it cannot be assumed that the reader is familiar with all of them (\"ASG\" is used before it is defined, ...)\n- The standard units that most ASR systems use can be called \"senones\", and they are context dependent sub-phonetic units (see http://ssli.ee.washington.edu/~mhwang/), not phonetic states. Also the units that generate the alignment and the units that are trained on an alignment can be different (I can use a system with 10000 states to write alignments for a system with 3000 states) - this needs to be corrected.\n- When introducing CNNs, please also cite Waibel and TDNNs - they are *the same* as 1-d CNNs, and predate them. They have been extended to 2-d later on (Spatio-temporal TDNNs)\n- The most influential deep learning paper here might be Seide, Li, Yu Interspeech 2011 on CD-DNN-HMMs, rather than overview articles\n- Many papers get rid of the HMM pipeline, I would add https://arxiv.org/abs/1408.2873, which predates Deep Speech\n- What is a \"sequence-level variant of CTC\"? 
CTC is a sequence training criterion\n- The reason that Deep Speech 2 is better on noisy test sets is not only the fact they trained on more data, but they also trained on \"noisy\" (matched) data\n- how is this an end-to-end approach if you are using an n-gram language model for decoding? \n\nArchitecture\n- MFSC are log Filterbanks ...\n- 1D CNNs would be TDNNs\n- Figure 2: can you plot the various transition types (normalized, un-normalized, ...) in the plots? not sure if it would help, but it might\n- Maybe provide a reference for HMM/GMM and EM (forward backward training)\n- MMI was also widely used in HMM/GMM systems, not just NN systems\n- the \"blank\" states do *not* model \"garbage\" frames, if one wants to interpret them, they might be said to model \"non-stationary\" frames between CTC \"peaks\", but these are different from silence, garbage, noise, ...\n- what is the relationship of the presented ASG criterion to MMI? the form of equation (3) looks like an MMI criterion to me?\n\nExperiments\n- Many of the previous comments still hold, please proofread\n- you say there is no \"complexity\" incrase when using \"logadd\" - how do you measure this? number of operations? is there an implementation of \"logadd\" that is (absolutely) as fast as \"add\"?\n- There is discussion as to what i-vectors model (speaker or environment information) - I would leave out this discussion entirely here, it is enough to mention that other systems use adaptation, and maybe re-run an unadapted baselien for comparsion\n- There are techniques for incremental adaptation and a constrained MLLR (feature adaptation) approaches that are very eficient, if one wnats to get into this\n- it may also be interesting to discuss the role of the language model to see which factors influence system performance\n- some of the other papers might use data augmentation, which would increase noise robustness (did not check, but this might explain some of the results in table 4)\n- I am confused by the references in the caption of Table 3 - surely the Waibel reference is meant to be for TDNNs (and should appear earlier in the paper), while p-norm came later (Povey used it first for ASR, I think) and is related to Maxout\n- can you also compare the training times? \n\nConculsion\n- can you show how your approach is not so computationally expensive as RNN based approaches? either in terms of FLOPS or measured times\n", "Sorry for the confusion, what we meant is:\nASG computes P(Y_r|X_r) = S(Y_r|X_r)/sum_{Y}S(Y|X_r) \nMMI computes P(Y_r|X_r) = P(X_r|Y_r)P(Y_r) / (sum_{Y}P(X_r|Y)P(Y))\nThe difference in the conditioning order of Ys and Xs are within the summation. Another difference of ASG with vanilla MMI is that P(Y_r|X_r) is computed using unnormalized scores.", "\"Concerning recommended citations: TIMIT ones are not very relevant, as they report phone error rate (no WER). TIMIT is also a tiny dataset. WSJ-related citations are far from reaching SOTA — and WSJ is an order of magnitude smaller than LibriSpeech.\"\n\nTIMIT/WSJ citations should not be missing because \"tiny dataset\" or \"far from reaching SOTA\" .\n1] Research is progressive, we start off with small problems then move onto bigger ones. We should not discount prior work because they were done on smaller problems or weren't as successful. 
We should acknowledge and compare to prior work.\n2] WSJ is not far from SOTA, see:\nJan Chorowski and Navdeep Jaitly, \"Towards better decoding and language model integration in sequence to sequence models\", in INTERSPEECH 2017. They achieve near DNN-HMM (when compared w/o speaker adaptation).\n\nCitation missing Alex Grave's paper, this is arguably the paper that kicked off the whole field of end2end ASR.\nAlex Graves and Navdeep Jaitly, \"Towards End-To-End Speech Recognition with Recurrent Neural Networks\" in ICML 2014. This citation is critical and missing. \n\nCitation missing for Hori's work. Their CTC+attention model surpasses DNN-HMM models for both chinese and japanese compared to the Kaldi MMI recipe:\nHori et al., \"Advances in Joint CTC-Attention based End-to-End Speech Recognition with a Deep CNN Encoder and RNN-LM\", in INTERSPEECH 2017.\n\nThe authors also selected a weird dataset librespeech instead of WSJ. The vast majority of prior literature on end2end ASR has been done on WSJ, including CTC and seq2seq. A fair comparison needs to be done to CTC and seq2seq, it is very unclear reading this paper how the model compares to other end2end results (i.e., how would it fair compared to label smoothing, CTC+attention model, Latent Sequence Decomposition; all of which is published on WSJ). There should be no reason to avoid comparing new work to prior work/literature.\n\nAlso, one might argue this paper is not end2end as it requires a n-gram LM to get reasonable results. CTC/ASG models do not perform well w/o LM (relative to seq2seq).", "Thanks for the reply.\n\n> ASG can be viewed as related to a MMI criterion with a letter-based LM. However, ASG uses a discriminative model P(label|sound) = P(Y|X) instead of P(X|Y) in MMI. This goes out of the scope of this paper, though.\n\nI hope you are just misremembering. MMI is maximizing P(Y|X). See [1, 2]. MMI is a general loss function, and need not be tied to lattices and HMMs. The only minor difference between MMI and ASG is whether the segmentations are marginalized. Sometimes people marginalize segmentations when using MMI, e.g., in [2]. In this case, MMI and ASG are equivalent.\n\n> It's quadratic in the # of characters in the dictionary, linear in sequence length. CTC is linear in both (dict size/sentence length): it is faster for languages with a large # of characters (Chinese). For English the runtime is dominated by sequence length anyway.\n\nThis should be noted in the paper. CTC can be applied to label sets beyond characters [3, 4], so the dependency on the size of the label set matters.\n\n> This is the first paper to show that letter-based systems can reach phone/senone-based systems performance, on a standard dataset, with no additional data.\n\n> All systems are trained on LibriSpeech and without limitations, our comparison is as meaningful as it gets.\n\nI agree with this statement and I agree the authors have detailed the how, but the more important question is why. Is it because of ASG? Is it because of switching from plain CNNs to gated CNNs? Or is it just because of better tuning? Building a system with known techniques and better tuning should not be considered as a contribution. This is also why I said the comparison in table 4 is meaningless. The authors should at least conduct experiments to show where the improvements are coming from. Please at least compare against CTC, and at least compare plain CNNs against gated CNNs. 
It would be even better if the authors can use the same network architectures as the other papers appeared in table 4 to compare CTC and ASG. Having the right control experiments is the very basic of a scientific study.\n\n[1] L Bahl, P Brown, P de Souza, R Mercer, Maximum Mutual Information Estimation of Hidden Markov Model Parameters for Speech Recognition, 1986\n\n[2] D Povey, PC Woodland, Minimum Phone Error and I-Smoothing for Improved Discriminative Training, 2007\n\n[3] H Soltau, H Liao, H Sak, Neural speech recognizer: Acoutic-to-word LSTM model for large vocabulary speech recognition, 2016\n\n[4] H Liu, Z Zhu, X Li, S Satheesh, Gram-CTC: Automatic Unit Selection and Target Decomposition for Sequence Labelling, 2017", "General comments:\n\nThe point of the paper is that letter-based systems can compete with phone/senone-based systems, with no extra training data. We will clarify that letter-based systems (also called grapheme-based systems) predate all the recommended citations (see \"Context-dependent acoustic modeling using graphemes for large vocabulary speech recognition\", 2002 or \"Grapheme based speech recognition\", 2003). However, previous letter-based work report WER far behind from phone-based systems.\n\nConcerning recommended citations: TIMIT ones are not very relevant, as they report phone error rate (no WER). TIMIT is also a tiny dataset. WSJ-related citations are far from reaching SOTA — and WSJ is an order of magnitude smaller than LibriSpeech. We will add:\n- https://arxiv.org/pdf/1508.01211.pdf: closer than other work to SOTA, and comparable to LibriSpeech in size — not reproducible though (Google data).\n- https://arxiv.org/pdf/1708.00531.pdf: compares ASG from a formal standpoint.\n\nWe never claimed MFSC are novel. We will keep their descriptions for persons less familiar with speech features. We will switch to the more common name \"log mel-filterbanks\".\n\nReviewer 1:\n\n- how is this an end-to-end approach if you are using an n-gram language model for decoding?\n\nSame concept of end-to-end than other existing approaches (e.g. Deep Speech 2 uses an n-gram LM).\n\n- MMI was also widely used in HMM/GMM systems, not just NN systems\n\nIndeed - we will reword.\n\n- what is the relationship of the presented ASG criterion to MMI? [...]\n\nASG can be viewed as related to a MMI criterion with a letter-based LM. However, ASG uses a discriminative model P(label|sound) = P(Y|X) instead of P(X|Y) in MMI. This goes out of the scope of this paper, though.\n\n- you say there is no \"complexity\" incrase when using \"logadd\" [...]\n\nWe meant code complexity (one only needs to replace \"max\" by \"logadd\") - we will fix.\n\n- There is discussion as to what i-vectors model [...]\n\nWe will add a Kaldi baseline with no speaker adapation.\n\n- I am confused by the references in the caption of Table 3 [...]\n\nIndeed - we will fix.\n\n- can you also compare the training times?\n\nHard to do so without ending up comparing \"implementations\".\n\nReviewer 2:\n\n- There is not much that's technically new in the paper [...] There is some text about a variant of CTC [...]\n\nWe are working on a version of Table 2 with CTC-trained models. 
CTC and ASG lead to similar results in our experience; the advantage of ASG is that there is no blank state, which simplifies the decoder implementation.\n\nReviewer 3:\n\n- It is fair to say that this paper contains almost no novelty.\n\nThis is the first paper to show that letter-based systems can reach the performance of phone/senone-based systems, on a standard dataset, with no additional data.\n\n- This paper starts by bashing the complexity of conventional HMM systems [...]\n\nWe do not bash! We cite and explain the lineage of speech recognition systems.\n\n- It is unclear why discriminative training, such as MMI [...], is mentioned\n\n\"discriminative\" is not present in our paper; we mention these criteria as they relate to CTC and ASG.\n\n- The authors argue that ASG is better than CTC [...] because it does not use the blank symbol and can be faster during decoding.\n\nWe argue that ASG (without blank labels) makes the decoder's code simpler, not computationally more efficient.\n\n- [...] in ASG, the search space becomes quadratic in the number of characters, while CTC is still linear [...]\n\nIt's quadratic in the # of characters in the dictionary, linear in sequence length. CTC is linear in both (dict size/sentence length): it is faster for languages with a large # of characters (Chinese). For English the runtime is dominated by sequence length anyway.\n\n- The citation style in section 2.4 seems off.\n\nWe will fix.\n\n- Details about training [...] are missing. Does no batching [...]\n\nWe will fix; yes: batch = 1 utterance in section 3.2.\n\n- [...] why is there a huge difference in real-time factors between the clean and other set? Something is wrong unless [...]\n\nThere is nothing wrong: the decoder has a score-based limit on the beam width, and noisy speech produces way larger beams.\n\n- The paper can be significantly improved if the authors compare the performance and decoding speed against CTC with the same gated convnet\n\nWe are working on it.\n\n- None of the comparison in table 4 is really meaningful [...]\n\nAll systems are trained on LibriSpeech and without limitations; our comparison is as meaningful as it gets.\n\n- It is also unclear why speaker-adaptive training is not needed\n\nWe did not say it is not needed; it is likely that speaker adaptation helps to reduce WER even further (future work).\n\n- At which layer do the features become speaker invariant?\n\nIt gets very hard to classify speakers (and the features thus become speaker invariant) after the first few layers (part of future work).\n\n- Can the system improve further if speaker-adaptive [...]?\n\nPossibly (future work).", "The paper seems to completely ignore a set of works on character-based ASR with attention networks:\n - Chorowski et al. \"Attention-based models for speech recognition.\", 2015\n - Chan et al. \"Listen, attend and spell\", 2015\n - Bahdanau et al. \"End-to-end attention-based large vocabulary speech recognition.\", 2016\nand some milestone works with CTC loss function:\n - Graves and Jaitly \"Towards end-to-end speech recognition with recurrent neural networks.\", 2014\n - Zhang et al. \"Towards end-to-end speech recognition with deep convolutional neural networks.\", 2017\n\nThis work uses a rather unusual corpus, LibriSpeech; therefore the performance is not comparable to the works listed above (that benchmark mainly on WSJ). LibriSpeech is great, but the paper lacks comparison to prior work on end-to-end recognition.\n\nMFSC features are presented as an invention in this paper.
Such features are usually referred to as \"log-mel filterbank\" in the literature." ]
[ 3, 6, 4, -1, -1, -1, -1, -1 ]
[ 5, 5, 4, -1, -1, -1, -1, -1 ]
[ "iclr_2018_Hyig0zb0Z", "iclr_2018_Hyig0zb0Z", "iclr_2018_Hyig0zb0Z", "S1R0OMGGG", "iclr_2018_Hyig0zb0Z", "Sk7YcE1fG", "iclr_2018_Hyig0zb0Z", "iclr_2018_Hyig0zb0Z" ]
iclr_2018_S1Ow_e-Rb
How do deep convolutional neural networks learn from raw audio waveforms?
Prior work on speech and audio processing has demonstrated the ability to obtain excellent performance when learning directly from raw audio waveforms using convolutional neural networks (CNNs). However, the exact inner workings of a CNN remain unclear, which hinders further developments and improvements into this direction. In this paper, we theoretically analyze and explain how deep CNNs learn from raw audio waveforms and identify potential limitations of existing network structures. Based on this analysis, we further propose a new network architecture (called SimpleNet), which offers a very simple but concise structure and high model interpretability.
rejected-papers
The reviewers rightly point out that the presented analysis is limited and that the experimental results are not extensive enough. Moreover, several existing works that use raw waveforms already include interesting analyses of what the network is trying to learn. Given these comments, the AC recommends that the paper be rejected.
train
[ "B1DXXLUNG", "r19T4gcgM", "HydgKG5ez", "rJ37LJolM", "SkMmiXhXG", "HyycjgXzz", "B1wpueQGf", "rJc5ue7fG", "ByLRPe7fz", "SJZsPxmGM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author" ]
[ "0. Noted, thanks for clarifying this.\n\n1. I'm still not convinced. You say that \"the processing of raw waveforms will also go through a similar time-feature representation\", but how do you know where that is? At which point in the network do you call something a \"time-feature representation\"? Isn't every layer in the network technically producing a time-feature representation? I've actually come to suspect you're talking about the point in the network where the time dimension completely disappears (e.g. through pooling), but you should definitely state this more clearly in that case. It is incorrect and confusing to call one type of representation a \"temporal signal\" and another \"features\". A temporal signal also consists of (a sequence of) features.\n\n2. I think Section 2.2.3 does not show convincingly that useful information is destroyed at all. I don't dispute that there will be aliasing effects, but these are only problematic when looking at each feature in isolation. A neural network layer produces a vector of complementary features, and these can disambiguate each other even if a lot of aliasing occurs in each feature individually. So I still disagree fundamentally with this argument.\n\n3. To compare models fairly, first choose a basis of comparison, e.g. model capacity (which could be expressed in # learnable parameters) or inference speed, or something else. I think the former is most appropriate here. Once you fix the number of parameters it should be pretty straightforward to compare a few different architectures of different depths.\n\n4. While I agree that the spectral properties of audio signals and images are very different, most salient information in images is high-frequency, and the point of most neural nets is precisely to pick up on this salient information. Since pooling layers work just fine in discriminative networks trained on images (admittedly they are not strictly necessary, either), I don't think this argument makes sense either. Sure, pooling filtered audio signals will remove information, but you simply cannot ignore the interaction with nonlinearities here, nor the fact that that you always have multiple feature signals that are complementary to each other and can disambiguate each other.\n\n\nI've decided not to change my rating as I think almost all of my criticisms still hold for the current version of the paper.", "Summary: \n\nThe authors aim to analyze what deep CNNs learn, and end up proposing “SimpleNet”, which essentially reduces the early feature extraction stage of the network to a single convolutional layer, which is initialized using pre-defined filters. The authors step through a specific example involving bandpass filters to illustrate that multiple layers of filtering can be reduced to a single layer in the linear case, as well as the limitations of pooling. Results on a 4-class speaker emotion recognition task demonstrate some advantage over other, deeper architectures that have been proposed, as well as predefined feature processing.\n\nReview:\n\nThe authors’ arguments and analysis are in my assessment rudimentary---the effects of pooling and cascading multiple linear convolutions are well appreciated by researchers in the field. Furthermore, the adaptation of “front-end” signal processing modules in and end-to-end manner has been considered extensively before (e.g. Sainath et al., 2015), and recent work on very deep networks for signal processing that shows gains on more substantial tasks have not been cited (e.g. 
Dai, Wei, et al. below). Finally, the experimental results, considering the extensive previous work in this area, are insufficient to establish novel performance in lieu of novel ideas.\n\nOverall Recommendation: \n\nOverall, this paper is a technical report that falls well below the acceptance threshold for ICLR for the reasons cited above. Reject. \n", "The paper proposes a CNN-based based approach for speech processing using raw waveforms as input. An analysis of convolution and pooling layers applied on waveforms is first presented. An architecture called SimpleNet is then presented and evaluated on two speech tasks: emotion recognition and gender classification. \n\nThis paper propose a theoretical analysis of convolution and pooling layers to motivate the SimpleNet architecture. To my understanding, the analysis is flawed (see comments below). The SimpleNet approach is interesting but not sufficiently backed with experimental results. The network analysis is minimal and provides almost no insights. I therefore recommend to reject the paper. \n\nDetailed comments:\n\nSection 1:\n\n* “Therefore, it remains unknown what actual features CNNs learn from waveform”. This is not true, several works on speech recognition have shown that a convolution layer taking raw speech as input can be seen as a bank of learned filters. For instance in the context of speech recognition, [9] showed that the filters learn phoneme-specific responses, [10] showed that the learned filters are close to Mel filter banks and [7] showed that the learned filters are related to MRASTA features and Gabor filters. The authors should discuss these previous works in the paper.\n\nSection 2:\n\n* Section 2.1 seems unnecessary, I think it’s safe to assume that the Shannon-Nyquist theorem and the definition of convolution are known by the reader.\n\n* Section 2.2.2 & 2.2.3: I don't follow the justification that stacking convolutions are not needed: the example provided is correct if two convolutions are directly stacked without non-linearity, but the conclusion does not hold with a non-linearity and/or a pooling layer between the convolutions: two stacked convolutions with non-linearities are not equivalent to a single convolution. To my understanding, the same problem is present for the pooling layer: the presented conclusion that pooling introduces aliasing is only valid for two directly stacked pooling layers and is not correct for stacked blocks of convolution/pooling/non-linearity.\n\n* Section 2.2.5: The ReLU can be seen as a half-wave rectifier if it is applied directly to the waveform. However, it is usually not the case as it is applied on the output of the convolution and/or pooling layers. Therefore I don’t see the point of this section. \n\n* Section 2.2.6: In this section, the authors discuss the differences between spectrogram-based and waveforms-based approaches, assuming that spectrogram-based approach have fixed filters. But spectrogram can also be used as input to CNNs (i.e. using learned filters) for instance in speech recognition [1] or emotion recognition [11]. Thus the comparison could be more interesting if it was between spectrogram-based and raw waveform-based approaches when the filters are learned in both cases. 
\n\nSection 3:\n\n* Figure 4 is very interesting, and is in my opinion a stronger motivation for SimpleNet that the analysis presented in Section 2.\n\n* Using known filterbanks such as Mel or Gammatone filters as initialization point for the convolution layer is not novel and has been already investigated in [7,8,10] in the context of speech recognition. \n\nSection 4:\n\n* On emotion recognition, the results show that the proposed approach is slightly better, but there is some issues: the average recall metric is usually used for this task due to class imbalance (see [1] for instance). Could the authors provide results with this metric ? Also IEMOCAP is a well-used corpus for this task, could the authors provide some baselines performance for comparison (e.g. [11]) ? \n\n* For gender classification, there is no gain from SimpleNet compared to the baselines. The authors also mention that some utterances have overlapping speech. These utterances are easy to find from the annotations provided with the corpus, so it should be easy to remove them for the train and test set. Overall, in the current form, the results are not convincing.\n\n* Section 4.3: The analysis is minimal: it shows that filters changed after training (as already presented in Figure 4). I don't follow completely the argument that the filters should focus on low frequency. It is more informative, but one could expect that the filters will specialized, thus some of them will focus on high frequencies, to model the high frequency events such as consonants or unvoiced event. \nIt could be very interesting to relate the learned filters to the labels: are some filters learned to model specific emotions ? For gender classification, are some filters focusing on the average pitch frequency of male and female speaker ?\n\n* Finally, it would be nice to see if the claims in Section 2 about the fact that only one convolution layer is needed and that stacking pooling layers can hurt the performance are verified experimentally: for instance, experiments with more than one pair of convolution/pooling could be presented.\n\nMinor comments:\n\n* More references for raw waveforms-based approach for speech recognition should be added [3,4,6,7,8,9] in the introduction.\n\n* I don’t understand the first sentence of the paper: “In the field of speech and audio processing, due to the lack of tools to directly process high dimensional data …”. Is this also true for any pattern recognition fields ? \n\n* For the MFCCs reference in 2.2.2, the authors should cite [12].\n\n* Figure 6: Only half of the spectrum should be presented.\n\nReferences: \n\n[1] H. Lee, P. Pham, Y. Largman, and A. Y. Ng. Unsupervised feature learning for audio classification using convolutional deep belief networks. In Advances in Neural Information Processing Systems 22, pages 1096–1104, 2009.\n\n[2] Schuller, Björn, Stefan Steidl, and Anton Batliner. \"The interspeech 2009 emotion challenge.\" Tenth Annual Conference of the International Speech Communication Association. 2009.\n\n[3] N. Jaitly, G. Hinton, Learning a better representation of speech sound waves using restricted Boltzmann machines, in: Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2011, pp. 5884–5887.\n\n[4] D. Palaz, R. Collobert, and M. Magimai.-Doss. 
Estimating Phoneme Class Conditional Probabilities from Raw Speech Signal using Convolutional Neural Networks, INTERSPEECH 2013, pages 1766–1770.\n\n[5] Van den Oord, Aaron, Sander Dieleman, and Benjamin Schrauwen. \"Deep content-based music recommendation.\" Advances in neural information processing systems. 2013.\n\n[6] Z.Tuske, P.Golik, R.Schluter, H.Ney, Acoustic Modeling with Deep Neural Networks Using Raw Time Signal for LVCSR,\nin: Proceedings of the Annual Conference of the International Speech Communication Association (INTERSPEECH), Singapore, 2014, pp. 890–894.\n\n[7] P. Golik, Z. Tuske, R. Schlu ̈ter, H. Ney, Convolutional Neural Networks for Acoustic Modeling of Raw Time Signal in LVCSR, in: Proceedings of the Annual Conference of the International Speech Communication Association (INTERSPEECH), 2015, pp. 26–30.\n\n[8] Yedid Hoshen and Ron Weiss and Kevin W Wilson, Speech Acoustic Modeling from Raw Multichannel Waveforms, International Conference on Acoustics, Speech, and Signal Processing, 2015.\n\n[9] D. Palaz, M. Magimai-Doss, and R. Collobert. Analysis of CNN-based Speech Recognition System using Raw Speech as Input, INTERSPEECH 2015, pages 11–15.\n\n[10] T. N. Sainath, R. J. Weiss, A. Senior, K. W. Wilson, and O. Vinyals. Learning the Speech Front-end With Raw Waveform CLDNNs. Proceedings of the Annual Conference of the International Speech Communication Association (INTERSPEECH), 2015.\n\n[11] Satt, Aharon & Rozenberg, Shai & Hoory, Ron. (2017). Efficient Emotion Recognition from Speech Using Deep Learning on Spectrograms. 1089-1093. Interspeech 2017.\n\n[12] S. Davis and P. Mermelstein. Comparison of parametric representations for monosyllabic word recognition in continuously spoken sentences. IEEE Transactions on Acoustics, Speech and Signal Processing, 28(4):357–366, 1980.", "The paper provides an analysis of the representations learnt in convolutional neural networks that take raw audio waveforms as input for a speaker emotion recognition task. Based on this analysis, an architecture is proposed and compared to other architectures inspired by other recent work. The proposed architecture overfits less on this task and thus performs better.\n\nI think this work is not experimentally strong enough to draw the conclusions that it draws. The proposed architecture, aptly called \"SimpleNet\", is relatively shallow compared to the reference architectures, and the task that is chosen for the experiments is relatively small-scale. I think it isn't reasonable to draw conclusions about what convnets learn in general from training on a single task, and especially not a small-scale one like this. \n\nMoreover, SoundNet, which the proposed architecture is compared to, was trained on (and designed for) a much richer and more challenging task originally. So it is not surprising at all that it overfits dramatically to the tasks chosen here (as indicated in table 1), and that a much shallower network with fewer parameters overfits less. This seems obvious to me, and contrary to what's claimed in the paper, it provides no convincing evidence that shallow architectures are inherently better suited for raw audio waveform processing. This is akin to saying that LeNet-5 is a better architecture for image classification than Inception, because the latter overfits more on MNIST. 
Perhaps using the original SoundNet task, which is much more versatile, would have lent some more credibility to these claims.\n\nThe analysis in section 2.2 is in-depth, but also not very relevant: it ignores the effects of nonlinearities, which are an essential component of modern neural network architectures. Studying their effects in the frequency domain would actually be quite interesting. It is mentioned that the ReLU nonlinearity acts as a half-wave rectifier, but the claim that its effect in the frequency domain is small compared to aliasing is not demonstrated. The claim that \"ReLU and non-linear activations can improve the network performance, but they are not the main factors in the inner workings of CNNs\" is also unfounded.\n\nThe conclusion that stacking layers is not useful might make sense in the absence of nonlinearities, but when each layer includes a nonlinearity, the obvious point of stacking layers is to improve the expressivity of the network. Studying aliasing effects in raw audio neural nets is a great idea, but I feel that this work takes some shortcuts that make the analysis less meaningful.\n\n\n\n\nOther comments:\n\nThe paper is quite lengthy (11 pages of text) and contains some sections that could easily be removed, e.g. 2.1.1 through 2.1.3 which explain basic signal processing concepts and could be replaced by a reference. In general, the writing could be much more concise in many places.\n\nThe paper states that \"it remains unknown what actual features CNNs learn from waveforms.\". There is actually some prior work that includes some analysis on what is learnt in the earlier layers of convnets trained on raw audio: \n\"Learning the Speech Front-end With Raw Waveform CLDNNs\", Sainath et al.\n\"Speech acoustic modeling from raw multichannel waveforms\", Hoshen et al.\n\"End-to-end learning for music audio\", Dieleman & Schrauwen\nOnly the first one is cited, but not in this context. I think saying \"it remains unknown\" is a bit too strong of an expression.\n\nThe meaning of the following comment is not clear to me: \"because in computer vision, the spatial frequency is not the only information the model can use\". Surely the frequency domain and the spatial domain are two different representations of the same information contained in an image or audio signal? So in that sense, spatial frequency does encompass all information in an image.\n\nThe implication that high-frequency information is less useful for image-based tasks (\"the spatial frequency of images is usually low\") is incorrect. While lower frequencies dominate the spectrum more obviously in images than in audio, lots of salient information (i.e. edges, textures) will be high-frequency, so models would still have to learn high-frequency features to perform useful tasks.\n\nWaveNet is mentioned (2.2.4) but not cited. WaveNet is a fairly different architecture than the ones discussed in this paper and it would be useful to at least discuss it in the related work section. A lot of the supposed issues discussed in this paper don't apply to WaveNet (e.g. there are no pooling layers, there is a multiplicative nonlinearity in each residual block).\n\nThe paper sometimes uses concepts without clearly defining them, e.g. \"front-end layers\". 
Please clearly define each concept when it is first introduced.\n\nThe paper seems to make a fairly arbitrary distinction between layers that perform signal filtering operations, and layers that don't - but every layer can be seen as a (possibly nonlinear) filtering operation. Even if SimpleNet has fewer \"front-end layers\", surely the later layers in the network can still introduce aliasing? I think the implicit assumption that later layers in the network perform a fundamentally different kind of operation is incorrect.\n\nIt has been shown that even random linear filters can be quite frequency-selective (see e.g. \"On Random Weights and Unsupervised Feature Learning\", Saxe et al.). This is why I think the proposed \"changing rate\" measure is a poor choice to show effective training. Moreover, optimization pathways don't have to be linear in parameter space, and oscillations can occur. Why not measure the difference with the initial values (at iteration 0)? It seems like that would prove the point a bit better.\n\nManually designing filters to initialize the weights of a convnet has been done in e.g. Sainath et al. (same paper as mentioned before), so it would be useful to refer to it again when this idea is discussed.\n\nIn SpecNet, have the magnitude spectrograms been log-scaled? This is common practice and it can make a dramatic difference in performance. If you haven't tried this, please do.\n", "We fixed the typo in the Section 2.2.4, and adjusted the references according to the reviewers' suggestion in the new version.", "We thank the reviewer for the very helpful review of our paper! There are some points we would like to discuss with the reviewer (listed below). We would appreciate if the reviewer could provide us with another round of comments/suggestions.\n\n1. The reviewer says our work is rudimentary since the effect of pooling/convolutional layers are well appreciated by researchers in the field. We do not agree with this statement; experimental improvements are of course important and meaningful, but when it conflicts with theoretical analysis, there must be something worth to study and investigate. Especially for widely used tools, we actually need more studies instead of completely trusting them. \n\nFor example, in [1] (the reference provided), the 18-layer network actually has minor improvements compared to [2], which is a 4-layer network. The author of [1] states that the limited improvement is due to 1) [2] uses 10-folder cross-validation, [2] uses hold-out validation. There should not be much of a difference and 10-folder cross-validation is actually more reasonable. Thus, we do not think that this is a strong reason. 2) [2] uses 44100Hz recordings, [1] uses 8000Hz recording. If this is the reason limiting the achievable improvement, this means that downsampling will lower the performance, which proves our point to ‘not using pooling layers aggressively’.\n\nAnother example is in Table.6 of [3]; the authors also report that the output of the middle layer (rather than the last layer) of the network is the most discriminative feature, which shows more layers make the representation worse and thus also partly proves our conclusion that ‘stacking convolutional/pooling layers is not always effective’.\n\nIn summary, current studies on using deep learning to process raw waveforms achieve performance improvement empirically, but we also see some phenomena that cannot be explained. 
In this work, we try to analyze it from the signal processing perspective, which is proved by our experiment, and our analysis can also explain some phenomena of previous work. Therefore, we do not think that this work is rudimentary.\n\n2. Reference/experiment. We apologize for missing some references in this paper. However, due to the very large volume of works that use DNNs to process raw waveforms, we are not able to cite all them in our paper. Similarly, there are a large number of tasks in the audio field, including speech recognition, speech emotion recognition, speaker recognition, and environment recognition. We are not able to show experiments for all these tasks. Therefore, we would appreciate suggestions from the reviewer what kind of experiments could make our conclusions most convincing to allow us to improve this work.\n\n[1] Dai, W., Dai, C., Qu, S., Li, J., & Das, S. (2016). Very deep convolutional neural networks for raw waveforms. arXiv preprint arXiv:1610.00087.\n\n[2] Piczak, K. J. (2015, September). Environmental sound classification with convolutional neural networks. In Machine Learning for Signal Processing (MLSP), 2015 IEEE 25th International Workshop on (pp. 1-6). IEEE.\n\n[3] Aytar, Y., Vondrick, C., & Torralba, A. (2016). Soundnet: Learning sound representations from unlabeled video. In Advances in Neural Information Processing Systems (pp. 892-900).", "We thank the reviewer for the very thorough and helpful review (especially the extensive references)! There are some points we would like to discuss with the reviewer (listed below). We would appreciate if the reviewer can give us another round of comments/suggestions.\n\n1. In [1,2,3], the analysis was focused on only the first and second convolution layers (since these networks are relatively shallow). For deeper networks, there is no investigation yet of the functioning and inner workings of deeper layers, e.g., in [4], the authors only analyze the first convolutional layer, while SoundNet has 8 layers. Further, in Table.6 of [4], the authors also report that the output of the middle layer (rather than the last layer) is the most discriminative feature, but they do not provide an analysis. In fact, it shows that more layers make the representation worse and thus partly proves our conclusion. In computer vision research, the functioning of the first convolutional layer is trivial, but the functioning of deeper layers was not clear until the work in [5] was presented. Thus, we believe that the study of the inner workings of deeper layers is different from previous research, but also very interesting.\n\n2. The reviewer argues that ReLU can be seen as a half-wave rectifier only if it is applied directly to the waveform, not the output of the convolutional/pooling layer. In fact, there is no real ‘original’ waveform. The waveform we input to the DNNs has already been explicitly or implicitly filtered/downsampled by the recording device. The convolutional/pooling layer just performs another round of filtering/downsampling. Thus, the output of the convolutional/pooling layer is still a temporal signal and follows the basic rules of signal processing. Hence, ReLU still can be regarded as the half-wave rectifier when it is applied to the output of mid-layers. 
Another concern of the reviewer is that aliasing is not valid for the output of stacked blocks of convolutional/pooling/non-linearity; for the same reason stated above, the output of the stacked blocks is still a temporal signal, which will be affected by the aliasing effect.\n\nNote that this analysis is from the perspective of signal processing. We actually can think of the process of front-end layers from two different perspectives: the perspective of machine learning and the perspective of signal processing. From the perspective of machine learning, the deeper architecture and the non-linearities all add to the expressivity (capacity) of the model and thus possibly (but not always) help improve the performance. From the perspective of signal processing, the processing of the front-end layers should extract useful information from the signal, but at least not destroy the information of the signal. The aggressive pooling layer/ aliasing effect actually destroys the information of the signal (this is similar to aggressively downsampling the input waveform), which is a strong motivation for SimpleNet.\n\n3. The review is correct that stacking convolutional layers with non-linearity cannot be replaced by a single convolutional layer. But as stated in Section 2.2.5, the most widely used non-linearity ReLU, from the perspective of signal processing, only adds harmonic frequency components to the signal, which leads to a distortion (rather than a meaningful feature extraction). Thus, we do have reason to doubt the effectiveness of stacking convolutional layers. \n\n4. The reviewer says the spectrogram can be used as input for CNNs [6,7]. This is correct, and we do compare to specNet in our experiment. The architecture of specNet is very similar to the architecture proposed in [7]. Note that due to the limited number of FFT points, the spectrogram usually have fixed and a limited number of FFT bins; suppose the spectrogram has 80 FFT bins for [0Hz,8000Hz] and the average energy/magnitude of each 100Hz range will be placed in the same bin, then the difference of each spectra components within this 100Hz range cannot be recognized in the further processing. Also, the size/number of bins are fixed. In contrast, for the raw waveform approach, the learnable filters (with enough number of points) can perform flexible filtering.", "5. About the dataset and experiment, similar to other works, we combine happy and excited together as the new happy class, which makes the classes roughly balanced. The IEMOCAP database is not designed for gender recognition, thus some utterances contain multiple speakers, but are only annotated as a single speaker. We would greatly appreciate any suggestions by the reviewer on how to avoid this issue. In our experiments, we conducted a strict leave-one-session-out evaluation strategy. In [8], the authors use the same evaluation strategy and achieved 52.84%, which is similar to SimpleNet (52.9%), while SimpleNet is a simpler approach.\n\n[1] Golik, P., Tüske, Z., Schlüter, R., & Ney, H. (2015). Convolutional neural networks for acoustic modeling of raw time signal in LVCSR. In Sixteenth Annual Conference of the International Speech Communication Association.\n\n[2] Hoshen, Y., Weiss, R. J., & Wilson, K. W. (2015, April). Speech acoustic modeling from raw multichannel waveforms. In Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on (pp. 4624-4628). IEEE.\n\n[3] Sainath, T. N., Weiss, R. J., Senior, A., Wilson, K. W., & Vinyals, O. (2015). 
Learning the speech front-end with raw waveform CLDNNs. In Sixteenth Annual Conference of the International Speech Communication Association.\n\n[4] Aytar, Y., Vondrick, C., & Torralba, A. (2016). Soundnet: Learning sound representations from unlabeled video. In Advances in Neural Information Processing Systems (pp. 892-900).\n\n[5] Zeiler, M. D., & Fergus, R. (2014, September). Visualizing and understanding convolutional networks. In European conference on computer vision (pp. 818-833). Springer, Cham.\n\n[6] Lee, H., Pham, P., Largman, Y., & Ng, A. Y. (2009). Unsupervised feature learning for audio classification using convolutional deep belief networks. In Advances in neural information processing systems (pp. 1096-1104).\n\n[7] Satt, A., Rozenberg, S., & Hoory, R. (2017). Efficient Emotion Recognition from Speech Using Deep Learning on Spectrograms. Proc. Interspeech 2017, 1089-1093.\n\n[8] Ghosh, S., Laksana, E., Morency, L. P., & Scherer, S. (2016). Representation Learning for Speech Emotion Recognition. In INTERSPEECH (pp. 3603-3607).", "We thank the reviewer for the very thorough and helpful review of the paper! There are some points we would like to discuss with the reviewer (listed below) and we would appreciate it if the reviewer could give us another round of comments/suggestions.\n\n0. The ‘waveNet’ in Section 2.2.4 is a typo, we actually meant the network proposed in [1], which we refer as waveRNN in the paper, this network structure is also used as a baseline in [2] for a variety of tasks, thus we chose to compare our work to waveRNN. \n\n1. The definition of ‘front-end layers’. A spectrogram can be regarded as a time-frequency representation, i.e., a matrix of shape [num_time_steps, num_frequency_bins], each element of the matrix is the magnitude or energy of the frequency component. If we regard each element as a feature, we can also consider it as a time-feature representation. As described in Section 2.2.1, no matter if we explicitly use windowing or implicitly fewer pooling layers, the processing of raw waveforms will also go through a similar time-feature representation, as shown in the white box in Figure 5. We define the layers before the time-feature representation as front-end layers and the layers after the time-feature representation as high-level layers. One of the reviewer’s concern is that the high-level layers will also introduce aliasing effects. However, this is not the case, because the aliasing effect only impacts the temporal signal; the high-level layers are processing the features. \n\n2.The purpose of the work is not to claim that shallow networks can outperform deep networks; actually, in our experiments, when we compare SimpleNet-CNN and SimpleNet-RNN, we show that deeper high-level architectures can be beneficial. Instead, we doubt the effectiveness of deep front-end architectures. We actually can think of the process of front-end layers from two different perspectives: the perspective of machine learning and the perspective of signal processing. From the perspective of machine learning, the deeper architecture and the non-linearities all add to the expressivity (capacity) of the model and thus possibly (but not always) help improve the performance. From the perspective of signal processing, the processing of front-end layers should extract useful information from the signal, but at least not destroy the information in the signal. As shown in Section 2.2.3, the pooling layer/aliasing effect destroys the useful information unrecoverably. 
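As a toy numerical illustration of this point (a sketch, not code from the paper; plain subsampling stands in for the decimation step implicit in an aggressive pooling stride):

import numpy as np

sr = 16000
t = np.arange(sr) / sr
tone_7k = np.cos(2 * np.pi * 7000 * t)  # representable at a 16 kHz sampling rate
tone_1k = np.cos(2 * np.pi * 1000 * t)  # 1 kHz reference tone

# Keeping every 4th sample drops the effective rate to 4 kHz (Nyquist 2 kHz),
# so the 7 kHz tone folds onto 1 kHz and the two signals become indistinguishable.
print(np.allclose(tone_7k[::4], tone_1k[::4]))  # prints True

Once two different inputs collide like this, no later layer can separate them again, which is the sense in which we say the information is destroyed.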
ReLU adds harmonic frequency components, which also leads to distortion. Thus, from the perspective of signal processing, it is difficult to support the claim that deep front-end structures are useful. \n\n3. We agree with the reviewer that more experiments (e.g., more tasks and more datasets) will make our conclusion more convincing. However, we do not agree that the overfitting problem makes the comparison unfair. In fact, one can always argue that the dataset is too small for large networks. Then the comparison of deep and shallow networks will be impossible. The IEMOCAP dataset we use has more than 5000 utterances in the selected 4 classes, which is at least large enough for waveRNN, because waveRNN was originally tested using a similar size dataset. SoundNet uses a much larger training set since it is semi-unsupervised, but even using such a large training set, in Table.6 of [3], the authors also report that the output of the middle layer (rather than the last layer) is the most discriminative feature, which shows more layers make the representation worse and therefore partly supports our conclusion. In addition, we use early stop for all experiments. If the reviewer could provide additional information on how to make the comparison more fair for networks with different numbers of layers (and what kinds of experiments would make our conclusion more convincing), this would greatly help us with improving our work. ", "4. The reviewer is correct that high-frequency information is important in image-based tasks, and we are not denying this point. For images, the spatial frequency is relatively low compared to the aliasing threshold, and thus, the aliasing effect is less likely to happen when the images are down-sampled. Images with high spatial frequency can be found at https://web.stanford.edu/class/ee368b/Projects/panu/pages/aliasing.html; natural images, e.g., images in ImageNet, rarely have such high spatial frequency. In contrast, audio waveforms usually cannot handle aggressive down-sampling, because the sampling rate is usually chosen as the minimum value to keep the useful information and avoid aliasing, e.g., for human speech that was sampled at 16,000Hz, the sound quality will substantially drop after a pooling layer with pooling size of 4. The reason to do this comparison is to argue that we should not always follow the approach used to design DNNs for images when designing DNNs for audios, because they are different representations. \n\nReferences:\n\n[1] Trigeorgis G, Ringeval F, Brueckner R, et al. Adieu features? End-to-end speech emotion recognition using a deep convolutional recurrent network[C]//Acoustics, Speech and Signal Processing (ICASSP), 2016 IEEE International Conference on. IEEE, 2016: 5200-5204.\n\n[2] Schuller, B., Steidl, S., Batliner, A., Bergelson, E., Krajewski, J., Janott, C., ... & Warlaumont, A. S. (2017, February). The INTERSPEECH 2017 computational paralinguistics challenge: Addressee, cold & snoring. In Computational Paralinguistics Challenge (ComParE), Interspeech 2017.\n\n[3] Aytar, Y., Vondrick, C., & Torralba, A. (2016). Soundnet: Learning sound representations from unlabeled video. In Advances in Neural Information Processing Systems (pp. 892-900).\n" ]
[ -1, 3, 3, 2, -1, -1, -1, -1, -1, -1 ]
[ -1, 4, 5, 5, -1, -1, -1, -1, -1, -1 ]
[ "SJZsPxmGM", "iclr_2018_S1Ow_e-Rb", "iclr_2018_S1Ow_e-Rb", "iclr_2018_S1Ow_e-Rb", "iclr_2018_S1Ow_e-Rb", "r19T4gcgM", "HydgKG5ez", "HydgKG5ez", "rJ37LJolM", "rJ37LJolM" ]
iclr_2018_BybQ7zWCb
“Style” Transfer for Musical Audio Using Multiple Time-Frequency Representations
Neural Style Transfer has become a popular technique for generating images of distinct artistic styles using convolutional neural networks. This recent success in image style transfer has raised the question of whether similar methods can be leveraged to alter the “style” of musical audio. In this work, we attempt long time-scale high-quality audio transfer and texture synthesis in the time-domain that captures harmonic, rhythmic, and timbral elements related to musical style, using examples that may have different lengths and musical keys. We demonstrate the ability to use randomly initialized convolutional neural networks to transfer these aspects of musical style from one piece onto another using 3 different representations of audio: the log-magnitude of the Short Time Fourier Transform (STFT), the Mel spectrogram, and the Constant-Q Transform spectrogram. We propose using these representations as a way of generating and modifying perceptually significant characteristics of musical audio content. We demonstrate each representation's shortcomings and advantages over others by carefully designing neural network structures that complement the nature of musical audio. Finally, we show that the most compelling “style” transfer examples make use of an ensemble of these representations to help capture the varying desired characteristics of audio signals.
rejected-papers
The paper extends an existing work with three different frequency representations of audio and the necessary network structure modifications for music style transfer. It is an interesting study but does not provide "sufficiently novel or justified contributions compared to the baseline approach of Ulyanov and Lebedev". Also, the revisions cannot fully address reviewer 2's concerns.
train
[ "rJAGTitgG", "B1Lbmvmef", "rJ978GcgG", "BJPoXdaXf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author" ]
[ "This paper describes improvements to a system described in a blog post for musical style transfer. Such a system is difficult to evaluate, but examples are presented where the style of one song is applied to the content of another. These audio examples show that the system produces somewhat reasonable mixtures of two songs, but suggest that if the system instead followed the (mostly) accepted rules for cover song generation, it could make the output much more pleasant to listen to. Additional evaluation includes measuring correlations between style songs and the output to ensure that it is not being used directly as well as some sort of measure of key invariance that is difficult to interpret. The paper does not completely define the mathematical formulation of the system, making it difficult to understand what is really going on.\n\nThe current paper changes the timbre, rhythm, and harmony of the target content song. Changing the harmony is problematic as it can end up clashing with the generated melody or just change the listener's perception of which song it is. I suggest instead attempting to generate a cover version of the content song in the style of the style song. Cover songs are re-performances of an existing (popular) song by another artist. For example, Jimi Hendrix covered Bob Dylan's \"All along the watchtower\" and the Hendrix version became more popular than the original. This is essentially artist A performing a song by artist B, which is very similar to the goal of the current paper. Cover songs almost always maintain the lyrics, melody, and harmony of the original, while changing the timbre, vocal style, tempo, and rhythmic information. This seems like a good way to structure the problem of musical style transfer. Many systems exist for identifying cover songs, see the relevant publications at the International Society for Music Information Retrieval (ISMIR) Conference. Few systems do something with cover songs after they have been identified, but they could be used for training a system like the one proposed here, if it could be trained. \n\nAnother musically questionable operation is pooling across frequency in the constant-Q transform representation. In western music, adjacent notes are very different from one another and are usually not played in the same key, for example C and C#. Thus, pooling them together to say that one of them is present seems to lose useful information. As part of the pooling discussion, the paper includes an investigation of the key-invariance of the model. Results from this are shown in figure 5, but it is difficult to interpret this figure. What units is the mean squared error measured in? What would be a big value? What would be a small value? What aspects of figure 5 specifically \"confirm that changing key between style [and] content has less of an effect on our proposed key-invariant content representations\"?\n\nSection 3.1, which describes the specifics of the model, is confusing. What exactly are S, C, W, and G? What are their two dimensions indexed by i and j? How do you compute them from the input? Which parameters in this model are learned and which are just calculated? 
Is there any training or is L(X,C,S) just optimized at test time?\n\nFinally, the evaluation of the texture generation part of the algorithm could be compared to existing texture generation algorithms (there are several) such as McDermott & Simoncelli (2011, NEURON), which even has code available online.\n\n\n\nMinor comments\n--------------\n\np2: \"in the this work\" typo\n\np2: \"an highly\" typo\n\np2: \"The first method... The latter class of methods\" confusing wording. Is the second one a different method or referring back to the previous method? If it's different, say \"The second method...\"\n\np7: Please describe kernel sizes in real units (e.g., ms, Hz, cents) as well as numbers of bins\n\n\n\nAfter revision/response\n--------------------------------\nThe revisions of the paper have made it clearer as to what is going on, although the description of the algorithm itself could still be described more mathematically to really make it clear. It is more clear what's going on in figure 5, although it could also be further clarified whether the green bars are showing the distance between log magnitude STFTs of the transposed \"style\" snippets and the untransposed \"content\" snippets directly and so provide an upper bound on the distances. My overall rating of the paper has not changed.", "This paper studies style transfer for musical audio, and largely proposes some additions to the framework proposed by Ulyanov and Lebedev. The changes are designed to improve the long-term temporal structure and harmonic matching of the stylized audio. They carry out a few experiments to demonstrate how their proposed approach improves upon the baseline model.\n\nOverall, I don't think this paper provides sufficiently novel or justified contributions compared to the baseline approach of Ulyanov and Lebedev. It largely studies what happens when a different spectrogram representation is used on the input, and when somewhat different network architectures are used. These changes are interesting, but don't provide a lot of additional information which I believe would be interesting to the ICLR community. They seem better suited for an (audio) signal processing venue, or a more informal venue. In addition, the results are not terribly compelling. If the proposed changes (which are heuristically, not theoretically motivated) resulted in huge improvements to sound quality, I might be convinced. More concretely, the results are still very far away from being able to be used in a commercial application (in contrast with image style transfer, whose impressive results were immediately commercially applied). One reason I think the results remain bad is that the audio signal is still fundamentally represented as a phase-invariant representation. Even if you backpropagate through the time-frequency transformation, the transformation itself discards phase, and so various signals (with different phase characteristics) will appear the same after transformation. I believe this contributes the most to the fact that the resulting audio sounds very artifact-ridden and unrealistic. If the paper had been able to overcome this limitation, I might be more convinced, but as-is I don't think it warrants acceptance at ICLR.\n\nSpecific comments:\n\n- The description of Ulyanov & Lebedev's algorithm in 3.1 is confusingly structured. 
For example, the sentence \"The L2\ndistance between the generated audio and the content audio’s feature maps...\" is basically a concatenation of roughly 6 thoughts which should be separated into different sentences. The location of the equations (1), (2) do not correspond to where they are introduced in the text. In addition, I don't understand how S and C are generated. It is written that S and C are the \"log-magnitude feature maps for style and content\". But the \"feature maps\" X are themselves a log-magnitude time frequency representation (x) convolved with the filterbank. So how are S and C \"log-magnitude feature maps\"? Surely you aren't computing the log of the output of the filterbank? More equations would be helpful here. Finally, it would be helpful to provide an equation both for G and W instead of just saying that W is analogously defined.\n- I don't see any reason to believe that a mel-scaled spectrogram would better capture longer time scales or rhythmic information. Mel-scaling your spectrogram just changes the frequency axis to a mel scale, which makes it somewhat closer to our perception; it does not modify the way time is represented in any way. In fact, in musical machine learning tasks usually swapping between CQT and mel-scaled spectrograms (with a comparable number of frequency bins) has little effect, so I don't see any compelling reason to use one for \"rhythm\". You need to provide strong empirical or theoretical evidence to back up your claim that this is a principled approach. Instead, I would expect that your change of convolutional structure (to the dilated convolutions, etc) for the \"mel spectrogram\" branch of your network would account more heavily for stronger modeling of longer timescales.\n- You refer to \"WaveNet auto-encoders\" and cite van den Oord et al. The original wavenet paper did not propose an auto-encoder structure; Engel et al. did.\n- \"neither representation is capable of representing spatial patterns along the frequency axis\" What do you mean by this? Mel or linear-frequency (or CQT) spectrograms exhibit very strong patterns along their frequency axis.\n- The method for automatically setting the scales of the different loss terms seems interesting, but I can't find anywhere a description of how you apply each of the beta terms. Are they analogous to the alpha and beta parameters in equation (4)? If so, it appears that gamma is shared across each beta term; this would mean that changing the value of gamma simply changed the scale of all loss terms at once, which would have no effect on optimization.\n- \"This is entirely possible though the ensemble representation\" typo, through -> through\n- That instance normalization causes noisy audio is an interesting empirical result, but I'm interested in a principled explanation of why this would happen.\n- \"the property of having 0 mean and unit variance\" - you use this to describe the SeLU nonlinearity. That's not a property of the nonlinearity, it's a property of the activations of different layers when using the nonlinearity (given correct initialization).\n- How are the \"Inter-Onset Interval Length Distributions\" computed? 
How are you getting the onsets, etc?\n- \" the maximum cross-correlation value between the time-domain audio waveforms are not significantly affected by the length of this field\" - there are many ways effective copying could happen without the time-domain cross-correlation being large.", "Summary\n-------\nThis paper describes a method for style transfer in musical audio recordings.\nThe proposed method uses three spectral representations to encode rhythm (Mel spectra), harmony (pseudo-constant-Q), and content (STFT), and the shared representation designed to allow synthesis of the time domain signal directly without resorting to phase retrieval methods.\nSome quantitative evaluation is presented for texture synthesis and key invariance, but the main results seem to be qualitative analysis and examples included as supplemental material.\n\n\nQuality\n-------\n\nI enjoyed reading this paper, and appreciate the authors' attention to the specifics of the audio domain.\nThe model design choices make sense in general, though they could be better motivated in places (see below).\nI liked the idea of the rhythm evaluation, but again, I have some questions about the specific implementation.\nThe supplementary audio examples are somewhat hit or miss in my opinion, and a bit more qualitative analysis or a listener preference study would strengthen the paper considerably.\n\n\nClarity\n-------\n\nWhile for the most part, the writing is clear and does a good job of describing the representations used, there are a few parts that could be made more explicit:\n\n- Section 3.2: The motivation for using the Mel spectrum to capture rhythm seems shaky. Each frame has the same temporal resolution as the input STFT representation, so any ability to\n capture rhythmic content should be due to down-stream temporal modeling (dilated residual block, in the model). This does not necessitate or suggest a Mel spectrum, though dimensionality\n reduction is probably beneficial. It would be good to provide a bit more motivation for the choices made here.\n\n- Section 3.2.1: the description of the Mel spectrogram leaves out a few important details, such as the min/max frequencies, shape of the filters, and number of filters.\n\n- Section 3.2.2: the \"pseudo-CQT\" described here is just a different log-frequency projection of the STFT (and not constant-Q), and depending on the number of parameters, could be quite\n redundant with the Mel spectrum. A bit more detail here about the parametrization (filter shapes, etc) and distinction from the Mel basis would be helpful.\n\n- Section 3.3: I didn't completely follow the setup of the objective function. Is there a difference gamma parameter for each component of the model?\n\n- Section 3.4: What is the stride of the pooling operator in the harmony component of the model? It seems like this could have a substantial impact on any key-invariance properties.\n\n- Section 4.1: the idea to measure IOI distributions is clever, but how exactly is it implemented? Does it depend on a pre-trained onset detector? If so, which one? I could imagine\n texture synthesis preserving some aspects of rhythm but destroying onset detection accuracy due to phase reconstruction (whooshiness) problems, and that does not seem to be controlled in\n this experiment.\n\n- Figure 4: how is Phi_{XY} defined? The text does not convey this clearly.\n\n- Section 4.2: why is MSE of the log-stft a good metric for this problem? 
Changing the key of a piece would substantially change its (linear) spectrum, but leave it relatively constant in\n other representations (eg Mel/MFCC, depending on the bin resolution). Maybe I don't entirely understand what this experiment is measuring.\n\n- General note: please provide proper artist attribution for the songs used in the examples (eg Figure 2).\n\n\nOriginality\n-----------\n\nThe components of the model are not individually novel, but their combination and application are compelling.\nThe approaches to evaluation, while still somewhat unclear, are interesting and original to the best of my knowledge, and could be useful for other practitioners in need of ways to evaluate\nstyle transfer in music.\n\n\n\nSignificance\n------------\n\nThis paper will definitely be of interest to researchers working on music, audio, or creative applications, particularly as a proof of concept illustrating non-trivial style transfer outside\nof visual domains.\n", "We have updated our paper according to your comments. We have tried to clarify the design of our algorithm. \n\nOne thing we tried to emphasize is that the use of the Mel spectrogram is not in its natural ability to capture any type of rhythmic information, but more for its ability to capture information that is still rhythmically relevant with a greatly reduced number of channels. Since a 1D convolutional kernel tensor has a signification number of parameters (# Time bins x # filter in x # filters out), we wanted to create a network that could have more span in time at the cost of less filters in. This lets us represent much more time with a smaller computation graph. As reviewer 2 pointed out, the dilated convolutional network structure we propose is more responsible for the increase in receptive field of time.\n\nAnother point we'd like to clarify is that pooling for the CQT representations happens in the layer representations, not directly on the CQT spectrogram. We agree that pooling directly on the CQT would destroy harmonic information, but propose instead that pooling extracted frequency patterns that could be shifted along this axis would promote key invariance in representations.\n\nThanks again for your time." ]
[ 6, 4, 7, -1 ]
[ 4, 4, 3, -1 ]
[ "iclr_2018_BybQ7zWCb", "iclr_2018_BybQ7zWCb", "iclr_2018_BybQ7zWCb", "iclr_2018_BybQ7zWCb" ]
iclr_2018_SJ60SbW0b
Modeling Latent Attention Within Neural Networks
Deep neural networks are able to solve tasks across a variety of domains and modalities of data. Despite many empirical successes, we lack the ability to clearly understand and interpret the learned mechanisms that contribute to such effective behaviors and more critically, failure modes. In this work, we present a general method for visualizing an arbitrary neural network's inner mechanisms and their power and limitations. Our dataset-centric method produces visualizations of how a trained network attends to components of its inputs. The computed "attention masks" support improved interpretability by highlighting which input attributes are critical in determining output. We demonstrate the effectiveness of our framework on a variety of deep neural network architectures in domains from computer vision and natural language processing. The primary contribution of our approach is an interpretable visualization of attention that provides unique insights into the network's underlying decision-making process irrespective of the data modality.
rejected-papers
The proposed LAN provides a visualization of the selectivity of networks to their inputs. It takes a trained network as the target and estimates a LAN to predict masks that can be applied to inputs to generate the same outputs. However, the significance of the proposed method is unclear: "what is the potential usage of the model?" Empirical justification of its usefulness would make the paper stronger.
train
[ "Hy3t5U5gz", "Sk3pyHoez", "HkkF8qngG", "r1nN7a4Qz", "ByjI56eXG", "rJakzaEmM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "The main contribution of the paper is to propose to learn a Latent Attention Network (LAN) that can help to visualize the inner structure of a deep neural network. To this end, the paper propose a novel training objective that can learn to tell the importance of each dimension of input. It is very interesting. However, one question is what is the potential usage of the model? Since the model need to train an another network to visualize the structure of a trained neural network, it is expensive, and I don't think the model can help use to design a better structure (at least the experiments did not show this point). And maybe different structures of LAN will produce different understanding of the trained model. Hence people are still not sure what kind of structure is the most helpful.", "The authors of this paper proposed a data-driven black-box visualization scheme. The paper primarily focuses on neural network models in the experiment section. The proposed method iteratively optimize learnable masks for each training example to find the most relevant content in the input that was \"attended\" by the neural network. The authors empirically demonstrated their method on image and text classification tasks. \n\nStrength:\n - The paper is well-written and easy to follow. \n - The qualitative analysis of the experimental results nicely illustrated how the learnt latent attention masks match with our intuition about how neural networks make its classification predictions.\n\n Weakness:\n - Most of the experiments in the paper are performed on small neural networks and simple datesets. I found the method will be more compiling if the authors can show visualization results on ImageNet models. Besides simple object recognition tasks, other more interesting tasks to test out the proposed visualization method are object detection models like end-to-end fast R-CNN, video classification models, and image-captioning models. Overall, the current set of experiments are limited to showcase the effectiveness of the proposed method.\n - It is unclear how the hyperparameter is chosen for the proposed method. How does the \\beta affect the visualization quality? It would be great to show a range of samples from high to low beta values. Does it require tuning for different visualization samples? Does it vary over different datasets?\n ", "The paper presents the formulation of Latent Attention Masks, which is a framework for understanding the importance of input structure in neural networks. The framework takes a pre-trained network F as target of the analysis, and trains another network A that generates masks for inputs. The goal of these masks is to remove parts of the input without changing the response of F. Generated masks are helpful to interpret the preferred patterns of neural networks as well as diagnose modes of error.\n\nThe paper is very well motivated and the formulation and experiments are well presented too. The experiments are conducted in small benchmarks and using simple fully connected networks. It would be interesting to report and discuss convergence properties of the proposed framework. Also, insights of what are the foreseeable challenges on scaling up the framework to real world scenarios.", "Before directly addressing their concerns, we direct the reviewer’s attention to the newly added Sections E, F, & G in the supplementary material of our revised paper draft. 
These sections include new experiments that illustrate the effect of varying the beta hyperparameter, demonstrate the strength of our approach on the larger scale Inception network for the ILSVRC 2014 classification challenge, and further highlight the effectiveness of our approach in diagnosing model failure modes.\n\nThe phrase “simple datasets” is difficult to interpret; all datasets used in this paper are standard benchmark datasets in computer vision and NLP. We share the reviewer’s desire to further analyze the strength of our framework within computer vision however, for this initial outline of our framework, we have opted to showcase breadth across modalities instead of depth. That said, please consult Section F of our supplementary materials to see visualizations of attention masks trained on top of the Inception network architecture for ImageNet classification. Notably, the results demonstrate that our sample-specific attention masks identify regions of the input space critical to correct classification.\n\nRepeating from a separate comment, LANs trained with an insufficiently large value of beta would accurately reproduce F network outputs without providing useful attention masks; conversely, overly large values of beta dismiss too much information in the input and make reproducing the original network outputs incredibly difficult. Please refer to Section E of the supplementary material for some visualizations illustrating the effect of beta on the resulting attention masks. In general, we make a default assumption that there is a single, if not small range of, beta value that can adequately produce the latent attention mechanisms of the pre-trained network. The parameter does require tuning for different models and, accordingly, we utilize different values of beta across our experiments.\n", "Before directly addressing their concerns, we direct the reviewer’s attention to the newly added Sections E, F, & G in the supplementary material of our revised paper draft. These sections include new experiments that illustrate the effect of varying the beta hyperparameter, demonstrate the strength of our approach on the larger scale Inception network for the ILSVRC 2014 classification challenge, and further highlight the effectiveness of our approach in diagnosing model failure modes.\n\nThe purpose of a LAN is to diagnose failure modes in trained neural networks. Consider a neural network that must make inferences in the healthcare domain, potentially having direct and immediate consequence to patients. When such a network makes any inference, it is paramount that there is an understanding of *why*. Identifying life-threatening illnesses or selecting an optimal course of treatment are just a few examples of decisions that must be made with as much transparency as possible. In capturing the importance in each dimension of a network’s input, our attention masks are a step forward in this direction of interpretable and understandable models. Please consult Section G of our supplementary materials for a demonstration of how sample-specific attention masks successfully identify model failure modes.\n\nWith regards to the cost of a LAN, we note that the training time is itself a sunk cost that offers the potential of yielding information about a potentially erroneous model. In our work, we outline two methods for learning attention masks. 
While one prescribes an entirely new network to train a mapping from input to attention masks, the other simply learns the attention mask for a single input directly. While we make no claims on the ability for a LAN to deliver a novel architecture that could remedy the problems of an existing network, it can certainly identify failure points from the dataset and provide sufficient motivation for abandoning an existing network architecture.\n\nSince the training objective would remain unchanged, we currently do not believe that using different structures for the LAN will result in significantly different attention masks. That said, the prohibitive cost of searching over the space of possible LAN architectures is, in part, mitigated by our second “sample-specific” approach for learning attention masks directly without constructing an entirely new network. \n", "Before directly addressing their concerns, we direct the reviewer’s attention to the newly added Sections E, F, & G in the supplementary material of our revised paper draft. These sections include new experiments that illustrate the effect of varying the beta hyperparameter, demonstrate the strength of our approach on the larger scale Inception network for the ILSVRC 2014 classification challenge, and further highlight the effectiveness of our approach in diagnosing model failure modes.\n\nIn general, we found that the setting of the beta hyperparameter, weighting the amount of input corruption against the reconstructing the outputs of the pre-trained F network, was the single critical factor in determining convergence to good mask structures. LANs trained with an insufficiently large value of beta would accurately reproduce F network outputs without providing useful attention masks; conversely, overly large values of beta dismiss too much information in the input and make reproducing the original network outputs incredibly difficult. Fortunately, since our approach does not require the re-training of the original network altogether, the grid search over potential beta values is relatively simple, though results must be evaluated qualitatively. Please refer to Section E of the supplementary material for some visualizations illustrating the effect of beta on the resulting attention masks. Accordingly, a fruitful direction for subsequent research involves identifying metrics or alternate loss functions that can better measure the interpretability of the resulting masks thereby minimizing the amount of manual inspection needed to diagnose pre-trained models within our framework.\n" ]
[ 4, 5, 7, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1 ]
[ "iclr_2018_SJ60SbW0b", "iclr_2018_SJ60SbW0b", "iclr_2018_SJ60SbW0b", "Sk3pyHoez", "Hy3t5U5gz", "HkkF8qngG" ]
iclr_2018_rkmoiMbCb
Tandem Blocks in Deep Convolutional Neural Networks
Due to the success of residual networks (resnets) and related architectures, shortcut connections have quickly become standard tools for building convolutional neural networks. The explanations in the literature for the apparent effectiveness of shortcuts are varied and often contradictory. We hypothesize that shortcuts work primarily because they act as linear counterparts to nonlinear layers. We test this hypothesis by using several variations on the standard residual block, with different types of linear connections, to build small (100k--1.2M parameter) image classification networks. Our experiments show that other kinds of linear connections can be even more effective than the identity shortcuts. Our results also suggest that the best type of linear connection for a given application may depend on both network width and depth.
rejected-papers
The paper presents a good analysis of the use of different linear maps instead of identity shortcuts for ResNets. It is interesting to the community, but the experimental justification is insufficient. 1) As the reviewer points out, this work shows "that on small-size networks the Tandem Block outperforms Residual Blocks; since He et al. (2016) in Tab. 1 showed a contrary effect, does it mean that the observations do not scale to higher-capacity networks?" The paper would be much stronger with experiments justifying this claim. 2) "extremely deep networks take much longer to train" is not a valid reason not to conduct such experiments.
train
[ "BkWQMuIBz", "rJyDBCYgG", "ryUzS184G", "Hk1CjNZyM", "ry1gCAFlM", "HkL83kfVz", "HJ3dHdW4z", "BJVy_t14M", "HkLjGd6mM", "B1m4fdpmz", "Sy_eGOTXf", "Byzn-uamG", "BJKUbSHJf", "BkYT2CzkM" ]
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "public", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "We appreciate that all of our reviewers responded to our updated paper and we are pleased to see that we managed to address nearly all of their questions and concerns. The only remaining criticisms regard the size of the networks in our experiments. While we were careful to recreate the meta-architecture of Zagoruyko and Komodakis to ensure the most direct and relevant comparisons possible, we understand the desire to see the same experiments done on a larger scale. We did as much as we could in this regard with the financial and computational resources available to us. We believe that our results and analysis make a compelling case for questioning the conventional wisdom in this area and motivate further (and larger) experiments.", "This paper performs an analysis of shortcut connections in ResNet-like architectures. The authors hypothesize that the success of shortcut connections comes from the combination of linear and non-linear features at each layer and propose to substitute the identity shortcuts with a convolutional one (without non-linearity). This alternative is referred to as tandem block. Experiments are performed on a variety of image classification tasks such as CIFAR-10, CIFAR-100, SVHN and Fashion MNIST.\n\nThe paper is well structured and easy to follow. The main contribution of the paper is the comparison between identity skip connections and skip connections with one convolutional layer.\n\nMy main concerns are related to the contribution of the paper and experimental pipeline followed to perform the comparison. First, the idea of having convolutional shortcuts was already explored in the ResNet paper (see https://arxiv.org/pdf/1603.05027.pdf). Second, given Figures 3-4-5-6, it would seem that the authors are monitoring the performance on the test set during training. Moreover, results on Table 2 are reported as the ones with “the highest test accuracy achieved with each tandem block”. Could the authors give more details on how the hyperparameters of the architectures/optimization were chosen and provide more information on how the best results were achieved?\n\nIn section 3.5, the authors mention that batchnorm was not useful in their experiments, and was more sensitive to the learning rate value. Do the authors have any explanation/intuition for this behavior?\n\nIn section 4, authors claim that their results are competitive with the best published results for a similar number of parameters. It would be beneficial to add the mentioned best performing models in Table 2 to back this statement. Moreover, it seems that in some cases such as SVHN the differences between all the proposed blocks are too minor to draw any strong conclusions. Could those differences be due to, for example, luck in picking the initialization seed? How many times was each experiment run? If more than once, what was the std?\n\nThe experiments were performed on relatively shallow networks (8 to 26 layers). 
I wonder how the conclusions drawn scale to much deeper networks (of 100 layers for example) and to larger datasets such as ImageNet.\n\nFigures 3-5 are neither referenced nor discussed in the text.\n\nFollowing the design of the tandem blocks proposed in the paper, I wonder why the tandem block B3x3(2,w) was not included.\n\nFinally, it might be interesting to initialize the convolutions in the shortcut connections with the identity, and check what they have learnt at the end of the training.\n\nSome typos that the authors might want to fix:\n\n- backpropegation -> backpropagation (Introduction, paragraph 3)\n- dropout is a kind of regularization as well (Introduction, second to last paragraph)\n- nad -> and (Sect 3.1. paragraph 1)\n", "The code and hyperparameters we used are now available at https://github.com/tandemblock/iclr2018\n\nWe hope this will further clarify our methods and make it easy to reproduce our results.\n", "This paper investigates the effect of replacing identity skip connections with trainable convolutional skip connections in ResNet. The authors find that in their experiments, performance improves. Therefore, the power of skip connections is due to their linearity rather than due to the fact that they represent the identity.\n\nOverall, the paper has a clear and simple message and is very readable. The paper contains a good amount of experiments, but in my opinion not quite enough to conclude that identity skip connections are inherently worse. The question is then: how non-trivial is it that tandem networks work? For someone who understands and has worked with ResNet and similar architectures, this is not a surprise. Therefore, the paper is somewhat marginal but, I think, still worth accepting.\n\nWhy did you choose a single learning rate for all architectures and datasets instead of choosing the optimal one for each architecture and dataset? Was it a question of computational resources? Using custom step sizes would strengthen your experimental results significantly. In the absence of this, I would still ask that you create an appendix where you specify exactly how hyperparameters were chosen.\n\nOther comments:\n\n- \"and that it's easier for a layer to learn from a starting point of keeping things the same (the identity map) than from the zero map\" I don't understand this comment. Networks without skip connections are not initialized to the zero map but have nonzero, usually Gaussian, weights.\n- in section 2, reason (ii), you seem to imply that it is a good thing if a network behaves as an ensemble of shallower networks. In general, this is a bad thing. Therefore, the fact that ResNet with tandem networks is an ensemble of shallower networks is a reason for why it might perform badly, not well. I would suggest removing reason (ii).\n- in section 3, reason (iii), you state that removing nonlinearities from the skip path can improve performance. However, using tandem blocks instead of identity skip connections does not change the number of nonlinearity layers. Therefore, I do not see how reason (iii) applies to tandem networks.\n- \"The best blocks in each challenge were competitive with the best published results for their numbers of parameters; see Table 2 for the breakdown.\" What are the best published results? I do not see them in table 2.", "The paper is well written, has a good structure and is easy to follow. The paper investigates the importance of having identity skip connections in residual blocks.
The authors hypothesize that changing the identity mapping into a linear function would be beneficial. The main contribution of the paper is the Tandem Block, which is composed of two paths, linear and nonlinear, whose outcomes are summed at the end of the block. Similarly to residual blocks in ResNets, one can stack together multiple Tandem Blocks. However, this contribution seems to be rather limited. He et al. (2016) introduce a Tandem-Block-like structure, very similar to B_(1x1)(2,w), see Fig. 2(e) in He et al. (2016). Moreover, He et al. (2016) show in Tab. 1 that for a ResNet-101 this tandem-like structure performs significantly worse than identity skip connections. This should be properly mentioned, discussed and reflected in the contributions of the paper. \n\nResult section: \nMy main concern is that it seems that the comparison of different Tandem Block designs has been performed on the test set (e.g. Table 2 displays the highest test accuracies). Figs 3, 4, 5 and 6 together with Tab. 2 monitor the test set. The architectural search together with hyperparameter selection should be performed on a validation set. \n\n\nOther issues:\n- Section 1: “… ResNets have overcome the challenging technical obstacles of vanishing/exploding gradients… “. It is clear how ResNets address the issue of vanishing gradients; however, I'm not sure if ResNets can also address the problem of exploding gradients. Can the authors provide a reference for this statement?\n- Experiments: The authors show that on small-size networks the Tandem Block outperforms Residual Blocks; since He et al. (2016) in Tab. 1 showed a contrary effect, does it mean that the observations do not scale to higher-capacity networks? Could the authors comment on that? ", "We'd be more than happy to help you recreate our experiments. Hopefully the following will answer most of your questions. We'll also provide a follow-up later this week with code samples and tables of hyperparameter values.\n\n1. We do mean stride in the usual (spatial) sense. Much like pooling, it reduces the height and width of our image channels. When using identity connections or 1x1 convolutions, using stride 2 simply amounts to taking the sub-image of pixels with even coordinates.\n\nAs you observe, this needs to happen on both the linear and nonlinear sides of a block so that they can be added together. When there are two nonlinear layers in a block, we only change the stride on the first one.\n\n2. The specific initialization method we used was the 'variance scaling' method from the keras package, which uses a standard deviation of sqrt(scale/n) where n is the number of inputs to the layer (which is just the width of the previous layer). We determined the scale parameter experimentally, so we'll have to put together a list of the ones we ended up using for each experiment.\n\nExact values don't seem to be very important in this case. They just need to be large enough to get the network learning, but not so large that it becomes unstable. All of our values were between 0.3 and 1.2.\n\n3. We used a batch size of 125 for all experiments.\n\nAll of the data sets we used come with designated test sets that we used as such.\n\nFor our hyperparameter grid searches, we used 20% of the given training data as validation data and the remaining 80% as training data. We made sure that classes were equally represented in both the training and validation sets.\n\n4. 
Our weight decay and dropout values were a little different in each experiment, as dictated by our grid search. We'll make tables of these for you and get them to you soon. However, you may also want to perform your own grid searches for these values. We tried weight decay values from 0.0000 to 0.0004 and dropout rates from 0.0 to 0.3 for each experiment. \n\n5. For data augmentation, we only used shifts (both vertical and horizontal) and flips (only horizontal). The shifts were limited to 10% of image height/width, so 3.2 pixels.", "I like the authors thoughtful response to my points and those raised by other reviewers. Also I was not aware that the major ResNet papers took a position in favor of identity connections. I am more convinced than before that this paper is an accept.\n\nOne note: Regarding the statement that ResNet combats exploding gradients, which one of the other reviewers objected to, this has been demonstrated in \"Gradients explode - deep networks are shallow - ResNet explained\" as submitted to this conference: https://openreview.net/forum?id=HkpYwMZRb (I hope you'll allow me to be so bold as to shill my own paper :)", "Hi,\nAs a final project in deep learning seminar in Tel Aviv University, I am reproducing the experiments described in your paper.\nI’ll much appreciate it if you can answer a few questions regarding the implementation details.\n\n1. Tandem blocks with stride 2\nI assume stride 2 in the spatial dimensions.\nAs I understand, on those blocks, the linear layer is either 1x1 convolution or 3x3 convolution (not identity) as the input and output differs in the third dimension.\nIs there a stride 2 in the linear part as well?\nIn blocks with l=2, there are 2 nonlinear layers (3x3 convolution) and only 1 linear layer (1x1 convolution). Assuming stride 2 in all the 3 convolutions, linear output and nonlinear output have different dimensions and can’t be summed together. How do you cope with that?\n\n2. Initialization\nIn the paper, you mentioned that the initialization was done as in He et al. (2015).\nAs I understand, std in layer l is sqrt( 2/nl), while nl is the number of the kernel parameter.\nYou also mentioned a “base standard deviation” that varied considerably from network to network.\nHow does the “base standard deviation” affect the std computation?\nWhat “base standard deviation” did you use in each model?\nWhat is the std used for the softmax output layer weights?\nIs it correct that you initialized biases to 0?\n\n3. Training\nWhat was the batch size that you used in each of the experiments?\nYou mentioned a use of validation set. What portion of the training set was used for training in each of the experiments? \n\n4. Regularization\nPlease specify the weight decay and dropout rates that you used for each of the architectures.\n\n5. Data Augmentation\nPlease specify the details and amount of the augmentation that you used.\n\nThanks,\nOshrat Bar\nTel Aviv University\noshratbar@mail.tau.ac.il \n", "In response to the reviewers' comments, we have made a number of improvements to the paper. Most importantly, we:\n\n- Clarified that the primary purpose of the paper was not to introduce novel architectures, but to challenge the conclusion that identity shortcuts are superior to other linear shortcuts. 
We show experimentally that this is not the case for any of the networks we trained.\n\n- Added a small section (based on a reviewer's suggestion) showing that the linear connections in our networks did not learn to imitate identity connections, even if they were initialized with identity weight matrices. This supports the conclusion that learnable weights add real value to linear connections.\n\n- Explained that we used validation data to determine hyperparameters and test data only for our final comparisons between architectures, following the standard practice. We also clarified that we were comparing average performance across series of five runs for each experiment. Both points were unclear in our original submission.\n\n- Removed some introductory comments that were confusing or distracted from our main points.\n\n- Stressed that differences in performance were not due to some architectures having unfair advantages due to greater numbers of parameters. We were careful to keep parameter counts as close as possible by adjusting layer widths separately for each architecture.\n\n- Discussed the relevant literature more thoroughly in the first two sections of the paper.\n\nWe also made a number of minor corrections and clarifications.", "We appreciate the thoughtfulness that went into this review. We feel that we have substantially improved the paper as a result of this review and the other two.\n\nFirst, it is important to note that we aren't replacing identity shortcuts so much as generalizing them. Tandem blocks include standard (identity) residual blocks as a special case. An identity shortcut is just a 1x1 convolution without non-linearity whose weight matrix is fixed as an identity matrix. The intent of our paper is to show that the latter property is unnecessary and limiting. The weight matrix of the linear shortcut doesn't need to be fixed (it can be learnable) and it doesn't need to be an identity matrix (either at initialization or after training). The linear convolution doesn't even need to be 1x1, which is particularly surprising. Because a number of notable papers contain assertions to the contrary (that identity connections are necessary and/or optimal), we believe that our contribution is both new and important. However, we failed to clearly express this and have revised the paper accordingly.\n\nSecond, it is important to make clear that we didn’t just switch identity connections to linear connections, rather we also reduced the number of neurons per layer in the linear case so that the total number of parameters did not increase in our comparisons. In other words, we narrowed the layers to make the contests fair. This wasn’t as clear as it could have been in the initial version of the paper. We hope it is clearer now.\n\nTo address the reviewer’s question about learning rates, we did do a grid search across a number of learning rate schedules, testing them separately for each architecture, and (surprisingly) the same rate schedule turned out to be optimal for every architecture. In Section 3 we clarified our approach. 
We also clarified how we performed the searches for dropout and weight decay parameters, which convinced us to use different values for different architectures; see Section 3 for details.\n\nResponse to Other Comments:\n\nThe comment about learning from the zero map has been clarified to indicate that we initialized weights with small Gaussian values, as is standard practice.\n\nWe removed the paragraph about tandem networks acting as ensembles of shallower networks, per the reviewer's suggestion. We removed the paragraph about removing nonlinearities for the same reason and Section 2 is clearer as a result.\n\nWe should clarify that our results are competitive with those achieved in other ResNet papers. We mention this primarily to establish that we correctly recreated their architectures for our experiments, making the comparisons fair. Our networks may not beat more complex architectures (such as Inception) on a per-parameter basis, but that isn't the goal. We're only investigating the question of shortcut connections, so we tried not to introduce any extra variables.", "First, we'd like to clarify what we see as the central thesis of our paper. We aren't replacing identity shortcuts so much as generalizing them. Tandem blocks include standard (identity) residual blocks as a special case. An identity shortcut is just a 1x1 convolution without non-linearity whose weight matrix is fixed as an identity matrix. The intent of our paper is to show that the latter property is unnecessary and limiting. The weight matrix of the linear shortcut doesn't need to be fixed (it can be learnable) and it doesn't need to be an identity matrix (either at initialization or after training). The linear convolution doesn't even need to be 1x1, which is particularly surprising. Because a number of notable papers contain assertions to the contrary (that identity connections are necessary and/or optimal), we believe that our contribution is both new and important. However, we failed to clearly express this and have revised the paper accordingly. We are grateful to the reviewer for pointing out the weaknesses of the submitted draft.\n\nThe linked paper (“Identity Mappings in Deep Residual Networks” by He et al.) does explore the idea of using learnable linear 1x1 convolutions instead of identity mappings, as does the original ResNet paper. Both conclude that identity connections are superior on the grounds that they work better in extremely deep networks because they don't scale gradients. We did not intend to claim to be the first to use linear 1x1s in this way. Instead, our primary aim was to challenge the conclusion that identity connections are superior. We have now clarified this in the revised paper.\n\nMuch of the initial explanation for why identity shortcut connections were important had to do with building extremely deep networks. However, Zagoruyko and Komodakis showed that wider, shallower networks are superior even with traditional resblocks (https://arxiv.org/pdf/1605.07146.pdf). So it's important to ask what types of shortcut connections work best in these cases.\n\nIn reading this review, it was clear that we needed to explain more thoroughly our experimental procedures, including our use of train/val/test splits and hyperparameter grid search. As is traditional, we do not use the test set for hyperparameter selection, but rather a separate validation set. The test set is only used for final evaluation. 
We hope this is now clear in the paper.\n\nUnfortunately, we don't have a good explanation for the effects of batch normalization in our experiments. We expected it to help, but this simply wasn't what we observed. This question certainly merits further investigation.\n\nWe should clarify that our results are competitive with those achieved in other ResNet papers. We mention this primarily to establish that we correctly recreated their architectures for our experiments, making the comparisons fair. Our networks may not beat more complex architectures (such as Inception) on a per-parameter basis, but that isn't the goal. We're only investigating the question of shortcut connections, so we tried not to introduce any extra variables.\n\nThe differences between architectures in some experiments were indeed too small to indicate that one architecture was better than another, and we don't want to imply otherwise. Our goal is to show that non-identity connections were better than identities in some experiments and comparable in others. Both cases contradict the near-universal assertions that identity connections are somehow special or optimal. It is important to make clear that we didn’t just switch identity connections to linear connections, rather we also reduced the number of neurons per layer so that the total number of parameters did not increase in our comparisons. In other words, we narrowed the layers to make the contests fair.\n\nWe would love to provide results on larger datasets, however, our computational resources are an issue. Testing extremely deep networks would also be interesting, but we would expect to observe the same thing as everyone else—that extremely deep networks take much longer to train and offer at best marginally better performance.\n\nWe have referenced and discussed all of the figures explicitly in the revised text.\n\nIMPORTANT: At the reviewer's suggestion, we confirmed using the singular value decomposition that linear connections with standard initializations (zero mean and small variance) did not learn identity maps and that linear connections initialized to the identity did not stay there. In other words, these maps are truly non-identity in nature. This was an excellent suggestion from the reviewer and has (in our opinion) substantially strengthened our argument and the paper.\n\nWe noted that dropout is a kind of regularization, this and the typos are fixed.", "We appreciate the thoughtfulness that went into this review. We feel that we have substantially improved the paper as a result of this review and the other two.\n\nFollowing the reviewers comments, we have clarified that we aren't contrasting residual blocks with tandem blocks. It is more accurate to say that tandem blocks generalize residual blocks, including identity connections as a special case.\n\nThe paper “Identity Mappings in Deep Residual Networks” by He et al does explore the idea of using learnable linear 1x1 convolutions instead of identity mappings, as does the original ResNet paper. Both conclude that identity connections are superior on the grounds that they work better in extremely deep networks because they don't scale gradients. We did not intend to claim to be the first to use linear 1x1s in this way. Instead, our primary aim was to challenge the conclusion that identity connections are superior. 
We have now clarified this and discussed the relevant papers in the revised paper.\n\nMuch of the initial explanation for why identity shortcut connections were important had to do with building extremely deep networks. However, Zagoruyko and Komodakis showed that wider, shallower networks are superior even with traditional resblocks (https://arxiv.org/pdf/1605.07146.pdf). So it's important to ask what types of shortcut connections work best in these cases. Our experiments show that learnable linear connections are as good as or better than identity connections in networks of practical size.\n\nIn reading this review, it was clear that we needed to explain more thoroughly our experimental procedures, including our use of train/val/test splits and hyperparameter grid search. As is traditional, we do not use the test set for hyperparameter selection, but rather a separate validation set. The test set is only used for final evaluation. We hope this is now clear in the paper.\n\nWe have fixed an incorrect statement to reflect the fact that identity connections don't prevent exploding gradients. We thank the reviewer for calling that to our attention.\n\nIt is important to differentiate between network capacity and network depth. Zagoruyko and Komodakis used networks of tremendous capacity (but not particularly great depth) and outperformed the original ResNets which were much deeper. We would love to provide results for much larger networks (in terms of parameter count) and also on larger datasets. However, our computational resources are an issue. Testing extremely deep networks would also be interesting, but we would expect to observe the same thing as everyone else—that extremely deep networks take much longer to train and offer at best marginally better performance.", "Dear Reviewer,\n\nWe can't see your review on OpenReview, but we did receive it via email. We appreciate your analysis and look forward to answering your questions and making the appropriate revisions during the discussion period. Hopefully the review will post soon.\n\nThanks,\nThe Authors", "Dear authors,\n\nI posted my review recently. I am curious: Can you see the review? Because when I log out of my account, I can no longer see it. Hence, the review is (so far) not public. I am wondering whether at least you can see it.\n\nThanks," ]
[ -1, 5, -1, 7, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, 4, -1, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_rkmoiMbCb", "iclr_2018_rkmoiMbCb", "BJVy_t14M", "iclr_2018_rkmoiMbCb", "iclr_2018_rkmoiMbCb", "BJVy_t14M", "B1m4fdpmz", "iclr_2018_rkmoiMbCb", "iclr_2018_rkmoiMbCb", "Hk1CjNZyM", "rJyDBCYgG", "ry1gCAFlM", "BkYT2CzkM", "iclr_2018_rkmoiMbCb" ]
iclr_2018_S1PWi_lC-
Multi-task Learning on MNIST Image Datasets
We apply multi-task learning to image classification tasks on MNIST-like datasets. The MNIST dataset has been referred to as the {\em drosophila} of machine learning and has been the testbed of many learning theories. The NotMNIST dataset and the FashionMNIST dataset have been created with the MNIST dataset as a reference. In this work, we exploit these MNIST-like datasets for multi-task learning. The datasets are pooled together for learning the parameters of joint classification networks. Then the learned parameters are used as the initial parameters to retrain disjoint classification networks. The baseline recognition models are all-convolutional neural networks. Without multi-task learning, the recognition accuracies for MNIST, NotMNIST and FashionMNIST are 99.56\%, 97.22\% and 94.32\% respectively. With multi-task learning to pre-train the networks, the recognition accuracies are respectively 99.70\%, 97.46\% and 95.25\%. The results re-affirm that the multi-task learning framework, even with data from different genres, does lead to significant improvement.
rejected-papers
The paper validates the benefit of multi-task learning on MNIST-like datasets, which is not sufficient for an ICLR publication.
train
[ "Sk-OB6wlM", "rJPeuDOxM", "B19pUEjlz", "rkmQzUoQG", "rkUS0Hj7f", "Sy09XLsmf", "rJgnUCmGM", "rJVCZ7zzz", "By4d_zGff", "Hyo0zyzGz", "HJdSRRnWz", "ByLrDaeGf", "HJLn9FM-z", "S1za-QyZf", "BJnd4ZTlf", "SyveRxjlM", "BJ-kwu9eM", "SJV6JGqgM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "public", "public", "public", "public", "author", "author", "public", "author", "public", "author", "public" ]
[ "This paper presents a multi-task neural network for classification on MNIST-like datasets.\n\nThe main concern is that the technical innovation is limited. It is well known that multi-task learning can lead to performance improvement on similar tasks/datasets. This does not need to be verified in MNIST-like datasets. The proposed multi-task model is to fine tune a pretrained model, which is already a standard approach for multi-task and transfer learning. So the novelty of this paper is very limited.\n\nThe experiments do not bring too much insights.", "The manuscript mainly utilizing the data from all three MNIST-like datasets to pre-train the parameters of joint classification networks, and the pre-trained parameters are utilized to initialize the disjoint classification networks (of the three datasets).\n\nThe presented idea is quite simple and the authors only re-affirm that multi-task learning can lead to performance improvement by simultaneously leverage the information of multiple tasks. There is no technique contribution.\n\nPros:\n1.\tThe main idea is clearly presented.\n2.\tIt is interesting to visualize the results obtained with/without multi-task learning in Figure 6.\n\nCons:\n1.\tThe contribution is quite limited since the authors only apply multi-task learning to the three MNIST-like datasets and there is no technique contribution.\n2.\tThere is no difference between the architecture of the single-task learning network and multi-task learning network.\n3.\tMany unclear points, e.g., there is no description for “zero-padding” and why it can enhance target label. What is the “two-stage learning rate decay scheme” and why it is implemented? It is also unclear what can we observed from Figure 4.\n", "The paper applies multi-task learning to MNIST (M), FashionNIST (F), and NotMNIST (N) datasets. That is, the authors first train a neural network (with a specific architecture; in this case, it is an all-convolutional network) on a combination of the datasets (M+F; F+N; N+M; M+F+N) and then use the learned weights (in all but the output layer) to initialize the weights for task-specific training on each of the datasets. The authors observe that for each of the combinations, the above approach does better than training on a dataset individually. Further, in all but one case, initializing weights based on training on M+F+N gives the best performance. The improvements are not striking but are noticeable. ", "We thank the reviewer for the thorough analysis and insightful comments. Although it is well known that multi-task learning can lead to performance improvement on similar tasks, there is not too many research on FashionMNIST and NotMNIST datasets. And we also visualize the results obtained with/without multi-task learning to analyze our experiment result. Hopeful it contributes to the research community.", "Thank you for the very pertinent and exhaustive comments about our work. We humbly accept the shortcomings you said about our paper and have already revised the shortcomings in our paper.\n\nAbout the unclear points, our answers as below:\nIn order to compare the result between multi-task learning and single task, we must keep the architecture of the network unchanging.\nThe reason we implement \"zero-padding\" is when the multi-task learning process is executed, we combine the training data of MNIST-like datasets together. 
In order to train a 20(or 30) ways classifier, we have to extend target labels by zero-padding.\nThe “two-stage learning rate decay scheme” we implemented is just a way to decrease learning rate. We found it is helpful to improve recognition accuracies.", "We thank the reviewer for the thorough analysis and insightful comments. \n", "Thanks for your comments:\nI am interested in your fine tune of the hyper-parameters. How did you achieve 99.88% accuracy? Could you please tell me your hyper-parameters settings and network architecture?", "REPLICATED MODEL\n\nThe approach presented in this paper involves the application of the multi-task learning methodology to a supervised learning task in order to determine the effectiveness of using multi-task learning versus using only single-task learning. The data-sets used included three MNIST-like data-sets: MNIST, Not-MNIST, and Fashion-MNIST.\n\nThe multi-task learning paradigm involved pre-training the network with different combinations of the three data-sets. These combinations included three single-tasks, three bi-tasks, and one tri-tasks. The single-tasks refer to simply using each of the three data-sets individually. The three bi-tasks include the three possible combinations of two data-sets: MNIST+Not-MNIST, MNIST+Fashion-MNIST, and Not-MNIST+Fashion-MNIST. The tri-task referred to using all three data-sets for pre-training.\n\nThe weights from the pre-trained networks are then used to initialize a training network for each data-set individually. Note that for the bi-task learning, the image recognition task is not performed on the data-set that was not involved in the pre-training process. For example, if the pre-training was done on a pooled Not-MNIST and Fashion-MNIST data-set, then the image recognition is not performed on the MNIST data-set as the MNIST was not involved in the pre-training process. In addition, for the single-task learning, there was no pre-training procedure.\n\nThe architecture of the neural network was the all convolutional neural network. The whole architecture was constructed using Tensorflow in the replication.\n\nFor the multi-task learning models, the neural network is first pre-trained for 50 epochs in the multi-task context (bi-task or tri-task). With the parameters transferred over, the neural network is then re-trained for another 50 epochs in the single-task context before conducting the final image-recognition, for a total on 100 epochs on the multi-task learning models. For the single-task learning, the neural network is solely trained for 50 epochs on the individual data-set before performing the image-recognition task.\n\nFor each instance of the 50 epochs, a two-stage learning rate decay scheme is employed. For the first stage of 25 epochs in the instance, the learning rate was initialized to 10^{-3} and in the second stage of 25 epochs, the learning rate initialization was reduced to 10^{-5}. \n\nREPLICATION RESULTS\n\nThe results of the replication are as follows: On the MNIST data-set, the single-task achieved 99.59%, MNIST+Fashion-MNIST achieved 99.61%, MNIST+Not-MNIST achieved 99.58%, and MNIST+Fashion-MNIST+Not-MNIST achieved 99.64%. On Not-MNIST, single-task achieved 97.02%, Fashion-MNIST+Not-MNIST achieved 96.95%, MNIST+Not-MNIST achieved 97.08%, and MNIST+Fashion-MNIST+Not-MNIST achieved 96.91%. 
On Fashion-MNIST, single-task achieved 93.92%, Fashion-MNIST+Not-MNIST achieved 93.81%, MNIST+Fashion-MNIST achieved 93.56%, and MNIST+Fashion-MNIST+Not-MNIST achieved 93.57%.\n\nThe original study indicated a consistent increase in accuracy from single-task to tri-task learning, with a general pattern of increase in accuracy going from single-task to multi-task models. The results we obtained did not match this pattern in general. 6 of the 9 multi-task models had a lower accuracy than the corresponding single-task models. In addition, the single-task model achieved the highest accuracy for the Fashion-MNIST data-set.\n\nThe discrepancy in the results is based on very small absolute differences. Since each model was trained only once, the difference in results may have been simply due to chance. Keeping this in mind, further analysis using more training experiments and subsequent statistical significance analysis would be helpful for ascertaining the accuracy of multi-task learning versus single-task learning for MNIST and MNIST-like data-sets.", "This paper seeks to apply multi task learning to MNIST-like image classification tasks. The authors intend to show that classifiers whose weights have been initialized from a multi task classifier trained on several different datasets will outperform single task classifiers trained on a single dataset pertaining to the single task.\n\nThe paper outline several methods of applying multi task learning to neural networks at the input layer, hidden layers, and final output layer. The experiment applies multi-task learning to recognizing digits, fashion items, and letters from the MNIST, FashionMNIST, and NotMNIST datasets respectively. It uses the entire FashionMNIST and MNIST datasets (60,000 training set and 10,000 test set images each) and a subset of 70,000 NotMNIST images to train their learners. First a multi-task classifier is trained on different combinations of datasets (MNIST + FashionMNIST, FashionMNIST + Not MNIST, etc.) for 25 epochs with a learning rate initialized to 1e-3, then another 25 epochs with an initial learning rate of 1e-5. The weights of the trained multi task learners are then used as the initial weights for single task learners that are then trained for a specific task in the same procedure with a different dataset (i.e. an MNIST single task learner trains further on the MNIST dataset). \n\nAll hyperparameters are clearly outlined, including learning rate, optimization function used, and number of epochs ran for training. The authors mention a decay rate for the learning rate but did not specify it. Furthermore they released the code used to run their experiments, and in their code they make extensive use of batch normalization and dropout in their A-CNNs, though the paper makes no mention of it. This made it more difficult to obtain their exact results, because without the code it would have been impossible to recreate their exact training procedure. We did not include batch normalization and dropout in our architecture even after this discovery, to better gauge how well the paper's results can be reproduced based strictly on what is written in it.\n\nThis team was able to closely reproduce the author’s work with a combination of C++, Python, and bash scripts, thanks to the detailed architecture and hyper parameters outlined in the paper. A C++ code preprocessed the datasets, obtained from their official websites, and saved them in a custom format as .ocv files. 
Python scripts built the learners using Keras Tensorflow and fit the models to the appropriate datasets using the same hyperparameters specified by the original paper. A bash script called the Pythons scripts and passed in different arguments for which datasets to train on and what to set the initial learning rate to. \n\nThe results reported by the paper seem to favor the author’s conclusion that multi-task learners are more powerful than their single task counterparts; on the task of classifying digits from the MNIST dataset, for example, a single task learner whose weights were initialized randomly got an accuracy of 99.56\\%, whereas a single task learner initialized from multi task learner trained on the MNIST and Fashion MNIST datasets got an accuracy of 99.71\\%. This team obtained some results that agreed with the author's conclusion, but others that contradicted it. For the same set of comparative results, for example, this team obtained accuracies of 99.36\\% and 99.21\\% respectively, where the single task learner out performed its multi task counterpart. \n\nOverall the results obtained by the authors were easy to reproduce as the authors provided ample detail of their method and architectures used. This team obtained some results that correlated with the author's conclusions, and some that did not. These discrepancies could be attributed to methods used by the others that went unmentioned in the paper, though with results between classifiers differentiating by less than half a percentile in some cases (both ours and the authors) it is difficult to discern a significant trend from regular fluctuation.", "This paper examined the improvement that multi-task learning could bring through training a fixed-structure\nall-convolutional neuron network using three image text datasets. Multi-task learning was achieved by first training\na network in a pre-defined way to get the weights between layers and then using this weights to initialize the\nnetworks for single-task learning.\n1) Implementation: We tried to reproduce the accuracy results in the paper for single-task learning and multi-task\nlearning on MNIST, NotMNIST and FashionMNIST datasets. A same CNN structure was used throughout. The\nauthors provided the code which was simple, elegantly written and well formatted. Thus we were able to reproduce\nthe results very easily. For single-task learning, 50 epochs were used to train the model with a single dataset using\n4:1 training and validation data split.\nFew key differences were observed between the actual implemented model and the model presented in the\npaper. Moreover, some important details regarding the model architecture were missing. For example, there was\nno mention of batch normalization in the manuscript, although it was used in the code. Batch normalization was\nused after each convolution layer. It was difficult to reproduce the results without such details. But as soon as the\ncode was given the entire structure was very clear. However, hyperparameters and structure of the model could\nhave been explained in more detail. 
Inclusion of some statistics comparing accuracy of single and multi-tasking\nmodels would have been more useful.\n2) Reproduced Results: Denoting MNIST, NotMNIST and FashionMNIST by M, N and F respectively, the\naccuracies that we obtained on MNIST using M, M+N, M+F and M+N+F are 99.71%, 99.67%, 99.67% and\n99.63% respectively; the results on NotMNIST for N, M+N, N+F and M+N+F are 97.55%, 97.53%, 97.43%,\n97.54% and the results on FashionMNIST for F, F+N, M+F and M+N+F are 94.81%, 94.92%, 94.90% and 95.07\nrespectively.We obtained slightly different results as compared to manuscript. The reason behind this could be\ndifferent version of tensorflow (we used 1.4.0). It was very helpful of the authors to mention the exact version of\nthe libraries used which supported our ease of reproducing it.\nWhen reproducing the results for single-task learning, we tried to tune the hyper-parameters and achieved a better\naccuracy of 99.88% for single-task learning than in the paper. Hence, we believe that some more hyper-parameter\noptimization and then implementing a multi-task learning would be helpful. Overall, this paper brings out a very\ninteresting insight which can be of great use if the same thing could be extended to other different domains", "Can you guys please comment on how did you generate the t-SNE plot or share the code for it.\nThanks", "The code has been upload to Github:https://github.com/st70712/image-multi-task-learning-\nYou can find the detail in \"TSNE_CNN.ipynb\" file.", "Dear reader:\nThe CNN model which we reference from 'Striving for Simplicity: The All Convolutional Net'. \nYou can find more detail in their paper.The “two-stage learning rate decay scheme” we implemented is just a way to decrease learning rate.We found it is helpful to improve recognition accuracies.\n", "Hi I am wondering why you chose the specific CNN structure (e.g. hyper-parameters) and why you chose this changing learning rate. Could you please justify your choices?", "github link:https://github.com/st70712/image-multi-task-learning-", "Hello, we are also intending to reproduce the results of your paper. Could you please post the github link here as a comment once you manage to upload the code? \n\nWould you also be able to post the exact 60,000 train and 10,000 test data points you used from the Not MNIST data set, so that we could use the exact same datasets to reproduce your results? \n", "Thanks for your comments:\n1.Yes, I use cross-entropy as the loss function in my experiment.\n2.The hyper-parameters of single-task models are as same as multi-task models. Each network is trained with 50 epochs. A two-stage learning rate decay scheme is implemented. The initial learning rate is 0.001 for the first stage of 25 epochs, and 0.00001 for the second stage of 25 epochs. The size of a mini-batch is set to 100.\n3.Learning rate decay function is: LearningRate = LearningRate * 1/(1 + decay * epoch). \nThe decay argument is set to LearningRate /25\n4.I will upload my code to github as soon as possible.\n\n\n\nHope the response above clarified your questions.", "This is not really a comment. As we intend to reproduce the results in your paper, we would like to know more about the implementation details.\n\nCould you please tell us:\n1. if you used cross-entropy error or any other criteria as the loss function, \n2. how many epochs did you use to train the pre-trained single-task models and what the learning rate and the mini-batch size were, and\n3. 
what function was used to specify the decay of the learning rate for training the multi-task models? \n\nBesides, would it be possible for you to share your code with us?\n\nThanks." ]
[ 5, 4, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 5, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_S1PWi_lC-", "iclr_2018_S1PWi_lC-", "iclr_2018_S1PWi_lC-", "Sk-OB6wlM", "rJPeuDOxM", "B19pUEjlz", "Hyo0zyzGz", "iclr_2018_S1PWi_lC-", "iclr_2018_S1PWi_lC-", "iclr_2018_S1PWi_lC-", "iclr_2018_S1PWi_lC-", "HJdSRRnWz", "S1za-QyZf", "BJ-kwu9eM", "SyveRxjlM", "BJ-kwu9eM", "SJV6JGqgM", "iclr_2018_S1PWi_lC-" ]
iclr_2018_B1p461b0W
Deep Learning is Robust to Massive Label Noise
Deep neural networks trained on large supervised datasets have led to impressive results in recent years. However, since well-annotated datasets can be prohibitively expensive and time-consuming to collect, recent work has explored the use of larger but noisy datasets that can be more easily obtained. In this paper, we investigate the behavior of deep neural networks on training sets with massively noisy labels. We show on multiple datasets such as MNIST, CIFAR-10 and ImageNet that successful learning is possible even with an essentially arbitrary amount of noise. For example, on MNIST we find that an accuracy above 90 percent is still attainable even when the dataset has been diluted with 100 noisy examples for each clean example. Such behavior holds across multiple patterns of label noise, even when noisy labels are biased towards confusing classes. Further, we show how the required dataset size for successful training increases with higher label noise. Finally, we present simple actionable techniques for improving learning in the regime of high label noise.
rejected-papers
The paper studies the robustness of deep learning against label noise on MNIST, CIFAR-10 and ImageNet. However, the generality of the claim "deep learning is robust to massive label noise" remains questionable given the limited noise types investigated. The paper also presents some tricks for improving learning under high label noise (adjusting batch size and learning rate), but these are not novel enough.
test
[ "B1STvPVyG", "HJTr9mqgM", "BkC-glk-M", "BktM0Ppmf", "rJqJbCqXf", "BkKLl097G", "rkfFCmgzz", "ByRhoHM1G", "H1pqnCHCZ" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "public", "author", "public" ]
[ "The paper makes a bold claim, that deep neural networks are robust to arbitrary level of noise. It also implies that this would be true for any type of noise, and support this later claim using experiments on CIFAR and MNIST with three noise types: (1) uniform label noise (2) non-uniform but image-independent label noise, which is named \"structured noise\", and (3) Samples from out-of-dataset classes. The experiments show robustness to these types of noise. \n\nReview: \nThe claim made by the paper is overly general, and in my own experience incorrect when considering real-world-noise. This is supported by the literature on \"data cleaning\" (partially by the authors), a procedure which is widely acknowledged as critical for good object recognition. While it is true that some image-independent label noise can be alleviated in some datasets, incorrect labels in real world datasets can substantially harm classification accuracy.\n\nIt would be interesting to understand the source of the difference between the results in this paper and the more common results (where label noise damages recognition quality). The paper did not get a chance to test these differences, and I can only raise a few hypotheses. First, real-world noise depends on the image and classes in a more structured way. For instance, raters may confuse one bird species from a similar one, when the bird is photographed from a particular angle. This could be tested experimentally, for example by adding incorrect labels for close species using the CUB data for fine-grained bird species recognition. Another possible reason is that classes in MNIST and CIFAR10 are already very distinctive, so are more robust to noise. Once again, it would be interesting for the paper to study why they achieve robustness to noise while the effect does not hold in general. \n\nWithout such an analysis, I feel the paper should not be accepted to ICLR because the way it states its claim may mislead readers. \n\nOther specific comments: \n-- Section 3.4 the experimental setup, should clearly state details of the optimization, architecture and hyper parameter search. For example, for Conv4, how many channels at each layer? how was the net initialized? which hyper parameters were tuned and with which values? were hyper parameters tuned on a separate validation set? How was the train/val/test split done, etc. These details are useful for judging technical correctness.\n-- Section 4, importance of large datasets. The recent paper by Chen et al (2017) would be relevant here.\n-- Figure 8 failed to show for me. \n-- Figure 9,10, need to specify which noise model was used.\n\n\n\n\n\n", "The authors study the effect of label noise on classification tasks. They perform experiments of label noise in a uniform setting, structured setting as well provide some heuristics to mitigate the effect of label noise such as changing learning rate or batch size. \n\nAlthough, the observations are interesting, especially the one on MNIST where the network performs well even with correct labels slightly above chance, the overall contributions are incremental. Most of the observations of label noise such as training with structured noise, importance of larger datasets have already been archived in prior work such as in Sukhbataar et.al. (2014) and Van Horn et. al (2015). Agreed that the authors do a more detailed study on simple MNIST classification, but these insights are not transferable to more challenging domains. 
\n\nThe main limitation of the paper is proposing a principled way to mitigate noise as done in Sukhbataar et.al. (2014), or an actionable trade-off between data acquisition and training schedules. \n\nThe authors contend that the way they deal with noise (keeping number of training samples constant) is different from previous setting which use label flips. However, the previous settings can be reinterpreted in the authors setting. I found the formulation of the \\alpha to be non-intuitive and confusing at times. The graphs plot number of noisy labels per clean label so a alpha of 100 would imply 1 right label and 100 noisy labels for total 101 labels. In fact, this depends on the task at hand (for MNIST it is 11 clean labels for 101 labels). This can be improved to help readers understand better. \n\nThere are several unanswered questions as to how this observation transfers to a semi-supervised or unsupervised setting, and also devise architectures depending on the level of expected noise in the labels. \n\nOverall, I feel the paper is not up to mark and suggest the authors devote using these insights in a more actionable setting. \nMissing citation: \"Training Deep Neural Networks on Noisy Labels with Bootstrapping\", Reed et al.\n", "The problem the authors are tackling is extremely relevant to the research community. The paper is well written with considerable number of experiments. While a few conclusions are made in the paper a few things are missing to make even broader conclusions. I think adding those experiments will make the paper stronger! \n\n1. Annotation noise is one of the biggest bottleneck while collecting fully supervised datasets. This noise is mainly driven by lack of clear definitions for each concept (fine-grained, large label dictionary etc.). It would be good to add such type of noise to the datasets and see how the networks perform.\n2. While it is interesting to see large capacity networks more resilient to noise I think the paper spends more effort and time on small datasets and smaller models even in the convolutional space. It would be great to add more experiments on the state of the art residual networks and large datasets like ImageNet.\n3. Because the analysis is very empirical and authors have a hypothesis that the batch size is effectively smaller when there are large batches with noisy examples it would be good to add some analysis on the gradients to throw more light and make it less empirical. Batch size and learning rate analysis was very informative but should be done on ResNets and larger datasets to make the paper strong and provide value to the research community.\n\nOverall, with these key things missing the paper falls a bit short making it more suitable for a re submission with further experiments.", "We would like to thank the reviewer for their time and insightful comments, and we respond below to the particular issues raised.\n\n1. We agree with the reviewer that there are different types of noise depending on the granularity and size of the dataset dictionary. This was our motivation in using three different datasets, ImageNet, CIFAR, and MNIST, to capture several different paradigms of distinctiveness between classes.\n\n2. Our motivation in working with smaller models was greater computational tractability in considering different training set sizes, noise types, and hyperparameter choices. Our experiments with ResNets on ImageNet illustrate the consistency of our main robustness results across models and dataset sizes.\n\n3. 
We would be very interested in a theoretical analysis to complement our empirical findings. Establishing our conclusions required a significant amount of experimentation, inducing us to relegate a corresponding theoretical analysis to a separate work.", "We would like to thank the reviewer for their time and insightful comments. \n\nWe agree with the reviewer that it is key to have clean data to achieve top performance. In our work, we aim to measure precisely the effect of noisy annotations on classification performance, complementing prior work in which increasing noise simultaneously implied a decrease in clean data.\n\nWe thank the reviewer for the excellent question, on how our experiments capture the variability of natural, especially fine-grained, datasets. Our use of ImageNet, CIFAR, and MNIST was intended to capture several different paradigms of distinctiveness between classes. We consider noise which is class-dependent but image-independent - that is, two pictures from the same class are equally likely to be mislabeled, with no images being “especially confusing”. Our interest in this formulation was in fact motivated by datasets used for fine-grained identification.\n\nWe observed that in the field, practitioners were using large datasets with abundant class-dependent errors, instead of occasional image-dependent errors. For example, a Google Image search for “Blue Jay”, a common species of bird, turns up many incorrect examples. Some are not birds at all (such as the similarly named baseball team). Of those that are, most are high-quality images of other species (such as the Steller’s Jay). We do not observe a prevalence of ambiguous images among those that are falsely labeled.\n\nWe have addressed the specific comments in our revision. We are uncertain why Figure 8 should have failed to appear for some readers and have reformatted the image; please let us know if further issues arise.", "We would like to thank the reviewer for their time and insightful comments. \n\nThe questions of improving training under noisy conditions, cleaning noisy data before training, and reducing noise in dataset acquisition are all very important. As you mention, there has already been excellent work on these questions. Complementary to these lines of research, we aim to show that standard methods can work remarkably well even in the presence of high noise. Certainly, it is always better to have less noise; and if noise is present, then an explicitly noise-robust algorithm might provide better results. However, there are many circumstances in which ConvNets are naively applied without understanding the extent of noise in the data. For such cases, we believe it is constructive to demonstrate the remarkable robustness of the “vanilla” training paradigm.\n\nWe respectfully disagree that prior experiments adequately consider the question of simultaneous large noise and large training set size. In prior work, we have found observations that performance decreases when good training examples are turned into bad ones. This is, however, unsurprising since it essentially represents a double attack: The signal is being diluted with noise *and* there is less total signal. We believe that to understand the robustness to label noise it is key to separate the two effects. To that end, we consider experiments in which the extent of the signal is fixed, but the dilution can be controlled. 
Specifically, we find that the likely explanation for the previously observed decreases in performance is that the datasets were too small to compensate for the noise.\n\nWe did indeed consider alternatives to alpha in parametrizing the extent of noise. As mentioned above, we chose this measurement because it allows us to keep the total amount of signal constant while varying the extent to which this signal is diluted. In light of your helpful comments, we have made some changes in the revision as to how this is described.\n\nWe have added the missing citation - thank you for drawing this to our attention.", "The extended review we wrote for our machine learning course can be found at https://drive.google.com/file/d/1AHRxuKsR_ywKGAeXUo1QfYbEmdf0Kwxk/view?usp=sharing\n\nOur code can be found at \nhttps://drive.google.com/open?id=1elVTaqdXMDEIQW3qITOYCnKiMYXks_6a \n\nIntroduction\n\nWe were unable to closely match the authors’ experiments, but had similar trends holding for most, except for the effect of batch size on performance. Their claim that learning is possible under heavy noise was generally confirmed by our team. \n\nThe paper having clearly explained plots and noise measures, we focused on trying to reproduce the plots they obtained or, at the very least, confirm the qualitative behavior of neural nets the authors observed for the different datasets and noise settings they selected.\n\nResults\n\nWhen training a 4-layer convolutional neural net over MNIST with an alpha of 100, the authors of the original paper managed to obtain an accuracy above 90 % when we only managed to reach an accuracy of 20% for such high alpha. This is a large gap, but our experiment still confirms learning beyond random predictions is possible even with such high label noise, as the authors' paper claim.\n\nThe paper also explored how many properly labels were needed to get satisfying performance. When using 10 000 correctly labeled images from MNIST with alpha = 20, the authors were able to obtain classification accuracy beyond 90% and our team reached accuracy above 80%. For a fixed alpha, increasing the number of good labels produced an increase in performance in both the authors' attempts and ours for alpha in {0,10,20}. However, for alpha = 50, accuracy drops after 10 000 good labels in our experiment. Moreover, the authors' accuracy takes a large jump around 1000 good labels for alpha=20 when for our reproduction of this experiment, the accuracy increases more smoothly over the whole range for alpha.\n\nFor ImageNet, given the size of the original dataset, it was not realistic to expect the same results with our attempt to reproduce the experiment. The results from the original paper for alpha in {0,1,2,3,4,5} were roughly 70%, 63%, 58%, 55%, 52%, 45% respectively under uniform noise. For alpha increasing from 0 to 5, our classification accuracy obtained decreased from 5% to 0.5%, a much lower accuracy. \n\nThe original paper obtained above 60 % accuracy for alpha when using mislabeled examples from CIFAR-10 whereas we did not manage to obtain accuracy beyond random performance for such high alpha.\n\nDiscussion\n \nHere, we summarize what helped reproduce the paper and what made it more difficult.\n\nNoise measures like alpha or delta were properly defined and explained. The nets' types and depths were provided along learning rates and batch sizes for training. However, other hyper-parameters like the number of units per layer or activation function were not specified. 
As their code was not clearly accessible, we had to implement nets similar to the ones discussed in their paper starting only from the few details shared.\n\nAll plots had clear legends and always had an explanation in the text or right below the plot. This made our attempts at reproducing their work much easier. Overall, the paper was well written and organized and its themes and ideas were clearly stated.\n\nIn terms of time constraints, we ignore how long it took the authors to complete the experiments they ran. In particular, the number of training epochs required to run any of the neural nets could not be found.\n\nFurthermore, the authors did not mention what preprocessing steps were needed to obtain their results. This is quite important information since preprocessing details state the assumptions made on the data. It is possible that the authors only used raw images for training, but this should have been explicitly stated, as many an alternatives, such as image augmentation, could affect results.\n\nConclusion\n\nThe paper we studied looked at how much noise neural nets can handle when bad labels make up a significant proportion of a training set. Their global claim that neural nets can learn under heavy noise was confirmed by most of our experiments, but matching their classification performances was unsuccessful. More details of implementation would help reproduce their work and confirm the trends their results suggested. Moreover, the time required to complete their study would give a rough initial idea of how long it would take someone to reproduce their work. In our situation, time was limited, so having clear, access to these training settings and extra hyper-parameters would have made our work easier. \n\nFortunately, the authors' themes and plots were well-explained and understandable. In addition, their metrics and noise definitions were described in details, making their work realistically reproducible.", "Thank you for bringing this up. Indeed, the difference between ResNets and CaffeNet likely makes a difference in the observed results. Further, in the referred paper, as in other prior work, an increase in noise comes at the expense of a smaller number of clean training examples. As we show in Section 4, this likely causes lower training performance than could be achieved over a large training set with the same level of noise.", "Here https://arxiv.org/abs/1606.02228 accuracy drops 27%, with 1:1 noisy/clean labels, which is much more than in the paper. Although, may be ResNets are much more robust than CaffeNet. " ]
[ 5, 4, 5, -1, -1, -1, -1, -1, -1 ]
[ 4, 5, 5, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_B1p461b0W", "iclr_2018_B1p461b0W", "iclr_2018_B1p461b0W", "BkC-glk-M", "B1STvPVyG", "HJTr9mqgM", "iclr_2018_B1p461b0W", "H1pqnCHCZ", "iclr_2018_B1p461b0W" ]
iclr_2018_H1O0KGC6b
Post-training for Deep Learning
One of the main challenges of deep learning methods is the choice of an appropriate training strategy. In particular, additional steps, such as unsupervised pre-training, have been shown to greatly improve the performance of deep structures. In this article, we propose an extra training step, called post-training, which only optimizes the last layer of the network. We show that this procedure can be analyzed in the context of kernel theory, with the first layers computing an embedding of the data and the last layer a statistical model to solve the task based on this embedding. This step makes sure that the embedding, or representation, of the data is used in the best possible way for the considered task. This idea is then tested on multiple architectures with various data sets, showing that it consistently provides a boost in performance.
rejected-papers
* the proposed fine-tuning of only the last layer is not novel enough * experiments are not sufficient to isolate the differences to support the benefit of post-training
train
[ "HyyQKdklf", "ry63clwxz", "H1zHTO2ef", "H1orCErGz", "HJL3TESff", "HkrvrXMGz", "r1QnTOhJf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "public", "public" ]
[ "Summary: \nBased on ideas within the context of kernel theory, the authors consider post-training of NNs as an extra training step, which only optimizes the last layer of the network.\nThis additional step makes sure that the embedding, or representation, of the data is used in the best possible way for the considered task (which is also reflected in the experiments).\n\nAccording to the authors, the contributions are the following:\n1. Post-training step: keeping the rest of the NN frozen (after training), the method trains the last layer in order to \"make sure\" that the representation learned is used in the most efficient way.\n2. Highlighting connections with kernel techniques and RKHS optimization (like kernel ridge regression).\n3. Experimental results.\n\nClarity:\nThe paper is well-written, the main ideas well-clarified. \n\nImportance:\nWhile the majority of papers nowadays focuses on the representation part (i.e., how we get to \\Phi_{L-1}(x)), this paper assumes this is given and proposes how to optimize the weights in the final step of the algorithm. This by itself is not enough to boost the performance universally (e.g., if \\Phi_{L-1} is not well-trained, the problem is deeper than training the last layer); however, it proposes an additional step that can be used in most NN architectures. From that front (i.e., proposing to do something different than simply training a NN), I find the paper interesting, that might attract some attention at the conference.\n\nOn the other hand, to my humble opinion, the experimental results do not show a significant gain in the performances of all networks (esp. Figure 3 and Table 1 are within the range of statistical error). In order to state something like this universally, either one needs to perform experiments with more than just MNIST/CIFAR datasets, or even more preferably, prove that the algorithm performs better.\n\nOriginality:\nIt would be great to have some more theory (if any) for the post-training step, or investigate more cases, rather than optimizing only the last layer.\n\nComments:\n1. I assume the authors focused in the last layer of the NN for simplicity, but is there a reason why one might want to focus only on the last layer? One reason is convexity in W of the problem (2). Any other? \n\n2. Have the authors considered (even in practice only) to include training of the last 2 layers of the NN? The authors state this question in the future direction, but it would make the paper more complete to consider it here.\n", "This paper proposes to fine-tune the last layer while keeping the others fixed, after initial end-to-end training, viewing the last layer learning under the light of kernel theory (well actually it's just a linear model).\n\nSummary of evaluation\n\nThere is not much novelty in this idea (of optimizing carefully only the last layer as a post-training stage or treating the last layer as kernel machine in a post-processing step), which dates back at least a decade, so the only real contribution would be in the experiments. However the experimental setup is questionable as it does not look like the same care has been given to control overfitting with the 'regular training' method.\n\nMore details\n\nPrevious work on the same idea: at least a decade old, e.g., Huang and LeCun 2006. 
See a review of such work in 'Deep Learning using Linear Support Vector Machines' more recently.\n\nExperiments\n\nYou should also have a weight norm penalty in the end-to-end ('regular training') case and make sure it is appropriately and separately tuned (not necessarily the same value as for the post-training). Otherwise, the 'improvements' may simply be due to better regularization in one case vs the other, and the experimental curves suggest that interpretation is correct.\n", "This paper demonstrate that by freezing all the penultimate layers at the end of regular training improves generalization. However, the results do not convince this reviewer to switch to using 'post-training'.\n\nLearning features and then use a classifier such as a softmax or SVM is not new and were actually widely used 10 years ago. However, freezing the layers and continue to train the last layer is of a minor novelty. The results of the paper show a generalization gain in terms of better test time performance, however, it seems like the gain could be due to the \\lambda term which is added for post-training but not added for the baseline. c.f. Eq 3 and Eq 4.\nTherefore, it's unclear whether the gain in generalization is due to an additional \\lambda term or from the post-training training itself.\n\nA way to improve the paper and be more convincing would be to obtain the state-of-the-art results with post-training that's not possible otherwise.\n\nOther notes, \n\nRemark 1: While it is true that dropout would change the feature function, to say that dropout 'should not be' applied, it would be good to support that statement with some experiments.\n\nFor table 1, please use decimal points instead of commas.\n", "We would like to thank you for taking the time to read the paper and to test the code.\nYour comments will help us improve the paper reproducibility.", "We would like to thank the reviewers for their feedbacks.\n\nWe agree that our paper could benefit from additional experiments, particularly with more recent networks. Additionally, we concur that additional experiments studying the influence of the L2 regularisation on the regular / post training steps could help highlight the interest of the post training step.\n\nHowever, we would like to point out that we conducted additional experiments and it is the authors’s belief that while post training is not a very complex or fully original idea, it does seem to provide interesting improvement to the performance of many networks for a negligible cost — and thus worth exploring.\n\nOverall, we acknowledge the reviewer decision to reject this paper and will work to improve it.", "The paper on Post-Training in Deep Learning suggests another phase of training after a phase of regular training in neural networks. The second phase involves freezing all but the last layer and optimizing the loss function with respect to the weights at the last layer over several additional iterations of training. The authors assert that this additional phase can lead to an improvement of performance in neural networks.\n\nWe attempted to reproduce the experiments performed by the authors in order to verify the claims in the paper. The paper is crisp, clear, and well-written in its explanations and hence facilitated the understanding and reproducibility process. In addition, the authors have kindly made their code public, and all experiments were conducted on either well-known datasets or those that could be generated with details provided in the paper. 
We found that the clarity of the code is also reasonable and any technical details lacking in the report could, for the most part, be found in the code.\n\nThe authors performed experiments on three major classes of neural networks, namely CNNs, RNNs, and Feed Forward Neural Networks, using different datasets to compare the performance of additional post-training with classical training. We observed a discrepancy between the paper and its implementation in the setting of these iterations. In general, in the provided code, the accuracy or error comparison (based on the experiment) is made between q iterations of regular training and q regularly-trained iterations + p iterations of post-training. In only the experiment of the CIFAR-10 dataset using CNN, q+100 iterations of regular-training were compared with q regularly-trained iterations + 100 iterations of post-training. Nonetheless, for all observed iterations and network complexities, we confirmed the authors’ findings that applying post-training, on average, improves test accuracy, and executes faster than regular training per iteration. That being said, through our own experimentation, we found that in the Kernel Ridge Regression experiment, an additional number of iterations of regular training would, in-fact, result in lower test error than the equal number of additional iterations of post-training, without necessarily overfitting. For example, at 250 iterations on the fully-connected network using the Parkinson Telemonitoring dataset, the error claimed by the authors in Table 2 of the paper is 0.832. With an additional 200 iterations of post-training, the error reduces to 0.433, however if instead 200 iterations of regular training were performed, we found that the error would have reduced to 0.167. Also, for the CNN experiment on the MNIST dataset, there was no clear relationship between the number of training iterations in the paper and those run in the code, which meant we could not reproduce the training conditions with certainty.\n\nOverall, we believe that the paper is well presented and the experiments support the advantages of post-training stated by the authors. We conclude that the paper is reproducible.\n\nLink to full report: https://github.com/deekshaarya4/Post_training/blob/master/reproducing-post-training-deeplearning.pdf", "Dear authors, \n\nThank you to the authors for this interesting paper, which I enjoyed :)\n\nThe goal of the proposed method is to separate representation learning (done by the network in all but the final layer) and the task at hand (e.g., classification, which is performed by the final layer of the network). As such, the authors propose a final 'post training' step, which only updates the final layers of a network. One benefit of such an approach is that, given a fixed representation, learning the final layer weights is convex for many choice of the activation functions in the final layer. \n\nThe authors propose an interesting link with kernels. From reading the paper, it seemed to me that there was also a relationship between representation learning from the perspective of non-linear ICA, recently proposed by Hyvarinen & Morioka (2016), which I thought I would share with you. 
\nIn their work, Hyvarinen & Morioka show that when a classification task is combined with a function approximator (eg a deep net) , the final representation learnt by the network (i.e., what the authors here refer to as $\\Phi_{L-1}(x)$) will be equal to the independent components which generated the data (roughly speaking). As a result, it may be possible/interesting to interpret post training as first learning the non-linear unmixing of independent components followed by post training which then performs classification on the original independent components. \n\nDisclaimer: I am not associated with the referenced paper, just wanted to provide an additional justification for the proposed method :)\n\nGood luck!\n\nReferences:\nHyvarinen & Morioka, \"Unsupervised Feature Extraction by Time-Contrastive Learning and Nonlinear ICA\", NIPS 2016" ]
[ 5, 3, 4, -1, -1, -1, -1 ]
[ 4, 5, 4, -1, -1, -1, -1 ]
[ "iclr_2018_H1O0KGC6b", "iclr_2018_H1O0KGC6b", "iclr_2018_H1O0KGC6b", "HkrvrXMGz", "iclr_2018_H1O0KGC6b", "iclr_2018_H1O0KGC6b", "iclr_2018_H1O0KGC6b" ]
iclr_2018_H1-oTz-Cb
Parametrizing filters of a CNN with a GAN
It is commonly agreed that the use of relevant invariances as a good statistical bias is important in machine learning. However, most approaches that explicitly incorporate invariances into a model architecture only make use of very simple transformations, such as translations and rotations. Hence, there is a need for methods to model and extract richer transformations that capture much higher-level invariances. To that end, we introduce a tool that allows us to parametrize the set of filters of a trained convolutional neural network with the latent space of a generative adversarial network. We then show that the method can capture highly non-linear invariances of the data by visualizing their effect in the data space.
rejected-papers
The experiments are not sufficient to support the claim. The authors plan to improve it for future publication.
val
[ "ByXiDMPlG", "Bk7nEndeG", "HJoUOC5eM", "rJsyhEaXM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author" ]
[ "The paper proposes an approach to learning a distribution over filters of a CNN. The method is based on a adversarial training: the generator produces filters, and the discriminator aims to distinguish the activation maps produced by real filters from those produced by the generated ones. \n\nPros:\n1) The general task of learning distributions over network weights is interesting\n2) To my knowledge, the proposed approach is new\n\nCons:\n1) Experimental evaluation is very substandard. The experiments on invariances seem to be the highlight of the paper, but they basically do not tell me anything. \n - Figures 3 and 4 take 2 pages, but what should one see there?\n - There are no quantitative results. Could there be a way to measure the invariances?\n - Can the results be applied to some practical task? Why are the results interesting and/or useful?\n2) The experiments are restricted to a single dataset - MNIST. The authors mention that “the test accuracy obtained by following the above procedure is of 0.982, against a test accuracy of 0.971 for the real CNN” - these are very poor accuracies for MNIST. So even the MNIST results do not seem convincing.\n3) Presentation is suboptimal, and many details are missing. For instance, architectures of networks are not provided.\n \nTo conclude, while the general direction is interesting and the proposed method might work, the experimental evaluation is very poor, and the paper absolutely cannot be accepted for publication.", "Recent work on incorporating prior knowledge about invariances into neural networks suggests that the feature dimension in a stack of feature maps has some kind of group or manifold structure, similar to how the spatial axes form a plane. This paper proposes a method to uncover this structure from the filters of a trained ConvNet. The method uses an InfoGAN to learn the distribution of filters. By varying the latent variables of the GAN, one can traverse the manifold of filters. The effect of moving over the manifold can be visualized by optimizing an input image to produce the same activation profile when using a perturbed synthesized filter as when using an unperturbed synthesized filter.\n\nThe idea of empirically studying the manifold / topological / group structure in the space of filters is interesting. A priori, using a GAN to model a relatively small number of filters seems problematic due to overfitting, but the authors show that their InfoGAN approach seems to work well.\n\nMy main concerns are:\n\nControls\nTo generate the visualizations, two coordinates in the latent space are varied, and for each variation, a figure is produced. To figure out if the GAN is adding anything, it would be nice to see what would happen if you varied individual coordinates in the filter space (\"x-space\" of the GAN), or varied the magnitude of filters or filter planes. Since the visualizations are as much a function of the previous layers as they are a function of the filters in layer l which are modelled by the GAN, I would expect to see similar plots for these baselines.\n\nLack of new Insights\nThe visualizations produced in this paper are interesting to look at, but it is not clear what they tell us, other than \"something non-trivial is going on in these networks\". In fact, it is not even clear that the transformations being visualized are indeed non-linear in pixel space (note that even a 2D diffeomorphism, which is a non-linear map on R^2, is a linear operator on the space of *functions* on R^2, i.e. on the space of images). 
In any case, no attempt is made to analyze the results, or provide new insights into the computations performed by a trained ConvNet.\n\nInterpretation\nThis is a minor point, but I would not say (as the paper does) that the method captures the invariances learned by the model, but rather that it aims to show the variability captured by the model. A ReLU net is only invariant to changes that are mapped to zero by the ReLU, or that end up in the kernel of one of the linear layers. The presented method does not consider this and hence does not analyze invariances.\n\nMinor issues:\n- In the last equation on page 2, the right-hand side is missing a \"min max\".", "This paper wants to probe the non-linear invariances learnt by CNNs. This is attempted by selecting a particular layer, and modelling the space of filters that result in activations that are indistinguishable from activations generated by the real filters (using a GAN). For a GAN noise vector a plausible filter set is created, and for a data sample a set of plausible activations are computed. If the noise vector is perturbed and a new plausible filter set is created, the input data can be optimised to find the input that produces the same set of activations. The claim is that the found input represents the non-linear transformations that the layer is invariant to.\n\nThis is a really interesting perspective on probing invariances and should be explored more. I am not convinced that this particular method is showing much information or highlighting anything particularly interesting, but could be refined in the future to do so.\n\nIt seems that the generated images are not actually plausible images at all and so not many conclusions can be drawn from this method. Instead of performing the optimisation to find x' have you tried visualising the real data sample that gives the closest activations?\n\nI think you may want to consider minimising ||a(x'|z) - a(x|z_k)|| instead to show that moving from x -> x' is the same as is invariant under the transformation z -> z_k (and thus the corresponding movement in filter space). This (the space between x and x') I think is more interpretable as the invariance corresponding to the space between z and z_k. Have you tried that?\n\nThere is no notion of class invariance, so the GAN can find the space of filters that transform layer inputs into other classes, which may not be desirable. Have you tried conditioning the GAN on class?\n\nOverall I think this method is inventive and shows promise for probing invariances. I'm not convinced the current incarnation is showing anything insightful or useful. It also should be shown on more than a single dataset and for a single network, at the moment this is more of a workshop level paper in terms of breadth and depth of results.", "We thank all reviewers for their valuable and detailed feedback.\nWe will incorporate it into the work for future publication." ]
[ 2, 4, 4, -1 ]
[ 4, 4, 5, -1 ]
[ "iclr_2018_H1-oTz-Cb", "iclr_2018_H1-oTz-Cb", "iclr_2018_H1-oTz-Cb", "iclr_2018_H1-oTz-Cb" ]
iclr_2018_HkwrqtlR-
WHAT ARE GANS USEFUL FOR?
GANs have shown how deep neural networks can be used for generative modeling, aiming at achieving the same impact that they brought for discriminative modeling. The first results were impressive: GANs were shown to be able to generate samples in high dimensional structured spaces, like images and text, that were not copies of the training data. But generative and discriminative learning are quite different. Discriminative learning has a clear end goal, while generative modeling is an intermediate step to understand the data or generate hypotheses. The quality of implicit density estimation is hard to evaluate, because we cannot tell how well the data is represented by the model. How can we say with certainty that a generative process is generating natural images with the same distribution as we do? In this paper, we observe that even though GANs might not be able to generate samples from the underlying distribution (or at least we cannot tell), they are capturing some structure of the data in that high dimensional space. It is therefore necessary to address how we can leverage the estimates produced by GANs in the same way we are able to use other generative modeling algorithms.
rejected-papers
As the reviewers said, it is unclear what the main contribution of the paper is.
train
[ "rymXy2ggz", "HkqCiwdlf", "rJE3PW5gM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The main take-away messages of this paper seem to be:\n\n1. GANs don't really match the target distribution. Some previous theory supports this, and some experiments are provided here demonstrating that the failure seems to be largely in under-sampling the tails, and sometimes perhaps in introducing spurious modes.\n\n2. Even if GANs don't exactly match the target distribution, their outputs might still be useful for some tasks.\n\n(I wouldn't be surprised if you disagree with what the main takeaways are; I found the flow of the paper somewhat disjointed, and had something of a hard time identifying what the \"point\" was.)\n\nMode-dropping being a primary failure mode of GANs is already a fairly accepted hypothesis in the community (see, e.g. Mode Regularized GANs, Che et al ICLR 2017, among others), though some extra empirical evidence is provided here.\n\nThe second point is, in my opinion, simultaneously (i) an important point that more GAN research should take to heart, (ii) relatively obvious, and (iii) barely explored in this paper. The only example in the paper of using a GAN for something other than directly matching the target distribution is PassGAN, and even that is barely explored beyond saying that some of the spurious modes seem like reasonable-ish passwords.\n\nThus though this paper has some interesting aspects to it, I do not think its contributions rise to the level required for an ICLR paper.\n\nSome more specifics:\n\nSection 2.1 discusses four previous theoretical results about the convergence of GANs to the true density. This overview is mostly reasonable, and the discussion of Arora et al. (2017) and Liu et al. (2017) do at least vaguely support the conclusion in the last section of this paragraph. But this section is glaringly missing an important paper in this area: Arjovsky and Bottou (2017), cited here only in passing in the introduction, who proved that typical GAN architectures *cannot* exactly match the data distribution. Thus the question of metrics for convergence is of central importance, which it seems should be important to the topic of the present paper. (Figure 3 of Danihelka et al. https://arxiv.org/abs/1705.05263 gives a particularly vivid example of how optimizing different metrics can lead to very different results.) Presumably different metrics lead to models that are useful for different final tasks.\n\nAlso, although they do not quite fit into the framing of this section, Nowozin et al.'s local convergence proof and especially the convergence to a Nash equilibrium argument of Heusel et al. (NIPS 2017, https://arxiv.org/abs/1706.08500) should probably be mentioned here.\n\nThe two sample testing section of this paper, discussed in Section 2.2 and then implemented in Section 3.1.1, seems to be essentially a special case of what was previously done by Sutherland et al. (2017), except that it was run on CIFAR-10 as well. However, the bottom half of Table 1 demonstrates that something is seriously wrong with the implementation of your tests: using 1000 bootstrap samples, you should reject H_0 at approximately the nominal rate of 5%, not about 50%! To double-check, I ran a median-heuristic RBF kernel MMD myself on the MNIST test set with N_test = 100, repeating 1000 times, and rejected the null 4.8% of the time. My code is available at https://gist.github.com/anonymous/2993a16fbc28a424a0e79b1c8ff31d24 if you want to use it to help find the difference from what you did. 
Although Table 1 does indicate that the GAN distribution is more different from the test set than the test set is from itself, the apparent serious flaw in your procedure makes those results questionable. (Also, it seems that your entry labeled \"MMD\" in the table is probably n * MMD_b^2, which is what is computed by the code linked to in footnote 2.)\n\nThe appendix gives a further study of what went wrong with the MNIST GAN model, arguing based on nearest-neighbors that the GAN model is over-representing modes and under-representing the tails. This is fairly interesting; certainly more interesting than the rehash of running MMD tests on GAN outputs, in my opinion.\n\nMinor:\n\nIn 3.1.1, you say \"ideally the null hypothesis H0 should never be rejected\" – it should be rejected at most an alpha portion of the time.\n\nIn the description of section 3.2, you should clarify whether the train-test split was done such that unique passwords were assigned to a single fold or not: did 123456 appear in both folds? (It is not entirely clear whether it should or not; both schemes have possible advantages for evaluation.)", "This paper considers the question of how well GANs capture the true data distribution. The train GAN models on MNIST, CIFAR and a pass word dataset and then use two-kernel ample tests to assess how well the models have modeled the data distribution. They find that in most cases GANs don't match the true distribution.\n\nIt is unclear to me what the contribution of this paper is. The authors appear to simple perform experiments done elsewhere in different papers. I have not learned anything new by reading this work. Neither the method nor the results are novel contributions to the study of GANs. \n\nThe paper is also written in a very informal manner with several typos throughout. I would recommend the authors try to rewrite the work as perhaps more of a literature review + throughout experimentations of GAN evaluation techniques. In its current form I don't think it should be accepted. \n\nAdditional comments:\n- The authors claim GANs are able to perform well even when data is limited. Could the authors provide some examples to back up this claim. As far as I understand GANs require lots of data to properly train. \n- on page 3 the authors claim that using human assessments of GAN generated images is bad because humans have a hard time performing the density estimation (they might ignore tails of the distribution for example) .. I think this is missing up a bunch of different ideas.. First, a key questions is *what do we want our GANs for?* Density estimation is only one of those answers. If the goal is density estimation then of course human evaluation is an inappropriate measure of performance. But if the goal is realistic synthesis of thats then human perceptual measures are more appropriate. Using humans can be ban in other ways of course since they would have a hard time assessing generalizability (i.e. you could just sample training images and humans would think the samples looked great!). \n", "This paper tried to tell us something else about GANs except for their implicit generation power. The conclusion is GANs can capture some structure of the data in high dimensional space. \n\nTo me, the paper seems a survey paper instead of a research one. The introduction part described the involving of generative models and some related work about GANs. However, the author did not claim what the main contributions are. 
Even in Section 2, I can see nothing new beyond others' work. The experimental section included some simulation results, which seem odd to me since they are not closely related to the previous content. Moreover, Section 3.1.1, \"KERNEL TWO-SAMPLE TEST\", is something which has already been done in other papers [Li et al., 2017, Guo et al., 2017]. \n\nIt is suggested that the author should delete some of the parts describing their work and make clear claims about the main contributions of the paper. Meanwhile, the experimental results should support the claims. \n\n" ]
[ 3, 3, 3 ]
[ 5, 4, 5 ]
[ "iclr_2018_HkwrqtlR-", "iclr_2018_HkwrqtlR-", "iclr_2018_HkwrqtlR-" ]
iclr_2018_BJQPG5lR-
Avoiding degradation in deep feed-forward networks by phasing out skip-connections
A widely observed phenomenon in deep learning is the degradation problem: increasing the depth of a network leads to a decrease in performance on both test and training data. Novel architectures such as ResNets and Highway networks have addressed this issue by introducing various flavors of skip-connections or gating mechanisms. However, the degradation problem persists in the context of plain feed-forward networks. In this work we propose a simple method to address this issue. The proposed method poses the learning of weights in deep networks as a constrained optimization problem where the presence of skip-connections is penalized by Lagrange multipliers. This allows for skip-connections to be introduced during the early stages of training and subsequently phased out in a principled manner. We demonstrate the benefits of such an approach with experiments on MNIST, fashion-MNIST, CIFAR-10 and CIFAR-100 where the proposed method is shown to greatly decrease the degradation effect (compared to plain networks) and is often competitive with ResNets.
rejected-papers
Pros: + Interesting perspective on training deep networks. Cons: - Not a lot of practical significance: why would one want to use this algorithm over standard methods like ResNets or highway networks, given that the proposed algorithm is more complex than established methods?
test
[ "B1LHtBsEM", "ByISCAelf", "HkX303_ez", "SyaalQ9lM", "HJOgTRn7f", "HJe2nZnmf", "S1Hrk2cQf", "Bka78GFXG", "r1-JIMK7z", "BJERu9XXz", "HyFpv57XG", "H1C8Pcmmf", "HyqZ1Io1z", "HJaKBU9kG", "HJV2WiFyf", "BJiI_YZkG" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer" ]
[ "Thank you for your feedback as well as for increasing the score of the paper. We respond to the remaining comments below.\n\nSimilarity to Savarese et al., (2016)\nWe thank the reviewer for suggesting this paper, which we will reference in an updated manuscript. As noted by the reviewer, the similarity between our work and that of Savarese et al., (2016) occurs when \\lambda=0. This corresponds to the scenario where no constraints are not enforced on \\alpha (that is, skip-connections are not removed). The comparison to the \\lambda=0 case was included in order to study the capacity and flexibility of VAN networks without the need to satisfy the constraint to remove skip-connections. However, our main focus consisted in removing skip-connections (i.e., case where \\lambda \\neq 0), which is different from Savarese et al., (2016).\n\nContributions and motivation\nWe thank the reviewer for noting that the motivation has been clarified. In an updated manuscript we will further clarify that the motivation for this work is not to reduce the computational cost associated with skip-connections but instead to pose the learning of deep networks in the context of constrained optimization. The motivation behind the proposed method is to introduce skip-connections penalized by Lagrange multipliers into the architecture of our network. In this manner, skip-connections play an important role during the initial training of the network (e.g., by avoiding shattered gradients) and are subsequently removed in a principled manner. As noted by the reviewer, such an approach allows us to train deep plain networks (which do not contain skip-connections, as these are ultimately removed) without suffering degradation to the same extent as when ordinary training is employed. \n", "EDIT: The rating has been changed. See thread below for explanation / further comments.\n\nORIGINAL REVIEW: In this paper, the authors present a new training strategy, VAN, for training very deep feed-forward networks without skip connections (henceforth called VDFFNWSC) by introducing skip connections early in training and then gradually removing them. \n\nI think the fact that the authors demonstrate the viability of training VDFFNWSCs that could have, in principle, arbitrary nonlinearities and normalization layers, is somewhat valuable and as such I would generally be inclined towards acceptance, even though the potential impact of this paper is limited because the training strategy proposed is (by deep learning standards) relatively complicated, requires tuning two additional hyperparameters in the initial value of \\lambda as well as the step size for updating \\lambda, and seems to have no significant advantage over just using skip connections throughout training. So my rating based on the message of the paper would be 6/10.\n\nHowever, there appear to be a range of issues. As long as those issues remain unresolved, my rating is at is but if those issues were resolved it could go up to a 6.\n\n+++ Section 3.1 problems +++\n\n- I think the toy example presented in section 3.1 is more confusing than it is helpful because the skip connection you introduce in the toy example is different from the skip connection you introduce in VANs. In the toy example, you add (1 - \\alpha)wx whereas in the VANs you add (1 - \\alpha)x. Therefore, the type of vanishing gradient that is observed when tanh saturates, which you combat in the toy model, is not actually combated at all in the VAN model. 
While it is true that skip connections combat vanishing gradients in certain situations, your example does not capture how this is achieved in VANs.\n- The toy example seems to be an example where Lagrangian relaxation fails, not where it succeeds. Looking at figure 1, it appears that you start out with some alpha < 1 but then immediately alpha converges to 1, i.e. the skip connection is eliminated early in training, because wx is further away from y than tanh(wx). Most of the training takes place without the skip connection. In fact, after 10^4 iterations, training with and without skip connection seem to achieve the same error. It appears that introducing the skip connection was next to useless and the model failed to recognize the usefulness of the skip connection early in training.\n- Regarding the optimization algorithm involving \\alpha^* at the end of section 3: It looks to me like a hacky, unprincipled method with no guarantees that just happened to work in the particular example you studied. You motivate the choice of \\alpha^* by wanting to maximize the reduction in the local linear approximation to \\mathcal{C} induced by the update on w. However, this reduction grows to infinity the larger the update is. Does that mean that larger updates are always better? Clearly not. If we wanted to reduce the size of the objective according to the local linear approximation, why wouldn't we choose infinitely large step sizes? Hence, the motivation for the algorithm you present is invalid. Here is an example where this algorithm fails: consider the point (x,y,w,\\alpha,\\lambda) = (100, \\sigma(100), 1.0001, 1, 1). Here, w has almost converged to its optimum w* = 1. Correspondingly, the derivative of C is a small negative value. However, \\alpha* is actually 0, and this choice would catapult w far away from w*.\n\nIf I haven't made a mistake in my criticisms above, I strongly suggest removing section 3.1 entirely or replacing it with a completely new example that does not suffer from the above issues.\n\n+++ ResNet scaling +++\n\nThere is a crucial difference between VANs and ResNets. In the VAN initial state (alpha = 0.5), both the residual path and the skip path are multiplied by 0.5 whereas for ResNet, neither is multiplied by 0.5. Because of this, the experimental results between the two architectures are incomparable.\n\nIn a question I posed earlier, you claimed that this scaling makes no difference when batch normalization is used. I disagree. Let's look at an example. Consider ResNet first. It can be written as x + r_1 + r_2 + .. + r_B, where r_b is the value computed by residual block b. Now let's assume we insert a scaling constant after each residual block, say c = 0.5. Then the result is c^{B}x + c^{B-1}r_1 + c^{B-2}r_2 + .. + r_B. Therefore, contributions of lower blocks vanish exponentially. This effect is not combated by batch normalization.\n\nSo the learning dynamics for VAN and ResNet are very different because of this scaling. Therefore, there is an open question: are the differences in results between VAN and ResNet in your experiments caused by the removal of skip connections during training or by this scaling? Without this information, the experiments have limited value. 
In fact, I suspect that the vanishing of the contribution of lower blocks bears more responsibility for the declining performance of VAN at higher depths than the removal of skip connections.\n\nIf my assessment of the situation is correct, I would like to ask you to repeat your experiments with the following two settings: \n\n- ResNet where after each block you multiply the result of the addition by 0.5, i.e. x_{l+1} = 0.5\\mathcal{F}(x_l) + 0.5x_l\n- VAN with the following altered equation: x_{l+1} = \\mathcal{F}(x_l) + (1-\\alpha)x_l, i.e. please remove the alpha in front of \\mathcal{F}. Also, initialize \\alpha to zero. This ensures that VAN starts out as a regular ResNet.\n\n+++ writing issues +++\n\nTitle:\n\n- \"VARIABLE ACTIVATION NETWORKS: A SIMPLE METHOD TO TRAIN DEEP FEED-FORWARD NETWORKS WITHOUT SKIP-CONNECTIONS\" This title can be read in two different ways. (A) [Train] [deep feed-forward networks] [without skip-connections] and (B) [Train] [deep feed-forward networks without skip connections]. In (A), the `without skip-connections' modifies the `train' and suggests that training took place without skip connections. In (B), the `without skip-connections' modifies `deep feed-forward networks' and suggests that the network trained has no skip connections. You must mean (B), because (A) is false. Since it is not clear from reading the title whether (A) or (B) is true, please reword it.\n\nAbstract:\n\n- \"Part of the success of ResNets has been attributed to improvements in the conditioning of the optimization problem (e.g., avoiding vanishing and shattered gradients). In this work we propose a simple method to extend these benefits to the context of deep networks without skip-connections.\" Again, this is ambiguous. To me, this sentence implies that you extend the benefit of avoiding vanishing and exploding gradients to fully-connected networks without skip connections. However, nowhere in your paper do you show that trained VANs have less exploding / vanishing gradients than fully-connected networks trained the old-fashioned way. Again, please reword or include evidence.\n- \"where the proposed method is shown to outperform many architectures without skip-connections\" Again, this sentence makes no sense to me. It seems to imply that VAN has skip connections. But in the abstract you defined VAN as an architecture without skip connections. Please make this more clear.\n\nIntroduction:\n- \"Indeed, Zagoruyko & Komodakis (2016) demonstrate that it is better to increase the width of ResNets than the depth, suggesting that perhaps only a few layers are learning useful representations.\" Just because increasing width may be better than increasing depth does not mean that deep layers don't learn useful representations. In fact, the claim that deep layers don't learn useful representations is directly contradicted by the paper.\n\nsection 3.1:\n- replace \"to to\" by \"to\" in the second line\n\nsection 4:\n- \"This may be a result of the ensemble nature of ResNets (Veit et al., 2016), which does not play a significant role until the depth of the network increases.\" The ensemble nature of ResNet is a drawback, not an advantage, because it causes a lack of high-order co-adaptataion of layers. Therefore, it cannot contribute positively to the performance or ResNet.\n\nAs mentioned in earlier comments, please reword / clarify your use of \"activation function\". It is generally used a synonym for \"nonlinearity\", so please use it in this way. 
Change your claim that VAN is equivalent to PReLU. Please include your description of how your method can be extended to networks which do allow for skip connections.\n\n+++ Hyperparameters +++\n\nSince the initial values of \\lambda and \\eta' are new hyperparameters, include the values you chose for them, explain how you arrived at those values and plot the curve of how \\lambda evolves for at least some of the experiments.", "UPDATED COMMENT\nI've improved my score to 6 to reflect the authors' revisions to the paper and their response to my and R2's comments. I still think the work is somewhat incremental, but they have done a good job of exploring the idea (which is nice).\n\nORIGINAL REVIEW BELOW\n\nThe paper introduces an architecture that linearly interpolates between ResNets and vanilla deep nets (without skip connections). The skip connections are penalized by Lagrange multipliers that are gradually phased out during training. The resulting architecture outperforms vanilla deep nets and sometimes approaches the performance of ResNets.\n\nIt’s a nice, simple idea. However, I don’t think it’s sufficient for acceptance. Unfortunately, this seems to be a simple idea that doesn't work as well as the simpler idea (ResNets) that inspired it. Moreover, the experiments are weak in two senses: (i) there are lots of obvious open questions that should have been explored and closed, see below, and (ii) the results just aren’t that good. \n\nComments:\n\n1. Why force the Lag. multipliers to 1 at the end of training? It seems easy enough to treat the alphas as just more parameters to optimize with gradient descent. I would expect the resulting architecture to perform at least as well as variable action nets. If not, I’d be curious as to why.\n\n2.Similarly, it’s not obvious that initializing the multipliers at 0.5 is the best choice. The “looks linear” initialization proposed in “The shattered gradients problem” (Balduzzi et al) implies that alpha=0 may work better. Did the authors try any values besides 0.5? \n\n3. The final paragraph of the paper discusses extending the approach to architectures with skip-connections. Firstly, it’s not clear to me what this would add, since the method is already interpolating in some sense between vanilla and resnets. Secondly, why not just do it? \n\n", "Update (original review below):\nThe authors have addressed several of the reviewers' comments and improved the paper.\nThe motivation has certainly been clarified, but in my opinion it is still hazy. The paper does use skip connections, but the difference is that they are phased out over training. So I think that the motivation behind introducing this specific difference should be clear. Is it to save the additional (small) overhead of using skip connections?\nNevertheless, the additional experiments and clarifications are very welcome.\n\nFor the newly added case of VAN(lambda=0), please note the strong similarity to https://arxiv.org/abs/1611.01260 (ICLR2017 reviews at https://openreview.net/forum?id=Sywh5KYex). In that report \\alpha_l is a scalar instead of a vector. \n\nAlthough it is interesting, the above case case also calls into question the additional value brought by the use of constrained optimization, a main contribution of the paper.\n \nIn light of the above, I have increased my score since I find this to be an interesting approach, but in my opinion the significance of the results as they stand is low. 
The paper demonstrates that it is possible to obtain very deep plain networks (without skip connections) with improved performance through the use of constrained optimization that gradually removes skip connections, but the value of this demonstration is unclear because a) consistent improvements over past work or the \\lambda=0 case were not found, and b) The technique still relies on skip connections in a sense so it's not clear that it suggests a truly different method of addressing the degradation problem. \n\nOriginal Review\n=============\nSummary:\nThe contribution of this paper is a method for training deep networks such that skip connections are present at initialization, but gradually removed during training, resulting in a final network without any skip connections.\nThe paper first proposes an approach based on a formulation of deep networks with (non-parameterized, non-gated) skip connections with an equality constraint that effectively removes the skip connections when satisfied. It is proposed to optimize the formulation using the method of Lagrange multipliers.\nA toy model with a single unit is used to illustrate the basic ideas behind the method. Finally, experimental results for the task of image classification are reported using the MNIST, Fashion-MNIST, and CIFAR datasets.\n\nQuality and significance:\nThe proposed methodology is simple and straightforward. The analysis with the toy network is interesting and helps illustrate the method. However, my main concerns with this paper are related to motivation and experiments.\n\nThe motivation of the work is not clear at all. The stated goal is to address some of the issues related to the role of depth in deep networks, but I think it should be clarified which specific issues in particular are relevant to this method and how they are addressed. One could additionally consider that removing the skip connections at the end of training reduces the computational expense (slightly), but beyond that the expected utility of this investigation is very hazy from the description in the paper.\n\nFor MNIST and MNIST-Fashion experiments, the motivation is mentioned to be similar to Srivastava et al. (2015), but in that study the corresponding experiment was designed to test if deeper networks could be optimized. Here, the generalization error is measured instead, which is heavily influenced by regularization. Moreover, only some architectures appear to employ batch normalization, which is a potent regularizer. The general difference between plain and non-plain networks is very likely due to optimization difficulties alone, and due to the above issues further comparisons can not be made from the results. \n\nFor the CIFAR experiments, the experiment design is reasonable for a general comparison. Similar experimental setups have been used in previous papers to report that a proposed method can achieve good results, but there is no doubt that this does not make a rigorous comparison without employing expensive hyper-parameter searches. This is not the fault of the present paper but an unfortunate tradition in the field. Nevertheless, it is important to note that direct comparison should not be made among approaches with key differences. For the reported results, Fitnets and Highway Networks did not use Batch Normalization (which is a powerful regularizer) while VANs and Resnets do. 
Moreover, it is important to report the training performance of deeper VANs (which have a worse generalization error) to clarify if the VANs suffered difficulties in optimization or generalization.\n\nClarity:\nThe paper is generally well-written and easy to read. There are some clarity issues related to the use of the term \"activation function\" and a typo in an equation but the authors are already aware of these.", "We thank the reviewers for all their helpful comments. We have now posted an updated manuscript incorporating the comments of reviewers. We have responded to each reviewer below. We highlight the main changes to the manuscript here:\n\nExperiments:\nWe have run additional experiments as suggested by reviewers. In particular, we have run experiments in which the equality constraint on \\alpha is not enforced (as requested by AnonReviewer1). This comparison is included in order to study the capacity and flexibility of VAN networks without the constraint to remove skip-connections. Following suggestions from AnonReviewer2 we have also reformulated the residual block structure of the proposed method, leading to improved results. This reformulation also provides a clear choice for the initialization of \\alpha values, which was a concern for AnonReviewer1 (we now initialize \\alpha to 0, ensuring that VANs are equivalent to ResNets at initialization). \n\nMotivation:\nThe motivation for the proposed work was unclear in the original submission. As a result, we have updated the manuscript to clearly state our motivation, which is to address the degradation problem in deep feed-forward networks. While ResNets and Highway networks address this issue by introducing skip-connections or gating mechanisms, we look to tackle the problem from the perspective of constrained optimization. As such, the objective of our work is to propose a new training regime for plain networks which explicitly addresses the issue of performance degradation with depth for plain feed-forward networks. \n", "ok", "Thank you for the quick response - we have uploaded an updated manuscript incorporating the suggested changes. Please find our response to comments below. \n\nChoice of parameters:\nThe choice of the training schedule and learning rate were taken directly from He et al., (2016). In this paper, the authors state: “We start with a learning rate of 0.1, divide it by 10 at 32k and 48k iterations, and terminate training at 64k iterations” where we take each iteration to refer to one batch of training. Based on a training set of 50k samples and a batch size of 128 this implies that each epoch corresponds to approximately 50k/128 = 390 iterations. Therefore 32k, 48k and 64k iterations correspond roughly to 82, 125 and 165 epochs.\n\nChoice of VAN-specific hyper-parameters\nThe choice of VAN hyper-parameters was based on a small number of experiments (as noted by AnonReviewer3, we did not run extensive hyper-parameter searches). \n\nCIFAR-100 experiments\nOur goal through the experiments was to demonstrate that degradation did not occur with VANs. In the case of the CIFAR100 experiments, the degradation of plain networks is already clear for networks of depth 26 and 34 (we think this may be due to the reduced number of training examples per class). As such, we did not run experiments for deeper architectures on CIFAR-100.
\n\nAxis of right panel in Figures 2, 3\nWe thank the reviewer for spotting this - this scaling was because we had accidentally divided by the number of residual blocks (this was fixed at 4 for all experiments). This has been corrected. \n\nMinor points\nWe thank the reviewer for the suggestions, most of which have been incorporated into the updated manuscript.\n\n\nReferences:\nHe, Kaiming, et al. \"Deep residual learning for image recognition.\" Proceedings of the IEEE conference on computer vision and pattern recognition. 2016.\n", "\nSome minor points I noticed while reading the updated manuscript:\n\n- \"...is shown to greatly decrease the degradation effect (compared to plain networks) and is often competitive with ResNets.\" This sentence suggests that a VAN is not considered by the authors to be a \"plain network\", but a seperate architecture. \"However, the degradation problem persists in the context of plain feed-forward networks. In this work we propose a simple method to address this issue.\" This part suggests that VAN's are considered by the authors to be a \"plain network\", and VAN to be merely an optimization \"method\". I suggests the authors decide whether they want to frame VAN as an architecture seperate from ResNet, HighwayNet and plain networks, or as an optimization method for plain networks and then use this framing throughout the paper.\n- \"even within training data\". replace this with \"even when the network is applied to training data\"\n- \"ResNets can be considered a special case of Highway Networks\" not in the way you defiend ResNet and Highway networks. (1) is not a special case of (2) as the highway network cannot represent the skip path's multiplication with W'.\n- While I'm flattered by the inclusion of footnote 2, I don't think it is necessary / appropriate to acknowledge me in this way. In theory, reviewers are supposed to be helpful.\n- \"In all networks the first layer was a fully connected plain layer followed by l layers and a final softmax layer.\" What do you mean by \"l layers\"? Do you mean \"l residual blocks\"? Please clarify.\n- \"In the case of ResNets and VAN, the residual function consisted of batch-normalization followed by ReLU.\" This sounds like it does not include a linear transformation, but I assume it does. Also, is the linear transformation applied before or after those two operations? Please clarify.\n- replace \"gradual removed\" with \"gradual removal\"\n- \"This provides evidence that the gradual removed of skip-connections via Lagrange multipliers leads to improved generalization performance\" compared to what?\n- \"Finally, we note that VAN networks obtain competitive results across all depths. This is particularly true in the context of shallow networks (l ≤ 10 layers) where VAN networks obtain competitive test accuracies.\" You seem to contradict yourself here. First you say that VAN is completitive at all depths and then you insinuate that it is only competitive for l <= 10. \n- \"Crucially, we note that VAN networks outperform plain networks in this context, \" It looks to me that VAN outperforms plain networks in all contexts, i.e. for all values of l.\n- \"As a more realistic benchmark we consider the CIFAR-10 and CIFAR-100 datasets.\" It sounds like you are declaring MNIST / Fashion-MNIST \"unrealistic\". I don't think this is necessary as I don't think CIFAR is more realistic than MNIST. 
Digit recognition is a valid field of application for deep networks.\n- \"with a fully connected softmax layer\" Do you mean fully-connected layer FOLLOWED BY a softmax layer or do you mean JUST a softmax layer which you describe as fully-connected? Please clarify.\n- in table 2, please also include the more recent benchmarks wide resnet and densenet\n- \"This manuscript presents a simple method for training deep feed-forward networks without suffering from the degradation problem.\" Doesn't it still suffer from the degradation problem, albeit to a lesser extent?\n- \"Throughout a series of experiments we demonstrate that the performance of VAN networks is stable as network depth increases\" really? I thought it degrades slightly. That does not seem to me to be the same as being stable.\n- I don't think you need to include Appendix A, unless you are making an argument that the alternative formulation with the 0.5's has value independent of the formulation used in the main paper. I don't think you are trying to make that case. In my original review, while I suggested you run the experiments shown in Appendix A, this was more for my and your benefit in terms of understanding what's going on. I don't think the 0.5 formulation needs to appear in the paper.\n- replace \"by the currently value\" by \"by the current value\"\n- fix grammar in \"The left panel plots the mean Lagrange multiplier, λl are shown for networks of varying depth.\"", "Dear Authors,\n\nThank you for your detailed response and updated manuscript. I changed my rating to 5, with the rating becoming a 6 if (A) you explain how you arrived at your hyperparameter choices and (B) if that explanation is reasonable, i.e. does not reveal weaknesses in statistical validity. Where did the numbers 165 / 82 / 125 come from in the third paragraph of section 4.2? How where VAN-specific hyperparameters (\\eta' and initial \\lambda) chosen?\n\nAlso I just noticed that for CIFAR-10, you considered much deeper networks than for CIFAR-100. Why is that? Can you include results for, say, 80-layer nets in CIFAR-100?\n\nThere seems to be something wrong with figure 2 (right graph). The gold curve seems to start at 0.25, but it should start at 1 under the new formulation of VAN where the \\alpha are initialized to 0.\n\nI read the reviews written by the other reviewers and your rebuttal to them. While I do not presume to speak for the other reviewers, I think their concerns were addressed well in both the rebuttals and the revised version. However, I understand that if the other reviewers do not opt to increase their ratings in response to those rebuttals, it may not be worth your time to respond to this comment or upload another revision. I just wanted to let you know that I would not be offended if you did not respond to this comment or upload another revision.", "We thank the reviewer for providing a thorough and thoughtful review. We have made many of the changes suggested in the review, leading to an improved manuscript in the process. Below we respond to each of the comments in turn.\n\nToy problem: \nThe reviewer raised important issues relating to the toy problem and its relevance to the proposed framework. As such, we have removed this motivating example from the updated manuscript.\n\nResNet scaling: \nWe thank the reviewer for alerting us to an important shortcoming of the proposed method. 
Following the reviewers suggestion, we have run experiments with the following residual block structure:\n\t\t\\mathcal{F}(x_l, W_l) + (1 - \\alpha_l) x_l\nFurther, as suggested by the reviewer we have initialized with \\alpha=0 resulting in networks that were equivalent to ResNets at initialization. \nOur experiments suggest that this formulation leads to improved results, especially in the context of very deep networks. Further, we have also run the ResNet scaling experiment suggested by the reviewer and find similar degradation in performance for deep networks. These results, which are reported in Appendix A, provide evidence for the reviewers hypothesis that modulating the \\mathcal{F} by some \\alpha_l \\neq 1 leads to a vanishing contribution from shallow blocks. \n\nWriting issues:\nWe have the following changes:\n - Title: As suggested by the reviewer, we have amended the title to clearly resemble the goal of the proposed method and our contribution. The new title is: “Avoiding degradation in deep feed-forward networks by phasing out skip-connections”. \n - Abstract: The abstract has also been re-written in order to clarify the objectives and contributions of our work.\n - Introduction, claim about relationship between width and depth: the paragraph in question has been removed as it was not relevant to the goals of the proposed method. \n - Section 4, comment about ensemble nature of ResNets: Whether the ensemble nature of ResNets is an advantage of disadvantage is unclear. While boosting will typically lead to better performance, the reviewer notes that the ensemble nature of ResNets may actually be detrimental as it leads to co-adaptation of features. In order to avoid entering into this discussion we have removed this sentence. \n - Use of activation function/non-linearity: this has been clarified throughout. \n\nHyper-parameters: \nwe have updated the manuscript to clearly state the choice of hyper-parameters. In particular, we now clearly state the choice of \\eta’ and provide traces for the Lagrange multipliers in Appendix B. \n", "We thank the reviewer for raising important issues. We respond to each below.\n\nNot enforcing constraint on alphas: \nWe have run the additional experiments suggested by the reviewer. As expected, when no constraint is enforced on the alphas (this corresponds to setting the Lagrange multipliers to 0 in equation (6)), the performance of VAN networks does indeed improve. These additional experiments have been included in the updated version of the manuscript (see Figures 2 and 3 as well as Table 2). \n\nInitialization of alpha: \nOur original experiments focused on the choice of alpha=0.5 at initialization. However, the reviewer correctly notes that such an initialization will not necessarily be optimal. Following the suggestion of AnonReview2 we have reformulated the VAN equation as follows:\n\t\t\\mathcal{F}(x_l) + (1 - \\alpha_l) x_l \nAnd initialize with alpha_l=0. This ensures that VANs are equivalent to ResNets at initialization (while this was not previously the case). More importantly, such a reparameterization leads to improved performance - an empirical comparison is provided in Appendix A. \n\nVANs with skip-connections: \nthe reviewer correctly notes that it is intuitively obvious how to combine VAN residual blocks with architectures that contain skip-connections. One way the two could be combined would be by not enforcing the constraint on \\alpha_l in VANs (as we have done in response to a previous comment). 
We have removed this sentence from the manuscript. \n", "We thank the reviewer for raising three important points, which we respond to below\n\nMotivation: \nWe agree with the reviewer that the motivation of the proposed work was unclear in the original submission. As a result, we have updated the manuscript to clearly state out motivation: which is to address the degradation problem in deep feed-forward networks. While ResNets and Highway networks address this issue by introducing skip-connections or gating mechanisms, we look to tackle the problem from the perspective of constrained optimization. As such, the objective of the our work is to propose a new training regime for plain networks which explicitly addresses the issue of performance degradation with depth for plain feed-forward networks. \n\nIn order to clearly reflect our motivation and contribution we have amended the title of the manuscript as well as clarified the abstract and the introduction. \n\n\nRegarding MNIST and MNIST-Fashion experiments: \nBatch-normalization was employed across all architectures in the MNIST experiments (although we do acknowledge that the original Highway network experiments did not use batch norm). We have updated the manuscript to clearly state this. As a result, we do believe that comparisons across the different architectures are valid on this experiment. \nWhile Srivastava et al. (2015) reported the training cross-entropy, it is unclear to us why reporting the generalization performance does not serve as an indication of successful optimization across various networks given that all networks employed batch normalization.\n\nCIFAR experiments:\nThe reviewer correctly notes that extensive hyper-parameter searches where not run for the CIFAR experiments. This is because the objective of the experiments in Section 4 was not to achieve state-of-the-art performance but rather to demonstrate that VAN networks do not suffer from the degradation problem to the same extent as plain feed-forward networks. We feel that these experiments serve to validate this claim.\nRegarding comparisons with alternative architectures (e.g., FitNets and Highway networks) we have amended Table 2 to state which architectures did and did not use batch-normalization. \n", "Thank you for the further questions, please see our replies below. \n\n1) Relation to parametric ReLU\n\nWe agree that our language is ambiguous and will update the manuscript to avoid conflating 'activation function' with \\mathcal{F} (note that currently we are unable to upload a new version). \n\nRegarding the various interpretations of line 4 p4, the reviewers final interpretation is correct and we agree that there is only a correspondence between the proposed method and parametric ReLUs when W is the identity. Our intention was to highlight a similarity with the parametric ReLU. However, we note that our claims regarding the gradient still holds; the gradient with respect to \\alpha will be simple and computationally cheap. \n\n2) Extension to networks with skip-connections\n\nOne possible extension would be to enforce the constraint in equation (5) with an inequality where we constrain the total budget on \\alpha_l across all layers. This would lead to skip-connections being removed across some layers and retained across others, providing insights into where in a network skip-connections are most important. \n\n", "Thank you for your responses so far. 
I need to ask a few more question before writing my final review.\n\n\"Throughout the experiments we will often take the nonlinearity in Fl to be ReLU in which case the activation function corresponds to the parametric ReLU, for which the term @C can be efficiently computed (He et al., 2015).\"\n\nFirstly, please try to avoid using 'activation function' to refer to \\mathcal{F} as it is generally considered to be a synonym for 'nonlinearity'. However, if you do use 'activation function' to refer to \\mathcal{F}, then the statement \"the activation function corresponds to the parametric ReLU\" makes no sense, because parametric ReLU is a nonlinearity. I take it \\mathcal{F} is not just composed of a nonlienarity, but also of convolutional and / or normalization layers.\n\nHowever, even if I take the statement \"the activation function corresponds to the parametric ReLU\" to mean \"\\mathcal{F} corresponds to a sequence of operations, the last of which is parametric ReLU\", it still makes no sense because the parameter alpha does not impact \\mathcal{F}. In equation (4), it is only applied after \\mathcal{F} is computed. So I take it you use 'activation function' to refer to function that computes x_{l+1} from x_l in this instance? Again, this is ambiguous.\n\nBut even if we take 'activation function' to mean 'the function that computes x_{l+1} from x_l' and thus \"the activation function corresponds to the parametric ReLU\" to mean \"the function that computes x_{l+1} from x_l corresponds to a sequence of operations, the last of which is parametric ReLU\", it is still false if equation (4) is true. Let's assume that \\mathcal{F} = ReLU(Wx) for some W. Then we have x_{l+1} = (1-\\alpha)*x_l + \\alpha*ReLU(Wx_l) according to (4). But this is not the same as PReLU(Wx_l). In fact, we have PReLU(Wx_l) = (1-\\alpha)*Wx_l + \\alpha*ReLU(Wx_l).\n\nPlease clarify.\n\n\n\"However, the proposed method is quite general and can be readily extended to networks which do allow for skip-connections.\" \n\n\nHow?\n\n\n\n\nThanks,\n\n", "Dear Reviewer,\n\nThank you for your questions. We respond to each of them below.\n\n1) Use of Lagrange multipliers \n\nThe objective of the proposed method is to extend some of the benefits associated with skip-connections to the context of networks without skip-connections. We propose to do this by working in the framework of constrained optimisation. As such, the use of Lagrange multipliers is not an illustration but a tool we employ in order to solve equation (5), which is the true objective. Equation (6) is the associated Lagrangian, which combines the objective in equation (5) with a term enforcing the constraint. Our updates then seek a saddle point of this Lagrangian, by minimizing it with respect to the parameters W and \\alpha, and maximizing with respect to the Lagrange multipliers \\lambda.\n\n2) + and - signs in equations (7) and (8):\n\nAs stated above, the updates should minimize the Lagrangian with respect to \\alpha and W, while maximizing it with respect to \\lambda. We thank the reviewer for pointing out the mistake in equation (7). It should read as:\n \\alpha_l \\leftarrow \\alpha_l - \\eta ( \\frac{\\partial C}{ \\partial \\alpha_l } + \\ \\lambda_l )\n\nHowever, with respect to equation (8), the plus sign is indeed correct. This is because we wish to maximise with respect to the Lagrange multipliers (in order to enforce the equality constraint with increasing severity). 
The signs in the equations in the section on the toy model are also correct; the reason they look different is because the constraint has been (equivalently) encoded as 1 - \\alpha there, rather than as \\alpha - 1 in equation (6). \n\n3) Initialization of \\lambda \n\nThroughout our experiments we initialized \\lambda to -1. Because we enforced that \\alpha remain in [0,1], all updates to \\lambda were <=0. These two facts ensured that \\lambda remained negative. An updated version of the manuscript will reflect this. However, as noted in the paragraph after equation (8), the nature of the \\lambda updates dictates that \\lambda will be monotonically decreasing. Thus even if we initialized \\lambda to be positive, subsequent updates would push it towards negative values.\n\n4) Relation between VAN at \\alpha=.5 and ResNets \n\nThe reviewer is correct in noting that when \\alpha=.5 there is a similarity between ResNets and VANs, but the activation is not exactly the same (as the VAN activation is scaled by 0.5). We will correct our language to reflect this. However, what we intended to highlight was that the balance between non-linearity and skip-connection was equal (as in a ResNet) whereas for arbitrary \\alpha, equation (4) defines a sum where the non-linearity and the skip-connection may not necessarily receive equal weights. Further, when batch normalisation is employed the activation functions are effectively the same as the effects of scaling are removed. \n\n5) Initialization of \\alpha for CIFAR experiments \n\nWe initialized \\alpha=0.5 - see the final paragraph on page 6. This initialization was used throughout all experiments; we will make this clear in the updated manuscript.\n\n6) Activation function and non-linearity. \n\nWe agree with the reviewer that our language regarding activation function and non-linearity could be more clear. When we say \"activation function\" we refer to \\mathcal{F}. In the experiments section we make statements such as \"the ReLU activation function ...\" when we should say \"the ReLU non-linearity\". We will correct this in the updated version of the manuscript. \n\n\nThanks for you questions - we hope to have clarified them. If you have any more questions (or need further details) please let us know :) \n\n", "Dear authors, I am one of the reviewers and would like to ask a couple of question. In equation (6), you show the Lagrangian of the constrained problem. Is there a deeper reason why you do this? Or are you simply using the term in equation (6) as your objective function and the fact that it is the Lagrangian is merely an illustration? The reason I ask is that it's been a while since I took Optimization 101 :), meaning I would have to read up on the Lagrangian to remind myself of the associated theory. If you let me know that understanding Lagrangian optimization is necessary for the paper, I will go and do that. If not, I won't. \n\nAlso, there seems to be something wrong with + and - signs throughout section 3. If you take the derivative of (6) with respect to \\alpha and \\lambda, one does not get out (7) and (8). It seems that in (7) the second minus should be a plus and in (8), the first plus should be a minus. The exact same problem occurs at the bottom of page 4 for the toy example.\n\nAlso, to what value do you initialize \\lambda? 
For equation (6) to make sense, \\lambda has to be initialized to negative values so that the term that is added to \\mathcal{C} is positive, and it needs to be positive as it is a penalty ...\n\nAlso, you say \" In the case of VAN networks the α values were initialized to 0.5 · 1 for all layers. As such, during the initial stages of training VAN networks had the same activation function as ResNets.\" I don't understand this. In the initial state, the weight of the residual path and skip path for ResNet is both 1. But in VAN, the weight of the residual path and the skip path are both 0.5. Hence, as far as I can tell, they are *not* the same. Also, how did you initialize \\alpha for the CIFAR experiments?\n\nAlso, what do you mean by \"activation function\". Do you mean \\mathcal{F} or do you mean just the nonlinearity? You seem to be using \\mathcal{F} for both, which is confusing.\n\n" ]
[ -1, 6, 6, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, 4, 4, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "SyaalQ9lM", "iclr_2018_BJQPG5lR-", "iclr_2018_BJQPG5lR-", "iclr_2018_BJQPG5lR-", "iclr_2018_BJQPG5lR-", "S1Hrk2cQf", "r1-JIMK7z", "BJERu9XXz", "BJERu9XXz", "ByISCAelf", "HkX303_ez", "SyaalQ9lM", "HJaKBU9kG", "iclr_2018_BJQPG5lR-", "BJiI_YZkG", "iclr_2018_BJQPG5lR-" ]
iclr_2018_B1D6ty-A-
Training Autoencoders by Alternating Minimization
We present DANTE, a novel method for training neural networks, in particular autoencoders, using the alternating minimization principle. DANTE provides a distinct perspective in lieu of traditional gradient-based backpropagation techniques commonly used to train deep networks. It utilizes an adaptation of quasi-convex optimization techniques to cast autoencoder training as a bi-quasi-convex optimization problem. We show that for autoencoder configurations with both differentiable (e.g. sigmoid) and non-differentiable (e.g. ReLU) activation functions, we can perform the alternations very effectively. DANTE effortlessly extends to networks with multiple hidden layers and varying network configurations. In experiments on standard datasets, autoencoders trained using the proposed method were found to be very promising when compared to those trained using traditional backpropagation techniques, both in terms of training speed, as well as feature extraction and reconstruction performance.
rejected-papers
Pros:\n+ Interesting alternative algorithm for training autoencoders\nCons:\n- Not a lot of practical value because DANTE does not outperform SGD in terms of time or classification performance using autoencoder features.\n\nThis is an interesting and well-written paper that doesn't quite meet the threshold for ICLR acceptance. If the authors can find use cases where DANTE has demonstrable advantages over competing training algorithms, I expect the paper would be accepted.
train
[ "HJTgwzPVM", "H1Q0pKwlf", "HJJaJoveM", "Syrb4g5gz", "SyFN3vmNf", "ry_SthoGG" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author" ]
[ "We thank you for going through our revised draft, and sharing your concern. \n\nIn our setup for Theorem A.3, we have a single *multi-output* layer, so the label set given x is given by (y being a vector):\n\n y = [ max{0, cx}, max{0,dx} ]\n\nAssuming the setup mentioned in the comment with a,b,c and d as scalars, we would then have a setup with one input node, and two output nodes, where (a, b) are the current weights, and (c,d) is the optimal solution. The error is then given by: (\\phi(a,b,x) - y)^2 (where \\phi is applied elementwise).\nThe error derivative at (a,b) is therefore:\n\n\tG = [ (max{0, ax} - max{0, cx} ) * Ind{ax>0} x, (max{0, bx} - max{0, dx}) * Ind{bx>0} x ]\n\nThe Frobenius inner product is then given by:\n\n <G, (a,b) - (c, d) >_F = (a-c)(max{0, ax}- max{0, cx}) *Ind{ax>0}x + (b-d)(max{0, bx}- max{0, dx})*Ind{bx>0}x\n\nThis is always >= 0 (each term in the sum is >= 0).\n\nAnother way of looking at a single multi-output layer is that each output and the weights with which it is associated is, in fact, our original GLM problem which we have shown to be SLQC in Theorem 3.4. The separate values of <G, w_i -v_i > are >= 0 and therefore the sum is also >=0.\n", "After reading the rebuttal:\n\nThe authors addressed some of my theoretical questions. I think the paper is borderline, leaning towards accept.\n\nI do want to note my other concerns:\n\nI suspect the theoretical results obtained here are somewhat restricted to the least-squares, autoencoder loss. \n\nAnd note that the authors show that the proposed algorithm performs comparably to SGD, but not significantly better. The classification result (Table 1) was obtained on the autoencoder features instead of training a classifier on the original inputs. So it is not clear if the proposed algorithm is better for training the classifier, which may be of more interest.\n\n=============================================================\n\nThis paper presents an algorithm for training deep neural networks. Instead of computing gradient of all layers and perform updates of all weight parameters at the same time, the authors propose to perform alternating optimization on weights of individual layers. \n\nThe theoretical justification is obtained for single-hidden-layer auto-encoders. Motivated by recent work by Hazan et al 2015, the authors developed the local-quasi-convexity of the objective w.r.t. the hidden layer weights for the generalized RELU activation. As a result, the optimization problem over the single hidden layer can be optimized efficiently using the algorithm of Hazan et al 2015. This itself can be a small, nice contribution.\n\nWhat concerns me is the extension to multiple layers. Some questions are not clear from section 3.4:\n1. Do we still have local-quasi-convexity for the weights of each layer, when there are multiple nonlinear layers above it? A negative answer to this question will somewhat undermine the significance of the single-hidden-layer result.\n\n2. Practically, even if the authors can perform efficient optimization of weights in individual layers, when there are many layers, the alternating optimization nature of the algorithm can possibly result in overall slower convergence. Also, since the proposed algorithm still uses gradient based optimizers for each layer, computing the gradient w.r.t. lower layers (closer to the inputs) are still done by backdrop, which has pretty much the same computational cost of the regular backdrop algorithm for updating all layers at the same time. 
As a result, I am not sure if the proposed algorithm is on par with / faster than the regular SGD algorithm in actual runtime. In the experiments, the authors plotted the training progress w.r.t. the minibatch iterations, I do not know if the minibatch iteration is a proxy for actual runtime (or number of floating point operations).\n\n3. In the experiments, the authors found that the network optimized by the proposed algorithm generalizes better than regular SGD. Is this result consistent (across dataset, random initializations, etc), and can the authors elaborate on the intuition behind it?\n", "The authors propose an alternating minimization framework for training autoencoders and encoder-decoder networks. The central idea is that a single encoder-decoder network can be cast as an alternating minimization problem. Each minimization problem is not convex but is quasi-convex and hence one can use stochastic normalized gradient descent to minimize w.r.t. each variable. This leads to the proposed algorithm, called DANTE, which simply uses the stochastic normalized gradient algorithm to minimize w.r.t. each variable. The authors start with this idea and introduce a generalized ReLU, which is specified via a subgradient function only and whose local quasi-convexity properties are established. They then extend these ideas to multi-layer encoder-decoder networks by performing greedy layer-wise training and using the proposed algorithms for training each layer. The ideas are interesting, but I have some concerns regarding this work.\n\nMajor comments:\n\n1. When dealing with a 2 layer network, there are 2 matrices W_1, W_2 to optimize over. It is not clear to me why optimizing over W_1 is a quasi-convex optimization problem? The authors seem to use the idea that solving a GLM problem is a quasi-convex optimization problem. However, optimizing w.r.t. W_1 is definitely not a GLM problem, since W_1 undergoes two non-linear transformations, one via \\phi_1 and another via \\phi_2. Could the authors justify why minimizing w.r.t. W_1 is still a quasi-convex optimization problem?\n\n2. Theorem 3.4, 3.5 establish SLQC properties with generalized RELU activations. This is an interesting result, and useful in its own right. However, it is not clear to me why this result is even relevant here. The main application of this paper is autoencoders, which are functions from R^d -> R^d. However, GLMs are functions from R^d ---> R. So, it is not at all clear to me how Theorem 3.4, 3.5 and eventually 3.6 are useful for the autoencoder problem that the authors care about. Yes they are useful if one was doing 2-layer neural networks for binary classification, but it is not clear to me how they are useful for autoencoder problems.\n\n3. Experimental results for classification are not convincing enough. If one looks at Table 1, SGD outperforms DANTE on the ionosphere dataset and is competitive with DANTE on MNIST and USPS. \n\n4. The results on reconstruction do not show any benefits for DANTE over SGD (Figure 3). I would recommend that the authors rerun these experiments but truncate the iterations early enough.
If DANTE has better reconstruction performance than SGD with fewer iterations then that would be a positive result.", "In this paper an alternating optimization approach is explored for training Auto Encoders (AEs).\nThe authors treat each layer as a generalized linear model, and suggest to use the stochastic normalized GD of [Hazan et al., 2015] as the minimization algorithm in each (alternating) phase.\nThen they apply the suggested method to several single layer and multi layer AEs, comparing its performance to standard SGD. The paper suggests an interesting approach and provides experimental evidence for its usefulness, especially for multi-layer AEs.\n\n\nSome comments on the theoretical part:\n-The theoretical part is partly misleading. While it is true that every layer can be treated as a generalized linear model, the SLQC property only applies for the last layer.\nRegarding the intermediate layers, we may indeed treat them as generalized linear models, but with non-monotone activations, and therefore the SLQC property does not apply.\nThe authors should mention this point.\n\n-Showing that generalized ReLU is SLQC with a polynomial dependence on the domain is interesting. \n\n-It will be interesting if the authors can provide an analysis/relate to some theory related to alternating minimization of bi-quasi-convex objectives. Concretely: Is there any known theory for such objectives? What guarantees can we hope to achieve?\n\n\nThe extension to multi-layer AEs makes sense and seems to work quite well in practice.\n\nThe experimental part is satisfactory, and seems to be done in a decent manner. \nIt will be useful if the authors could relate to the issue of parameter tuning for their algorithm.\nConcretely: How sensitive/robust is their approach compared to SGD with respect to hyperparameter misspecification.\n", "Dear Authors,\nI believe that theorem A.3 that you added regarding the SLQC property of intermediate layers is incorrect.\n\nI could not spot the mistake in the proof, but I came up with a very simple example showing that the SLQC property cannot hold.\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%\nConsider the following example:\n-Assume a,b,c,d are scalars.\n-Also assume that we are in the realizable case. Meaning given x its label is set as follows,\n y = max{0,cx}+max{0,dx}\n\n-For a solution (a,b) and point x we denote, \\phi(a,b,x) = max{0,ax}+max{0,bx}\n-We also denote by Ind{A} the indicator function of an event A.\n\n-Now consider a solution (a,b)\nThen the error of this solution at a point x will be, error = (\\phi(a,b, x) - y)^2,\n-The derivative at (a,b) is therefore,\nG = (\\phi(a,b, x) - y) * ( Ind{ax>0}x, Ind{bx>0}x)\n\n-Now, \n<G, (a,b) -(c, d) > = (\\phi(a,b, x) - y) *(Ind{ax>0} *(a-c)*x + Ind{bx>0}*(b-d)*x)\n \n-Let's make the following simplifying assumptions,\nax>0, bx>0, cx>0, dx<0\n-In this case we get,\n <G, (a,b) -(c, d) > = ( (a-c)*x + bx)* ( (a-c)*x + (b-d)*x)\n = ( (a-c)*x + bx)^2 -dx * ( (a-c)*x + b*x)\n\n-Now if we assume that (a-c)*x + bx<0 and also dx< (a-c)*x + bx \nThis implies that, <G, (a,b) -(c, d) > <0 \n*And therefore the SLQC property does not apply in this case!\n*Note that since dx<0 and bx>0 then the error = (\\phi(a,b, x) - y)^2, might be very large (much larger than \\epsilon)\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\nHaving said that I still believe that the paper is interesting and I keep my score.\n\n\n", "We thank the reviewers for acknowledging our contributions and sharing their feedback.
Please find our responses below (we have also uploaded a revised paper draft with the appropriate content based on these responses):\n\nR3: “every layer can be treated a generalized linear model...the SLQC property only applies for the last layer…”\nR2: “Do we still have local-quasi-convexity for the weights of each layer, when there are multiple nonlinear layers above it?”\n\nThe SLQC property does hold for the intermediate layers as well. It is important to note, however, that the SLQC property holds with respect to the input to the corresponding intermediate layer. (Section 3.3 in the latest version of the paper clarifies this.)\n\nR1: “Experimental results for classification are not convincing enough...Table 1….SGD outperforms….and is competent with DANTE...”\nR1: “The results on reconstruction do not show benefits for DANTE (over SGD)…”\nR2: “In the experiments, the authors found the network optimized by the proposed algorithm generalize better than regular SGD. Is this result consistent (across dataset, random initializations, etc)?”\n\nWe would like to clarify that we propose DANTE (and thus, an alternating minimization strategy) as a competitive alternative to backpropagation-based SGD, and our results corroborate this claim. While this was conveyed in our earlier version too, we have revised any choice of words that may have suggested otherwise. This comparable performance is consistent across all our experiments and studies.\n\nR2: “In the experiments, the authors plotted the training progress w.r.t. the minibatch iterations, I do not know if the minibatch iteration is a proxy for actual runtime (or number of floating point operations).”\nMinibatch iterations is in fact a proxy for actual runtime in these experiments, and on measuring the time taken for the experiments in Figure 2, we found the times taken are indeed comparable.\n\nR1: “When dealing with a 2 layer network where there are 2 matrices W_1, W_2 to optimize over. It is not clear to me why optimizing over W_1 is a quasi-convex optimization problem? The authors seem to use the idea that solving a GLM problem is a quasi-convex optimization problem. However, optimizing w.r.t. W_1 is definitely not a GLM problem, since W_1 undergoes two non-linear transformations one via \\phi_1 and another via \\phi_2. Could the authors justify why minimizing w.r.t. W_1 is still a quasi-convex optimization problem?”\n\nR1: “...autoencoders, which are functions from R^d -> R^d’. However, GLMs are functions from R^d ---> R. So, it is not at all clear to me how Theorem 3.4, 3.5 and eventually 3.6 are useful for the autoencoder problem...” \n\nWe have revised Section 3.3 (as well as included additional results in our Appendix) to clarify these questions. \n\nIn the DANTE algorithm (Algorithm 2 of paper), it is evident that each node of the output layer presents a GLM problem (and hence, SLQC) w.r.t. the corresponding weights from W_2. We show in Appendices A.2 and A.3 how the entire layer is SLQC w.r.t. W_2, by generalizing the definition of SLQC to matrices. In case of W_1, while the problem may not directly represent a GLM, we show in Appendix A.3 that our generalized definition of SLQC to functions on matrices allows us to prove that Step 4 of Algorithm 2 is also SLQC w.r.t. W_1, thus allowing us to use Theorems 3.4, 3.5 and 3.6 for our formulation.\n\nR3: “It will be interesting if the authors can provide an analysis/relate to some theory related to alternating minimization of bi-quasi-convex objectives. 
Concretely: Is there any known theory for such objectives? What guarantees can we hope to achieve?”\n\nWe are definitely interested in this question ourselves, and this will form an important direction of our future work. To the best of our knowledge, there are no such known guarantees for this setting. \n" ]
[ -1, 6, 4, 7, -1, -1 ]
[ -1, 4, 4, 5, -1, -1 ]
[ "SyFN3vmNf", "iclr_2018_B1D6ty-A-", "iclr_2018_B1D6ty-A-", "iclr_2018_B1D6ty-A-", "ry_SthoGG", "iclr_2018_B1D6ty-A-" ]
iclr_2018_SyL9u-WA-
Stabilizing Gradients for Deep Neural Networks via Efficient SVD Parameterization
Vanishing and exploding gradients are two of the main obstacles in training deep neural networks, especially in capturing long range dependencies in recurrent neural networks (RNNs). In this paper, we present an efficient parametrization of the transition matrix of an RNN that allows us to stabilize the gradients that arise in its training. Specifically, we parameterize the transition matrix by its singular value decomposition (SVD), which allows us to explicitly track and control its singular values. We attain efficiency by using tools that are common in numerical linear algebra, namely Householder reflectors for representing the orthogonal matrices that arise in the SVD. By explicitly controlling the singular values, our proposed svdRNN method allows us to easily solve the exploding gradient problem and we observe that it empirically solves the vanishing gradient issue to a large extent. We note that the SVD parameterization can be used for any rectangular weight matrix, hence it can be easily extended to any deep neural network, such as a multi-layer perceptron. Theoretically, we demonstrate that our parameterization does not lose any expressive power, and show how it potentially makes the optimization process easier. Our extensive experimental results also demonstrate that the proposed framework converges faster, and has good generalization, especially when the depth is large.
rejected-papers
Pros:\n+ Clearly written paper.\n+ Good theoretical analysis of the expressivity of the proposed model.\n+ Efficient model update is appealing.\n+ Reviewers appreciated the addition of results on the copy and adding tasks in Appendix C.\nCons:\n- Evaluation was on less-standard RNN tasks. A language modeling task should have been included in the empirical evaluation because language modeling is such an important application of RNNs.\n\nThis paper is close to the decision boundary, but the reviewers strongly felt that demonstration of the method on a language modeling task was necessary for acceptance.
val
[ "Syi9ojdgf", "Bkyxj89lM", "H1Z5gRJZf", "rk61wXVQM", "BJttB7EXf", "HJtJL7Emz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "This paper proposed a new parametrization scheme for weight matrices in neural network based on the Householder reflectors to solve the gradient vanishing and exploding problems in training. The proposed method improved two previous papers:\n1) stronger expressive power than Mahammedi et al. (2017),\n2) faster gradient update than Vorontsov et al. (2017).\nThe proposed parametrization scheme is natrual from numerical linear algebra point of view and authors did a good job in Section 3 in explaining the corresponding expressive power. The experimental results also look promising. \n\nIt would be nice if the authors can analyze the spectral properties of the saddle points in linear RNN (nonlinear is better but it's too difficult I believe). If the authors can show the strict saddle properties then as a corollary, (stochastic) gradient descent finds a global minimum. \n\nOverall this is a strong paper and I recommend to accept.", "This paper suggests a reparametrization of the transition matrix. The proposed reparametrization which is based on Singular Value Decomposition can be used for both recurrent and feedforward networks.\n\nThe paper is well-written and authors explain related work adequately. The paper is a follow up on Unitary RNNs which suggest a reparametrization that forces the transition matrix to be unitary. The problem of vanishing and exploding gradient in deep network is very challenging and any work that shed lights on this problem can have a significant impact. \n\nI have two comments on the experiment section:\n\n- Choice of experiments. Authors have chosen UCR datasets and MNIST for the experiments while other experiments are more common. For example, the adding problem, the copying problem and the permuted MNIST problem and language modeling are the common experiments in the context of RNNs. For feedforward settings, classification on CIFAR10 and CIFAR100 is often reported.\n\n- Stopping condition. The plots suggest that the optimization has stopped earlier for some models. Is this because of some stopping condition or because of gradient explosion? Is there a way to avoid this?\n\n- Quality of figures. Figures are very hard to read because of small font. Also, the captions need to describe more details about the figures.", "The paper introduces SVD parameterization and uses it mostly for controlling the spectral norm of the RNN. \n\nMy concerns with the paper include: \n\na) the paper says that the same method works for convolutional neural networks but I couldn't find anything about convolution. \n\nb) the theoretical analysis might be misleading --- clearly section 6.2 shouldn't have title ALL CRITICAL POINTS ARE GLOBAL MINIMUM because 0 is a critical point but it's not a global minimum. Theorem 5 should be phrased as \n\nall critical points of the population risk that is non-singular are global minima.\n\nc) the paper should run some experiments on language applications where RNN is widely used\n\nd) I might be wrong on this point, but it seems that the GPU utilization of the method would be very poor so that it's kind of impossible to scale to large datasets? \n", "Thank you for your comments. We have carefully reorganized the paper to take into account your suggestions.\n\nSince our parameterization works for any real matrix, any weight matrix in CNN (e.g. 
filter matrix) can also be SVD parametrized.\nSince the gradient exploding/vanishing problem is generally considered more significant in RNN/MLP models than in CNN, we have not implement our algorithm as yet for CNNs.\n\n(Please also note that the state-of-the-art neural networks for image processing consists of both convolutional layers and fully-connected layers, like LeNet, AlexNet, ResNet, or Inception, etc. We are already able to parameterize the fully connected layers using our scheme. )\n\nWe will explicitly explain in section 6.2 that we are talking about non-singular critical points. Our theory effectively states that our update rule avoids singular critical points, and thus spurious local minimum.\n\nIn Appendix C (experimental results), we have added experiments on widely used benchmarking RNN tasks like Adding and Copying tasks. In all experiments, svdRNN outperforms others expecially in capturing long-range dependencies. Also from the plots of (first layer) gradient magnitude, we can observe that svdRNN has much more stable gradient than that of RNN and LSTM, which could be the reason why it converges much faster.\n \nThe efficiency and GPU utilization is indeed an important aspect.\nHowever, with efficient linear algebra algorithms, GPU utilization of our method can actually be quite high. The main part in our algorithm, where we multiply m Householder reflectors to the hidden matrix (stacking of hidden vector h's within a minibatch), can be done using blocked (BLAS3) algorithm widely used in QR decomposition.\nFor example, in LAPACK library, the corresponding subroutine is called 'DLARFT', while in MAGMA (LAPACK with GPU) it is called 'magma_cunmqr'. We plan to implement these blocked BLAS3 algorithms as part of our software in the near future.", "Thank you for your insightful comments. We have carefully reorganized the paper to take into account your suggestions.\n\nWe actually have done experiments on Adding and Copying tasks -- these classical benchmarking tasks can test the model's ability in learning long term dependencies. In the new version, we have included experimental result of Adding/Copying problem in Appendix C. On short sequnences, all models performs similarly well. However, svdRNN outperforms other models when sequence length is large.\nFor example, on the addition task, when length of the sequence is 300, svdRNN reached almost 0 loss after 5k examples while RNN/LSTM failed to converge within 20k examples. From the plot of first layer gradient magnitude on different models, we can observe that svdRNN has much more stable gradients than RNN/LSTM.\nThese experiments support our claim that svdRNN is much better at capturing long-term dependencies than vanilla RNNs.\n\nIn most experiments we run all models for the same number of iterations. In the experiment on MNIST, RNN with random initialization encountered gradient explosion and stopped early. Implementing gradient clipping might be able to solve this. However, to have a fair comparison among the algorithms, we did not use huristics like gradient clipping on any of these models.\n\nSorry for the small font in figures. We have enlarged the fonts in figures, and added more details in the figure captions.", "Thank you for your insightful comments. We'll consider your suggestions and make modifications to the paper.\n\nIt is certainly important for the community to study the spectral properties of different types of stationary points of linear RNN. 
Nevertheless, in our work we focus on how our proposed svdRNN helps to avoid spurious local minima, which is guaranteed by Theorem 5 and its corollary." ]
[ 7, 5, 5, -1, -1, -1 ]
[ 4, 4, 3, -1, -1, -1 ]
[ "iclr_2018_SyL9u-WA-", "iclr_2018_SyL9u-WA-", "iclr_2018_SyL9u-WA-", "H1Z5gRJZf", "Bkyxj89lM", "Syi9ojdgf" ]
iclr_2018_B1twdMCab
Dynamic Integration of Background Knowledge in Neural NLU Systems
Common-sense or background knowledge is required to understand natural language, but in most neural natural language understanding (NLU) systems, the requisite background knowledge is indirectly acquired from static corpora. We develop a new reading architecture for the dynamic integration of explicit background knowledge in NLU models. A new task-agnostic reading module provides refined word representations to a task-specific NLU architecture by processing background knowledge in the form of free-text statements, together with the task-specific inputs. Strong performance on the tasks of document question answering (DQA) and recognizing textual entailment (RTE) demonstrates the effectiveness and flexibility of our approach. Analysis shows that our models learn to exploit knowledge selectively and in a semantically appropriate way.
rejected-papers
Pros: + The paper is very clearly written. + The proposed re-embedding approach is easily implemented and can be integrated into fancier architectures. Cons: - A lot of the gains reported come from lemmatization, and the gains from background knowledge become marginal when used on a stronger baseline (e.g., ESIM with full training data and full word vectors). This paper is rather close to the decision boundary. The authors had reasonable answers for some of the reviewers' concerns, but in the end the reviewers were not completely convinced.
train
[ "SkdyfcOlf", "BJdxRnOlz", "ByyFhLYgM", "Bk8Ppr4Wf", "Skxg6NE-M", "HJzqfHVZz", "B1DVGLEbG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "This paper proposes a model for adding background knowledge to natural language understanding tasks. The model reads the relevant text and then more assertions gathered from background knowledge before determining the final prediction. The authors show this leads to some improvement on multiple tasks like question answering and natural language inference (they do not obtain state of the art but improve over a base model, which is fine in my opinion).\n\nI think the paper does a fairly good job at doing what it does, it is just hard to get excited by it. \nHere are my major comments:\n\n* The authors explains that the motivation for the work is that one cannot really capture all of the knowledge necessary for doing natural language understanding because the knowledge is very dynamic. But then they just concept net to augment text. This is quite a static strategy, I was assuming the authors are going to use some IR method over the web to back up their motivation. As is, I don't really see how this motivation has anything to do with getting things out of a KB. A KB is usually a pretty static entity, and things are added to it at a slow pace.\n\n* The author's main claim is that retrieving background knowledge and adding it when reading text can improve performance a little when doing QA and NLI. Specifically they take text and add common sense knowledge from concept net. The authors do a good job of showing that indeed the knowledge is important to gain this improvement through analysis. However, is this statement enough to cross the acceptance threshold of ICLR? Seems a bit marginal to me.\n\n* The author's propose a specific way of incorporating knowledge into a machine reading algorithm through re-embeddings that have some unique properties of sharing embeddings across lemmas and also having some residual connections that connect embeddings and some processed versions of them. To me it is unclear why we should use this method for incorporating background knowledge and not some simpler way. For example, have another RNN read the assertions and somehow integrate that. The process of re-creating embeddings seems like one choice in a space of many, not the simplest, and not very well motivated. There are no comparisons to other possibilities. As a result, it is very hard for me to say anything about whether this particular architecture is interesting or is it just in general that background knowledge from concept net is useful. As is, I would guess the second is more likely and so I am not convinced the architecture itself is a significant contribution.\n\nSo to conclude, the paper is well-written, clear, and has nice results and analysis. The conclusion is that reading background knowledge from concept net boost performance using some architecture. This is nice to know but I think does not cross the acceptance threshold.\n\n", "The main emphasis of this paper is how to add background knowledge so as to improve the performance of NLU (specifically QA and NLI) systems. They adopt the sensible perspective that background knowledge might most easily be added by providing it in text format. However, in this paper, the way it is added is simply by updating word representations based on this extra text. This seems too simple to really be the right way to add background knowledge. 
\n\nIn practice, the biggest win of this paper turns out to be that you can get quite a lot of value by sharing contextualized word representations between all words with the same lemma (done by linguistic preprocessing; the paper never says exactly how, not even if you read the supplementary material). This seems a useful observation which it would be easy to apply everywhere and which shows fairly large utility from a bit of linguistically sensitive matching! As the paper notes, this type of sharing is the main delta in this paper from simply using a standard deep LSTM (which the paper claims to not work on these data sets, though I'm not quite sure couldn't be made to work with more tuning).\n\npp. 6-7: The main thing of note seems to be that sharing of representations between words with the same lemma (which the tables refer to as \"reading\" is worth a lot (3.5-6.0%), in every case rather more than use of background knowledge (typically 0.3-1.5%). A note on the QA results: The QA results are certainly good enough to be in the range of \"good systems\", but none of the results really push the SOTA. The best SQuAD (devset) results are shown as several percent below the SOTA. In the table the TriviaQA results are shown as beating the SOTA, and that's fair wrt published work at the time of submission, but other submissions show that all of these results are below what you get by running the DrQA (Chen et al. 2017) system off-the-shelf on TriviaQA, so the real picture is perhaps similar to SQuAD, especially since DrQA is itself now considerably below the SOTA on SQUAD. Similar remarks perhaps apply to the NLI results.\n\np.7 In the additional NLI results, it is interesting and valuable to note that the lemmatization and knowledge help much more when amounts of data (and the covarying dimensionality of the word vectors) is much smaller, but the fact that the ideas of this paper have quite little (or even negative) effects when run on the full data with full word vectors on top of the ESIM model again draws into question whether enough value is being achieved from the world knowledge.\n\nBiggest question:\n - Are word embeddings powerful enough as a form of memory to store the kind of relational facts that you are accessing as background knowledge?\n\nMinor notes:\n - The paper was very well written/edited. The only real copyediting I noticed was in the conclusion: and be used ➔ and can be used; that rely on ➔ that relies on.\n - Should reference to (Manning et al. 1999) better be to (Manning et al. 2008) since the context here appears to be IR systems?\n - On p.3 above sec 3.1: What is u? Was that meant to be z?\n - On p.8, I'm a bit suspicious of the \"Is additional knowledge used?\" experiment which trains with knowledge and then tests without knowledge. It's not surprising that this mismatch might hurt performance, even if the knowledge provided no incremental value over what could be gained from standard word vectors alone.\n - In the supplementary material the paper notes that the numbers are from the best result from 3 runs. This seems to me a little less good experimental practice than reporting an average of k runs, preferably for k a bit bigger than 3.\n\n\n", "The quality of this paper is good. The presentation is clear but I find lack of description of a key topic. The proposed model is not very innovative but works fine for the DQA task. For the TE task, the proposed method does not perform better than the state-of-the-art systems. 
\n\n- As ESIM is one of the key components in the experiments, you should briefly introduce ESIM and explain how you incorporated with your vector representations into ESIM.\n- The reference of ESIM is not correct.\n- Figure 1 is hard to understand. What do you indicate with the box and arrow? Arrows seem to have some different meanings. \n- What corpus did you use to pre-train word vectors? \n- As the proposed method was successful for the QA task, you need to explain QA data sets and how the questions are solved.\n- I also expect performance and error analysis of the task results. \n- To claim \"task-agnostic\", you need to try to apply your method to other NLP tasks as well.\n- Page 3. \\Sigma is not defined.", "We thank you for your review and helpful comments.\n\nWe will clarify that we used spacy for lemmatization. Sorry for missing that.\nWe will also clarify misunderstandings in the paper as well as change the reference.\n\n\n==Are word embeddings powerful enough as a form of memory to store the kind of relational facts that you are accessing as background knowledge?==\n\nShort answer: it might be and frankly, we show that it works (see section 6 for counterfactual reasoning analysis). \n\nTo understand the reasons for this one has to consider that only facts that are relevant for the context and task have to be remembered on the fly for each concept. These are not too many, and their relevance can change from task to task.\n\n==Simplicity==\n'This seems too simple to really be the right way to add background knowledge. '\nWe disagree with this statement, because 1) our solution works and 2) what does 'right' mean? Obviously, there are infinite solutions to this problem and we present merely one not claiming that it is the 'right' way. However, it is a very *practical* and *useful* way of doing it because it is orthogonal to all existing task-architectures operating on word embeddings (almost all). Furthermore, we focus on simplicity to have more control over the system, because it is the first that integrates background knowledge in textual form like this. Because we are the first to try this, we like to point out that this is not trivial, although the solution might seem that way (which is not a bad sign). \n\nWe agree, however, that there might be other interesting architectural options for integrating knowledge which should be explored in the future.\n\n==Reading as biggest win==\nWe acknowledge that this is the case and another indicator that our reading architecture is a good solution to incorporate contextual information from the task and back knowledge alike. However, this does not change the fact that background knowledge helps, although less than merely reading. Please note that the reason for this is that 1) additional knowledge is not always present and 2) knowledge helps resolving long-tail phenomena such as handling antonyms properly.\n\n==Results==\nOur QA results are not SotA and we do not aim for that since this requires a lot of engineering and trial and error to get 1 or 2% more. On the NLI datasets we show that also more complex models such as ESIM can improve when employing our strategy and using knowledge. 
\nComparing against SotA is of course important but it never gives the full picture because every setup is slightly different and small changes can lead to rather large performance differences (from our experience with these datasets).\n\n= SQuAD =\nExtra knowledge gives a boost of an additional 2% which is quite remarkable given that this merely stems from the use of external knowledge (note that external knowledge can mostly help to account for long tail phenomena).\nIn particular, considering that we are using a single Layer BiLSTM as task model, no attention, no hand crafted features, nothing, the reported results are remarkably high.\n\nIn contrast, DrQA explicitly hand engineers features (NER, word in question, etc.) using sophisticated pre-trained systems such as NER systems and uses a three layer BiLSTM. Still, our model achieves higher or comparable performance.\n\n=TriviaQA=\nTriviaQA is somewhat delicate and reported results are not entirely comparable to each other, because data preparation is different. However, we will adopt recent advances in [1] to deal properly with TriviaQA and update results accordingly.\n\n[1] Clark et al. Simple and Effective Multi-Paragraph Reading Comprehension\n\n=NLI results=\nWe present slightly negative results for SNLI. We argued in the paper that systems are overfit on SNLI and that SNLI is very homogeneous wrt the vocabulary used while being very large. This means that most knowledge can be learnt implicitly on that amount of data. However, this is not the case anymore when lowering the data as shown in section 5.3. \n\nIn short, SNLI is not a good test bed for our work, however, we included results for reasons of transparency.\n\nOn MultiNLI we respectfully disagree with the reviewer by noting that knowledge helps in most cases, up to several % when considering simpler BiLSTM as task model.\n\n== Is additional knowledge used? ==\nThis is a valid experiment because in many cases in NLI there is NO knowledge retrieved by our heuristic, that means our model knows how to deal with the absence of it. However, at the same time it learns to rely on it, hence the drop in performance which shows that knowledge is indeed utilized in some meaningful way.\n\n== best result from 3 runs==\nResults did only change slightly between runs (up to 0.2%), so there would not have been a big difference in presentation. We agree, however, that this might indeed be good practice, but frankly, common practice on these datasets is unfortunately to only report the best results.\n", "We thank you for your review and helpful comments.\n\nWe are sorry that parts of our paper were not clear enough and will improve upon that given your concerns.\n\n== Clarification==\n\nThe paper is not about \"generating vector representations of text and background knowledge\", it is about incorporating explicit knowledge (that is text) into a neural NLU system when given a single instance of a task, such that it is able to make better predictions. Knowledge is provided from the outside and the system learns to make use of it. We do this by refining word representations on-the-fly given additional knowledge in textual form when processing a single instance of a task. That means, we do not generate vector representations for knowledge nor text, but simply learn to process and use it on-the-fly.\n\n== ESIM == \nWe will check and update the reference. As mentioned in our paper, we build refined word emebeddings. 
Because ESIM takes word emebeddings as input naturally (as most neural NLU systems) there is no need to adapt ESIM at all. Being agnostic to the task architecture is an important aspect of our solution.\n\n== Pre trained Embeddigns == \nWe do not pre train any word embeddings. We use pretrained embeddings from Glove (Pennington et al 2015) and refine these on the fly using our architecture.\n\n== Analysis== \nwe provided several analysis with respect to our contribution, namely integrating background knowledge. Note that this work does not aim at improving SotA on highly competitive datasets but tries to explore a way to incorporate background knowledge on the fly. This is why we did not focus on task-specifics in our analysis.\n\n== agnostic == \nour solution is task and model-agnostic. We show that it works well together with 3 different models on 4 different datasets of 2 challenging NLU tasks.", "We thank you for your review and helpful comments.\n\n== Dynamic Integration ==\nWe agree with your point that ConceptNet is a static resource. Despite this fact, we show that our model can deal with changing knowledge in section 6, by swapping antonym with synonym relations. This results in our model making correct counterfactual decisions and shows that our model is not finetuned on a static resource. Extending ConceptNet would have a positive impact on our models without retraining. This is why we call it dynamic.\n\nIn this paper we tried to address on the fly integration of knowledge first, because this is a non trivial problem by itself. We agree that (once such a system works) it would be interesting to focus more on the retrieval part (potentially from the web).\n\n== Marginality of Contribution ==\nShowing that explicit, external knowledge can be used to improve reading comprehension can have a huge impact. This means that task models can focus on handling knowledge given to the system from the outside rather than memorizing every single fact about the world in a limited set of parameters. Furthermore, it allows the model to react to changing knowledge dynamically without having to be retrained. We demonstrate this ability in section 6 where our model learns to make correct counterfactual decisions. This is an important first step in that direction.\n\nGiven that this is the first work that tries and succeeds in integrating knowledge in such a general setting (i.e., knowledge provided in plain text), we believe that this work opens up very promising paths for future research.\n\n== Simpler way of integrating knowledge ==\n\"For example, have another RNN read the assertions and somehow integrate that. \" -> This is exactly what we are doing. However, integrating this information in a simpler way than we did through word embeddings seems unlikely, especially when considering that we aim for a task agnostic reading architecture complementary to any neural task architecture. This goal was also clearly motivated in the paper, namely, that we want to be task and model agnostic with our approach such that we stack any task model on top of our reading module.\n\nFinally, this paper does not claim that our architecture is the best solution, but it is the *first* solution to incorporate knowledge almost seamlessly and we show that it works. That is why we stuck with that architecture and focused on analyzing how a model with knowledge is able to perform and handle the provided knowledge.\n", "Thanks for the thorough review. 
I would make two points:\n1) Although both reading and knowledge add to this task (and reading adds more in some cases), both are important, new results.\n2) NLP/NLU/ML/(vision/AI/...) are full of models that are \"too simple\". We should be very glad about this. First, it means we can solve some interesting, potentially complex tasks without terribly sophisticated models. Second, they serve as a reasonable baseline for significant architectural innovations." ]
[ 5, 5, 6, -1, -1, -1, -1 ]
[ 3, 4, 4, -1, -1, -1, -1 ]
[ "iclr_2018_B1twdMCab", "iclr_2018_B1twdMCab", "iclr_2018_B1twdMCab", "BJdxRnOlz", "ByyFhLYgM", "SkdyfcOlf", "BJdxRnOlz" ]
iclr_2018_SJ19eUg0-
BLOCK-DIAGONAL HESSIAN-FREE OPTIMIZATION FOR TRAINING NEURAL NETWORKS
Second-order methods for neural network optimization have several advantages over methods based on first-order gradient descent, including better scaling to large mini-batch sizes and fewer updates needed for convergence. But they are rarely applied to deep learning in practice because of high computational cost and the need for model-dependent algorithmic variations. We introduce a variant of the Hessian-free method that leverages a block-diagonal approximation of the generalized Gauss-Newton matrix. Our method computes the curvature approximation matrix only for pairs of parameters from the same layer or block of the neural network and performs conjugate gradient updates independently for each block. Experiments on deep autoencoders, deep convolutional networks, and multilayer LSTMs demonstrate better convergence and generalization compared to the original Hessian-free approach and the Adam method.
rejected-papers
Pros: + Clearly written paper. Cons: - Limited empirical evaluation: paper should compare to first-order methods with well-tuned hyperparameters, since the block Hessian-free hyperparameters likely were well tuned, and plots of convergence as a function of time need to be included. - Somewhat limited novelty in that block-diagonal curvature approximations have been used before, though the application to Hessian-free optimization is new. The reviewers liked the clear description of the proposed algorithm and well-structured paper, but after discussion were not prepared to accept it primarily because (1) they wanted to see algorithmic comparisons in terms of convergence vs. time in addition to the convergence vs. updates that were provided; (2) they wanted more assurance that the baseline first-order optimizers had been carefully tuned; and (3) they wanted results on larger scale tasks.
train
[ "SyJaBw1eG", "BkCQ4Ywgz", "SJ1GzQKxz", "ry6E_jXVz", "BkI3GiqXz", "H11iXicmz", "rJ-_ZscQz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "public", "author", "author", "author" ]
[ "Summary: \nThe paper considers second-order optimization methods for training of neural networks.\nIn particular, the contribution of the paper is a Hessian-free method that works on blocks of parameters (this is a user defined splitting of the parameters in blocks, e.g., parameters of each layer is one block, or parameters in several layers could constitute a block). \nThis results into a block-diagonal approximation to the curvature matrix, in order to improve Hessian-free convergence properties: in the latter, a single step might require many CG steps, so the benefit from using second-order information is not apparent.\nThis is mainly an experimental work, where the authors show the merits of their approach on deep autoencoders, convolutional networks and LSTMs: results show favourable performance compared to the original Hessian-free approach and the Adam method.\n\nOriginality: \nThe paper is based on the works of Collobert (2004) and Le Roux et al. (2008), as well as the work of Martens: the twist is that each layer of the neural network is considered a parameter block, so that gradient interactions among weights in a single layer are more useful than those between weights in different layers. This increases the separability of the problem and reduces the complexity. \n\nImportance: \nUnderstanding the difference between first- and second-order methods for NN training is an important topic. Using second-order methods could be considered at its infancy, compared to the wide variety of first-order methods. Having new results on second-order methods with interesting results would definitely attract some attention at the conference. \n\nPresentation/Clarity: \nThe paper is well structured and well written. The authors clearly place their work w.r.t. state of the art and previous works, so that it is clear what is new and what is known.\n\nComments:\n1. It is not clear why the deficiency of first-order methods on training NNs with big batches motivates us to turn into second-order methods. Is there a reasoning for this statement? Or is it just because second-order methods are kind-of the only other alternative we have?\n\n2. Assuming we can perform a second-order method, like Newton's method, on a deep NN. Since originally Newton's method was designed to find solutions that have gradient equal to zero, and since NNs have saddle points (probably many more than local minima), even if we could perfectly perform second-order Newton motions, there is no guarantee whether we converge to a local minimum or a saddle point. However, since we perform Newton's method approximately in practice, this might help escaping saddle points. Any comment on this aspect (I'm not aware whether this is already commented in Schraudolph 2002, where the Gauss-Newton matrix was proposed instead of the Hessian)?\n", "The paper proposes a block-diagonal hessian-free method for training deep networks. \n\n- The block-diagonal approximation has been used in [1]. Although [1] is using Gauss-Newton matrix, the idea of \"block-diagonal\" approximation is similar. \n\n- Is the computational time (per iteration) of the proposed method similar to SGD/Adam? All the figures are showing the comparison in terms of number of updates, but it is not clear whether this speedup can be reflected in the training time. \n\n- Comparing block-diagonal approximation vs original HF method: \nIt is not clear to me what's the benefit using block-diagonal approximation. Is the time cost per iteration similar or faster? 
\nOr the main benefit is to reduce #CG iterations? (but it seems #CG iterations are fixed for both methods in the experiments). \nAlso, the paper mentioned that \"the HF method requires many hundreds of CG iterations for one update\". Is this true?\n Usually we can set a stopping condition for solving the Newton system.\n\n- It seems the benefit of block-diagonal approximation is marginal in CNN. \n\n[1] Practical Gauss-Newton Optimisation for Deep Learning. ICML 2017. ", "In this paper, authors discuss the use of block-diagonal hessian when computing the updates. The block-diagonal hessian makes it easier to solve the \"newton\" directions, as the CG can be run only on smaller blocks (and hence less CG iterations are needed).\n\nThe paper is nicely written and all was clear to me. In general, I agree that having larger batch-size is the way to go, for very large datasets and a pure SGD type of methods are having problems to efficiently utilize large clusters.\n\nThe only negative thing I find in the paper is the lack of more numerical results. Indeed, the paper is clearly not a theoretical paper, is proposing a new algorithm, hence there should be evidence that it works. For example, I would like to see how the choice of hyper-parameters influences the speed of the algorithm. Was \"CG\" cost included in the \"x\"-axis? i.e. if we put \"passes\" over the data as x-axis, then 1 update \\approx 30 CG + some more == 32 batch evaluation.\nSo please try to make the \"x\"-axis more fair.\n\n ", "1. Wall clock time plots:\n I think it is very important that you include in the final paper the wall-clock time plots. Per iteration, improvement is not of that much interest from a practical perspective if there is significantly different time computation. In the optimization of any Machine Learning model, there are two things of interest - if the final result is significantly different and how long it takes to get there. In your reply to AnonReviwer2, you mentioned that it is between 5-10 times of ADAM. However, it would be nice to clear up if you have actually measured that or is this an eye-balling of the factor. Also is this on a CPU or on a GPU? The aim of such publication should be towards people adopting these methods and without this **very** relevant information it really does not tell us a lot.\n\n2. Batch size and model size\n In your first 2 experiments, you either use very large batch sizes (6000 or 1000) or very tiny models (LSTM with 10 hidden units). These very unrepresentative of what modern Deep Learning workloads are. Nevertheless, this is not nessacarily an issue, as the method is new. However, failing to report what happens when you decrease the batch size (very standard batch size is ~ 100-200) or increase the model size in the LSTM example (the smallest LSTMs I've seen used in practical applications have at least a few hundred nodes) is missfortunate. Especially, from some of the papers in the literature, we know that using too small batches for the curvature matrices (when you can not afford to store moving averages like in [1,2,3]) can lead to significantly detrimental effects, as the Monte-Carlo estimates of the Gauss-Newton and the Hessian become degenerate. I genuinely hope that the authors decide to include this comparison in the final version. The final experiment in the paper is indeed interesting. However, as we can see ADAM achieves lower training cost. 
Although they have same test accuracy, this is really not much meaningful, as it is highly unfair to compare optimizer based on their generalization performance, when these are methods built for optimizing the objective at hand.\n\n3. Hyperparameter optimization\n In the comparison against ADAM, you use the default setting of the optimizer. Although that is used often in practice, it is again unfair, since given that you have presented only 3 experiments, it is highly likely that the hyperparameters selected for the BHF have been highly tuned to those specific datasets. In [3] they specifically state that they also do hyperparameter search for ADAM parameters, including the decay rate of the learning rate. Additionally, on some of the CNN in the literature is not uncommon to use Momentum rather than ADAM. Thus a more in-depth comparison against better fine-tuned first order methods would be quite desirable. \n\n[1] Optimizing Neural Networks with Kronecker-factored Approximate Curvature. ICML 2015\n[2] A Kronecker-factored Approximate Fisher Matrix for Convolution Layers. ICML 2016\n[3] Practical Gauss-Newton Optimisation for Deep Learning. ICML 2017. \n", "Thank the reviewer for the thoughtful feedback.\n\n------The block-diagonal approximation has been used in [1]. Although [1] is using Gauss-Newton(GN) matrix, the idea of \"block-diagonal\" approximation is similar.\n\nResponse: The work [1] and earlier works [2,3] uses block-diagonal approximation to the Gauss-Newton matrix or Fisher information matrix and requires explicit representation and inverse of each block of GN matrix or Fisher matrix. [1, 2] are only applied to feedforward networks. Further approximation and assumption are considered for working around convolutional neural networks [3]. It is not obvious how to generalize the algorithm to recurrent networks. The significant difference between ours and [1,2,3] is that we use block-diagonal approximation when evaluating the Hessian (Gauss-Newton) vector product and don’t require explicit inverse of the GN matrix and can work directly with convolutional and recurrent networks.\n \n------Is the computational time (per iteration) of the proposed method similar to SGD/Adam? All the figures are showing the comparison in terms of number of updates, but it is not clear whether this speedup can be reflected in the training time.\n\nResponse: The time consumption of block-diagonal Hessian-free (BHF) and that of the full Hessian-free (HF) are comparable. The time per iteration of BHF and HF is 5-10 times of the Adam method. However, the total number of iterations of BHF and HF are much smaller than Adam, which can offset the per-iteration cost.\n\n \n------Comparing block-diagonal approximation vs original HF method: It is not clear to me what's the benefit using block-diagonal approximation. Is the time cost per iteration similar or faster? Or the main benefit is to reduce #CG iterations? (but it seems #CG iterations are fixed for both methods in the experiments). Also, the paper mentioned that \"the HF method requires many hundreds of CG iterations for one update\". Is this true? Usually we can set a stopping condition for solving the Newton system.\n\nResponse: The BHF algorithm partitions the original HF method into a bunch of sub-problems and solves each sub-problem with the same CG iterations as the full HF method and hence may get more accurate solution. 
In practice given a fixed budgets of CGs the BHF takes slightly more time (15%) per update than the full HF method if without paralleling the sub-problems onto different workers but achieves better accuracy. Hence the main benefit is to reduce #CGs and achieve better accuracy. We note that the performance of HF cannot be improved by simply increasing the #CGs. It is true that we can set a stopping condition (fixed number of CGs) for solving the Newton system. How to achieve good accuracy given a number of CGs for solving the Newton system is not clear. Our BHF algorithm provides a way that easily achieve good accuracy with a small number of CG runs.\n\n\n \n[1] Practical Gauss-Newton Optimisation for Deep Learning. ICML 2017.\n[2] Optimizing Neural Networks with Kronecker-factored Approximate Curvature. ICML 2015\n[3] A Kronecker-factored Approximate Fisher Matrix for Convolution Layers. ICML 2016\n", "We would like to thank the reviewer for appreciating our contribution in this paper. \n\n------It is not clear why the deficiency of first-order methods on training NNs with big batches motivates us to turn into second-order methods. Is there a reasoning for this statement? Or is it just because second-order methods are kind-of the only other alternative we have?\n\nResponse: It have been shown that second-order methods worked well with big batch sizes and in fact small batch size will make the convergence of the second-order methods unstable and hurt their performances. On the contrary, the first-order methods on training NNs with big batches have problem on the speedup and generalization (Keshar et al. 2016; Takac et al. 2013; Dinh et al. 2017). These deficiencies of first-order methods with large mini batch size motivate us to turn into second-order methods to handle big batch size.\n \n------Assuming we can perform a second-order method, like Newton's method, on a deep NN. Since originally Newton's method was designed to find solutions that have gradient equal to zero, and since NNs have saddle points (probably many more than local minima), even if we could perfectly perform second-order Newton motions, there is no guarantee whether we converge to a local minimum or a saddle point. However, since we perform Newton's method approximately in practice, this might help escaping saddle points. Any comment on this aspect (I'm not aware whether this is already commented in Schraudolph 2002, where the Gauss-Newton matrix was proposed instead of the Hessian)?\n \nResponse: The reviewer proposes an very interesting view of the possible advantage of the Gauss-Newton matrix and the approximate Newton over Newton’s method, which was not commented in (Schraudolph 2002). As far as we know the main problem of Newton’s method on trading deep NN is that for nonlinear system, the Hessian matrix is not necessarily positive definite so Newton’s method may diverge, which is consistent with the unstable practical performance of Newton’s method in training deep NN. The Gauss-Newton matrix is an approximation of the local curvature with positive semidefinite property as long as the loss function with respect to the output of the network is convex, which holds for most popular loss functions (MSE, cross entropy). Indeed, these approximations to the curvature matrix may act as noises which help to escape saddle points while the exact Newton’s method may fail. 
This perspective requires further exploration.\n \n", "We thank the reviewer for appreciating our work.\nThe “x”-axis represents the number of updates and the CG cost is not included.\nThe time consumption of block-diagonal Hessian-free (BHF) and that of the full Hessian-free (HF) are comparable. The time per iteration of BHF and HF is 5-10 times that of the Adam method. However, the total number of iterations of BHF and HF is much smaller than that of Adam, which can offset the per-iteration cost.\n" ]
[ 6, 4, 6, -1, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, -1 ]
[ "iclr_2018_SJ19eUg0-", "iclr_2018_SJ19eUg0-", "iclr_2018_SJ19eUg0-", "iclr_2018_SJ19eUg0-", "BkCQ4Ywgz", "SyJaBw1eG", "SJ1GzQKxz" ]
iclr_2018_H1bhRHeA-
Unbiased scalable softmax optimization
Recent neural network and language models have begun to rely on softmax distributions with an extremely large number of categories. In this context calculating the softmax normalizing constant is prohibitively expensive. This has spurred a growing literature of efficiently computable but biased estimates of the softmax. In this paper we present the first two unbiased algorithms for maximizing the softmax likelihood whose work per iteration is independent of the number of classes and datapoints (and does not require extra work at the end of each epoch). We compare our unbiased methods' empirical performance to the state-of-the-art on seven real world datasets, where they comprehensively outperform all competitors.
rejected-papers
The key concern from the reviewers that was not addressed is that none of the experimental results show convergence vs. time; only convergence vs. number of iterations is reported. While the authors point out that their method is O(ND) instead of O(KND), the reviewers really wanted to see graphs demonstrating this, given that the implicit SGD method requires an iterative solver. The revised paper is otherwise much improved from the original submission, but falls a bit short of ICLR acceptance because of the lack of a measurement of convergence vs. time. Pros: + Promising unbiased algorithms for optimizing the log-likelihood of a model using a softmax without having to repeatedly compute the normalizing factor. Cons: - None of the experimental results show convergence vs. time, only convergence vs. number of iterations.
train
[ "BJ_UlF_lM", "HJBXAQFeM", "HJIPOSAbf", "H1-65w37f", "BkuezBxfz", "SJCdbBezM", "HygceHgMG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "The paper presents interesting algorithms for minimizing softmax with many classes. The objective function is a multi-class classification problem (using softmax loss) and with linear model. The main idea is to rewrite the obj as double-sum using the dual formulation and then apply SGD to solve it. At each iteration, SGD samples a subset of training samples and labels. The main contribution of this paper is: 1) proposing a U-max trick to improve the numerical stability and 2) proposing an implicit SGD approach. It seems the implicit SGD approach is better in the experimental comparisons. \n\nI found the paper quite interesting, but meanwhile I have the following comments and questions: \n\n- As pointed out by the authors, the idea of this formulation and doubly SGD is not new. (Raman et al, 2016) has used a similar trick to derive the double-sum formulation and solved it by doubly SGD. The authors claim that the algorithm in (Raman et al) has an O(NKD) cost for updating u at the end of each epoch. However, since each epoch requires at least O(NKD) time anyway (sometimes larger, as in Proposition 2), is another O(NKD) a significant bottleneck? Also, since the formulation is similar to (Raman et al., 2016), a comparison is needed. \n\n- I'm confused by Proposition 1 and 2. In appendix E.1, the formulation of the update is derived, but why we need Newton to get log(1/epsilon) time complexity? I think most first order methods instead of Newton will have linear converge (log(1/epsilon) time)? Also, I guess we are assuming the obj is strongly convex?\n\n- The step size is selected in one dataset and used for all others. This might lead to divergence of other algorithms, since usually step size depends on data. As we can see, OVE, NCE and IS diverges on Wiki-small, which may be fixed if the step size is chosen for each data (in practice we can choose using subsamples for each data). \n\n- All the comparisons are based on \"epochs\", but the competing algorithms are quite different and can have very different running time for each epoch. For example, implicit SGD has another iterative solver for each update. Therefore, the timing comparison is needed in this paper to justify that implicit SGD is faster. \n\n- The claim that \"implicit SGD never overshoots the optimum\" needs more supports. Is it proved in some previous papers? \n\n- The presentation can be improved. I think it will be helpful to state the algorithms explicitly in the main paper.", "The problem of numerical instability in applying SGD to soft-max minimization is the motivation. It would have been helpful if the author(s) could have made a formal statement. \nSince the main contributions are two algorithms for stable SGD it is not clear how one can formally say that they are stable. For this a formal problem statement is necessary. The discussion around eq (7) is helpful but is intuitive and it is difficult to get a formal problem which we can use later to examine the proposed algorithms.\n\nThe proposed algorithms are variants of SGD but it is not clear why they should converge faster than existing strategies.\nSome parts of the text are badly written, see for example the following line(see paragraph before Sec 3)\n\n\"Since the converge of SGD is\ninversely proportional to the magnitude of its gradients (Lacoste-Julien et al., 2012), we expect the\nformulation to converge faster.\"\n \nwhich could have shed more light on the matter. \n\nThe title is also misleading in using the word \"exact\". 
I have understand it correct the proposed SGD method solves the optimization problem to an additive error.\n\nIn summary the algorithms are novel variants of SGD but the associated claims of numerical stability and speed of convergence vis-a-vis existing methods are missing. The choice of word exact is also not clear.\n", "The paper develops an interesting approach for solving multi-class classification with softmax loss.\n\nThe key idea is to reformulate the problem as a convex minimization of a \"double-sum\" structure via a simple conjugation trick. SGD is applied to the reformulation: in each step samples a subset of the training samples and labels, which appear both in the double sum. The main contributions of this paper are: \"U-max\" idea (for numerical stability reasons) and an \"\"proposing an \"implicit SGD\" idea.\n\nUnlike the first review, I see what the term \"exact\" in the title is supposed to mean. I believe this was explained in the paper. I agree with the second reviewer that the approach is interesting. However, I also agree with the criticism (double sum formulations exist in the literature; comments about experiments); and will not repeat it here. I will stress though that the statement about Newton in the paper is not justified. Newton method does not converge globally with linear rate. Cubic regularisation is needed for global convergence. Local rate is quadratic. \n\nI believe the paper could warrant acceptance if all criticism raised by reviewer 2 is addressed.\n\nI apologise for short and late review: I got access to the paper only after the original review deadline.", "The modified paper with the recommended changes has been uploaded. We believe that the paper has been much improved by addressing the concerns raised by the reviewers (thank you again for your feedback). In particular the new bounds on Implicit SGD's step size and computation using the bisection method give new insights into its performance and make it a more reliable method. The new experiments have very similar results to the old experiments, with Implicit SGD outperforming all other methods.", "Thank you for your valuable feedback. Below we respond to your points and mention changes that we have made to the paper to address your concerns.\n\n- Our method only requires O(ND) per epoch, since in each epoch we take an O(D) stochastic gradient at each data point. This is a factor of K smaller than Raman et al. In most of our experiments, the second epoch of Raman would not have even started by the time our algorithms had already nearly converged!\n\nThe reason why their method is so slow is that they require the denominator (i.e. the partition function) for each data point to be fully computed in each epoch with an associated cost of O(NKD). This is necessary to ensure that their stochastic gradients do not become too large. Our methods never require that the denominator be computed exactly - instead we optimize over u_i which approximates this quantity. As a result our algorithms cost only O(ND), opposed to O(NKD), per epoch.\n\n- The objective in the implicit SGD updates is always strongly convex (this is explicitly stated in the appendix). You are quite correct that first order methods can achieve the log(1/epsilon) rate and that Newton's method is not required (thanks for pointing this out!). This improves the run-time bound of our algorithm. We have adjusted our comments on the bound accordingly.\n\n- The OVE, NCE, IS and U-max algorithms all have virtually the same runtime. 
Implicit SGD has a longer runtime because it has to solve an optimization problem to compute the update. If multiple data points and multiple classes are sampled each iteration, the run time to compute the update can be significant, and may be more expensive than computing the inner products x^\\top w. This will be the case even if first order methods with O(log(1/epsilon)) convergence rates are used. To assess the speed of Implicit SGD in this general setting we agree that a timing comparison would be required.\n\nThat said, we have now added a very tight bound for the Implicit SGD update when a single data point and single class is sampled. This is because, in this case, we can update using the bisection method. This is a new result that has been added as Proposition 2. Since the size of the initial bisection interval is provably small, the cost of the bisection search will be less than that of calculating the x^\\top w inner products and so Implicit SGD will have very similar runtime to methods like OVE, NCE, IS and U-max. Indeed we found this to be the case in practice.\n\n- Our decision to fix the learning rate using one experiment and then apply this rate to all other datasets was done in order to reflect how the algorithm might be used in practice. However, you suggest a nice alternative and we will follow your suggestion. We will rerun the experiments using 10% of the data to tune the learning rate, and then apply the tuned learning rate to the full dataset. This also gives us the opportunity to compare to vanilla SGD, which was unstable on most datasets except for very small learning rates (which we will now tune for each dataset). We will add the results to the paper once the experiments are complete.\n\n- Our claim that \"Implicit SGD never overshoots the optimum\" was limited to the quadratic for which we gave the explicit Implicit SGD update formula: \\theta^{(t+1)} = \\theta^{(t)}/(1+\\eta_t). In general it is possible for Implicit SGD to overshoot the optimum. We will improve our wording to clarify this.\n\n- We will work on the text to improve its quality.", "Thank you for your valuable feedback. Below we respond to your points and mention changes that we have made to the paper to address your concerns.\n\n- You are correct to point out that \"The problem of numerical instability in applying SGD to soft-max minimization [formulated as double sum] is the motivation\". We will make this clearer in the text.\n\n- Our response to the comment that we have not formally proven the stability of our algorithms is as follows. The stability of all these algorithms is essentially determined by the magnitude of the gradient vector. For learning rates that are sufficiently large to have reasonably fast convergence, the magnitude of the gradients should always be small enough so that the step size (i.e. the product of the learning rate and the gradient) does not overflow numerical precision or grossly overshoot the optimum. In the original version, we provided a bound on the magnitude of the gradients for U-max (see the paragraph under Theorem 1). We have now added Proposition 3 that proves that the magnitude of the Implicit SGD step size is O(x_i^\\top (w_k - w_{y_i})-u_i) (as opposed to vanilla SGD which has an exponentially larger bound). Thus, we have formal bounds on the magnitude of the stochastic gradients for both of our algorithms.\n\nYou correctly observe that the exponential form of the vanilla stochastic gradients was the cause of its instability. 
Having bounded gradients, as our algorithms do, directly resolves this source of instability. This was confirmed in our (yet to be added - see below) experiments where our algorithms were stable for learning rates where vanilla SGD was not.\n\n- Since Implicit SGD has roughly the same convergence rates as vanilla SGD, we do not make any formal claims of faster convergence for our methods. However, since the gradients of our methods are bounded, we expect their variance to be lower and the allowed step length to be larger, leading to faster convergence in practice. To make these claims testable, we will add empirical comparisons to vanilla SGD where we use sufficiently small learning rates to ensure that it is stable. See our response to the reviewer #2 for more details on our experiment design.\n\n- By \"exact\" we were implying \"converges to the optimal MLE\". Although the MLE is an efficient estimator, for a finite number of samples, its optimum will differ from the true parameters. We will change \"exact\" to \"unbiased\", which should resolve potential confusion.\n\n- We will work on the text to improve its quality.", "Thank you for your valuable feedback. Since your comments overlap significantly with reviewer #2, we have addressed your points in the response to reviewer #2." ]
[ 5, 5, 5, -1, -1, -1, -1 ]
[ 4, 3, 4, -1, -1, -1, -1 ]
[ "iclr_2018_H1bhRHeA-", "iclr_2018_H1bhRHeA-", "iclr_2018_H1bhRHeA-", "iclr_2018_H1bhRHeA-", "BJ_UlF_lM", "HJBXAQFeM", "HJIPOSAbf" ]
iclr_2018_HyKZyYlRZ
Large Scale Multi-Domain Multi-Task Learning with MultiModel
Deep learning yields great results across many fields, from speech recognition, image classification, to translation. But for each problem, getting a deep model to work well involves research into the architecture and a long period of tuning. We present a single model that yields good results on a number of problems spanning multiple domains. In particular, this single model is trained concurrently on ImageNet, multiple translation tasks, image captioning (COCO dataset), a speech recognition corpus, and an English parsing task. Our model architecture incorporates building blocks from multiple domains. It contains convolutional layers, an attention mechanism, and sparsely-gated layers. Each of these computational blocks is crucial for a subset of the tasks we train on. Interestingly, even if a block is not crucial for a task, we observe that adding it never hurts performance and in most cases improves it on all tasks. We also show that tasks with less data benefit largely from joint training with other tasks, while performance on large tasks degrades only slightly if at all.
rejected-papers
Pros: + Interesting and promising approach to multi-domain, multi-task learning. + Paper is clearly written. Cons: - Reads more like a technical report than a research paper: more space should be devoted to explaining the design decisions behind the model and the challenges involved, as this will help others tackle similar problems. This paper had extensive discussion between the reviewers and authors, and between the reviewers. In the end, the reviewers wanted more insight into the architectural choices made, either via ablation studies or via a series of experiments in which tasks or components are added one at a time. The consensus was that this would give readers a lot more insight into the challenges involved in tackling multiple domains and multiple tasks in a single model and a lot more guidance on how to do it.
train
[ "rJ6r_eqlf", "rJEgR88Nf", "Sy4tKnHNM", "S1VMUmbeM", "rke2Ms7ez", "Syf0DbVNM", "r1iTXuT7G", "ry8Hwv67z", "HyKMBPp7z" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "The paper presents a multi-task, multi-domain model based on deep neural networks. The proposed model is able to take inputs from various domains (image, text, speech) and solves multiple tasks, such as image captioning, machine translation or speech recognition. The proposed model is composed of several features learning blocks (one for each input type) and of an encoder and an auto-regressive decoder, which are domain-agnostic. The model is evaluated on 8 different tasks and is compared with a model trained separately on each task, showing improvements on each task.\n\nThe paper is well written and easy to follow.\n\nThe contributions of the paper are novel and significant. The approach of having one model able to perform well on completely different tasks and type of input is very interesting and inspiring. The experiments clearly show the viability of the approach and give interesting insights. This is surely an important step towards more general deep learning models. \n\nComments:\n\n* In the introduction where the 8 databases are presented, the tasks should also be explained clearly, as several domains are involved and the reader might not be familiar with the task linked to each database. Moreover, some databases could be used for different tasks, such as WSJ or ImageNet.\n\n* The training procedure of the model is not explained in the paper. What is the cost function and what is the strategy to train on multiple tasks ? The paper should at least outline the strategy.\n\n* The experiments are sufficient to demonstrate the viability of the approach, but the experimental setup is not clear. Specifically, there is an issue about the speech recognition part of the experiment. It is not clear what the task exactly is: continuous speech recognition, isolated word recognition ? The metrics used in Table 1 are also not clear, they should be explained in the text. Also, if the task is continuous speech recognition, the WER (word error rate) metric should be used. Information about the detailed setup is also lacking, specifically which test and development sets are used (the WSJ corpus has several sets).\n\n* Using raw waveforms as audio modality is very interesting, but this approach is not standard for speech recognition, some references should be provided, such as:\nP. Golik, Z. Tuske, R. Schluter, H. Ney, Convolutional Neural Networks for Acoustic Modeling of Raw Time Signal in LVCSR, in: Proceedings of the Annual Conference of the International Speech Communication Association (INTERSPEECH), 2015, pp. 26–30.\nD. Palaz, M. Magimai Doss and R. Collobert, (2015, April). Convolutional neural networks-based continuous speech recognition using raw speech signal. In Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on (pp. 4295-4299). IEEE.\nT. N. Sainath, R. J. Weiss, A. Senior, K. W. Wilson, and O. Vinyals. Learning the Speech Front-end With Raw Waveform CLDNNs. Proceedings of the Annual Conference of the International Speech Communication Association (INTERSPEECH), 2015.\n\nRevised Review:\nThe main idea of the paper is very interesting and the work presented is impressive. However, I tend to agree with Reviewer2, as a more comprehensive analysis should be presented to show that the network is not simply multiplexing tasks. The experiments are interesting, except for the WSJ speech task, which is almost meaningless. 
Indeed, it is not clear what the network has learned given the metrics presented, as the WER on WSJ should be around 5% for speech recognition.\nI thus suggest to either drop the speech experiment, or the modify the network to do continuous speech recognition. A simpler speech task such as Keyword Spotting could also be investigated.\n", "> we only did Table 4 and skipped ablations where nothing worked because it is unusual to report very bad results, but we will happily put it back.\n\nYes, that's what I was aiming for. I'm not sure what architecture you started with, hopefully a more standard one (if not, justify). Pick a few of your tasks (ideally two). Run your base model and show the readers that if you use the naive solution you would get really bad results. It would be great if you can run the base model on the individual tasks and compare. These results demonstrate the problem we are facing, and ideally should go to the end of section 1.\n\nIn section 2, you introduce several basic components. I believe you have some intuitions why these components are useful. I think this section is already good, but try to add more motivation to each components. For example, some of the components are more problem agnostic, and some of them are more problem specific. (This might be a good place to mention the potential multiplexing issue.) The problem specific ones are easier to justify, because they are aligned with the standard architectures for each individual tasks. You might get into trouble when explaining the problem agnostic ones, because all of these components are powerful by themselves. For example, the attention blocks and convolution blocks can pretty much do anything.\n\nIn the experiments, first use the two-task setting in section 1, add the proposed components one by one, and show that they help. (Ideally you should analyze the why here and address the multiplexing problem. See the first post.) Repeat the experiments with different task combination and component combination. Finally report the ones with all eight tasks and show the readers the proposed architecture can indeed solve the problem introduced in section 1.\n\nThis is how I would write the paper if I had all the results. You don't need to follow the structure, but the experimental comparisons for the minimal settings should be there. I hope these suggestions are concrete enough. Please let me know if anything is unclear and if you need more input.\n", "We are very grateful for the suggestions on what to improve on. We will certainly add a section with more negative results on multi-task training -- we only did Table 4 and skipped ablations where nothing worked because it is unusual to report very bad results, but we will happily put it back. We will also add more intuitions about the architectural choices, is the beginning of Section 2 the right place? We would be grateful for even more suggestions and concrete points to improve in the final version of the paper.", "The paper describes a neural end-to-end architecture to solve multiple tasks at once. The architecture consists of an encoder, a mixer, a decoder, and many modality networks to cover different types of input and output pairs for different tasks. The engineering endeavor is impressive, but the paper has little scientific value. Below are a few suggestions to make the paper stronger.\n\nIt is possible that the encoder, mixer, and decoder are just multiplexing tasks based on the input. 
One way to analyze whether this happens is to predict the identity of the task from the hidden vectors. If this is the case, how to prevent it from happening? If this does not happen, what is being shared across tasks? This can be analyzed by embedding the inputs from different tasks and looking for inputs from other tasks within a neighborhood in the embedding space.\n\nWhy multitask learning help the model perform better is still unclear. If the model is able to leverage knowledge learned from one task to perform another task, then we expect to see either faster convergence or good performance with fewer samples. The authors should analyze if this is the case, and if not, what are we actually benefiting from multitask learning?\n\nIf the modality network is shared across multiple tasks, we expect the learned hidden representation produced by the modality network is more universal. If that is the case, what information of the input is being retained when training with multiple tasks and what information of the input is being discarded when training with a single task?\n\nReporting per-token accuracies, such as those in Table 2, is problematic. It's unclear how to compute per-token accuracies for structured prediction tasks, such as speech recognition, parsing, and translation. Furthermore, based on the results in Table 2, the model clearly fails on the speech recognition task. The author should also report the standard speech recognition metric, word error rates (WER), for the speech recognition task in Table 1.\n", "The paper presents a multi-task architecture that can perform multiple tasks across multiple different domains. The authors design an architecture that works on image captioning, image classification, machine translation and parsing.\n\nThe proposed model can maintain performance of single-task models and in some cases show slight improvements. This is the main take-away from this paper. \n\nThere is a factually incorrect statement - depthwise separable convolutions were not introduced in Chollet 2016. Section 2 of the same paper also notes it (depthwise convolutions can be traced back to at least 2012).", "Thanks for the response.\n\nWe probably have a different definition about what science is. To me, any set of rigorous experiments with good controls is considered science. From your response, I can tell that the task that you are working on is hard, and you have made many design choices to make it work. I would consider a paper with scientific value if you document what works, what doesn't, what the potential problems are, and what individual changes make it work. That's how I interpret \"implementation issues\" mentioned in the call-for-papers. This is partially fulfilled in Table 3 and 4, but it is more like a post-hoc analysis, attacking a straw man rather than the real problem.\n\nWhen I ask for WERs for the speech recognition task, I'm not going to criticize you for not getting SOTA results. As long as the proposed approach has potential and merit, SOTA results are not required. When you report WERs that are comparable to other papers, it gives the readers a sense of how the model is performing. I believe that's the reason you report BLEU scores in the paper. Token error rates have very little meaning to speech researchers.\n\nI think we can all agree the above is the minimum requirement of a scientific study. 
I believe you have spent a huge effort optimizing the architecture, so you must already have all the experimental numbers to write a good paper and you could have written a paper including all these numbers and comparisons for your design decisions.\n\nBeyond the bare minimum, it would be great if you can give hunches and analyses about why you are having these problems, why you make certain decisions, and why those decisions work.\n\nLet me end my response with a quote from Carl Meyer's Matrix Analysis book.\n\n\"Reacting to criticism concerning the lack of motivation in his writings, Gauss remarked that architects of great cathedrals do not obscure the beauty of their work by leaving the scaffolding in place after the construction has been completed. His philosophy epitomized the formal presentation and teaching of mathematics throughout the nineteenth and twentieth centuries, and it is still commonly found in mid-to-upper-level mathematics textbooks. The inherent efficiency and natural beauty of mathematics are compromised by straying too far from Gauss' viewpoint. But, as with most things in life, appreciation is generally preceded by some understanding seasoned with a bit of maturity, and in mathematics this comes from seeing some of the scaffolding.\"", "We are thankful to the reviewer for suggestions on how to improve the paper, which we address below. But we need to start with the main negative point, summarized in this sentence in the review: \"The engineering endeavor is impressive, but the paper has little scientific value.\"\n\nWe are the first to admit that our paper does not provide final insights on multi-task learning: we do not know exactly why the transfer learning we observe happens and we cannot pinpoint the exact parts of the joint representation that would be responsible for it. That's why we use the words \"first step\" in our paper. On the other hand, we had to work hard to make this multi-task learning work at all. All key components that we describe (small modality nets, mixing of convolutions, attention, and especially large scale sparely-gated mixture-of-experts layers) are crucial to making this work. Without any single of these components, or if they are put together in a wrong way, the multi-problem training just doesn't work: the results get far worse than in single-problem training or some tasks fail to train at all.\n\nWe feel that it has been the case many times in deep learning (and in science in general) that an engineering solution was presented first, and a proper detailed analysis only followed later. In fact, one could argue that we are only beginning to understand neural network representations in general, years after they were constructed. So we'd insist that presenting an engineered system, like ours, has value for the development of science later. Also, to get back from the discussions that touch on philosophy, we need to point out that the ICLR'18 call for papers states \"we take a broad view of the field\" and the list of relevant topics even includes \"implementation issues\". We believe this means that our paper fits the conference and we hope that the reviewer will take this into account and revise the score.\n\nAs for the questions in the comments.\n\n* \"It is possible that the encoder, mixer, and decoder are just multiplexing tasks?\" Iit is not possible that they are doing just that, because we see improvements in final scores due to transfer learning. If they were only multiplexing, then there would be no transfer learning. 
But it is possible that they are multiplexing to a certain extent, and we are planning a future work extending the MultiModel with a domain-adversarial loss, to check if that makes transfer learning more pronounced.\n\n* \"We expect to see either faster convergence or good performance with fewer samples.\" We do see transfer learning, e.g. on parsing, where there are fewer data samples available we see better error, going down from ~3% to ~2% on accuracy (which is correlated with other metrics, see below).\n\n* \"What information of the input is being retained when training with multiple tasks and what information of the input is being discarded when training with a single task?\" We believe this is a great question, but it has not even been truly answered for much simpler single-task deep networks. We believe that papers like (https://arxiv.org/abs/1703.00810) start to provide methods for answering these questions, but these are too recent for us to be able to apply to our large model at this point.\n\n* \"In Table 2, it's unclear how to compute per-token accuracies for structured prediction tasks [...] the model clearly fails on the speech recognition [...] should also report WER in Table 1\". \n\nWe make a clear distinction in the paper between Table 1 and Table 2. In Table 2, we compare different versions of our model against each other. In that case, it is clear how to compute per-token accuracies: it's the same model operating on the same data tokenized in the same way, so we simply report the accuracy and perplexity. As has been observed many times in sequence-to-sequence models, internal per-token accuracy and perplexity correlate strongly with external metrics, so we believe that this is a good way to compare different versions of the same model (it is also the default way of model selection in neural machine translation and other sequence-to-sequence tasks). In Table 1, we try to put the results of our model in the context of other models in the field. We did not do it for all 8 tasks because there are technical difficulties, but we do get some fully correctly transcribed sentences on the dev set in speech recognition, and in general we'd rather not report a number than report a wrong one (see the reply to Reviewer 1 above for the discussion on problems with WER). Since we are not claiming SOTA results and didn't do heavy tuning, we believe that this is a reasonable way to report our results. We are working on adding more and we will be grateful for more suggestions for the final version.\n\nWe hope that the above discussion will convince the reviewer to revise the score.", "We are grateful for pointing out the false statement about depthwise separable convolutions. We changed the statement in Section 2.1 and we still have a more complete discussion in Section 2.7 (was Section 2.6 in the previous revision). The earliest reference we could find was by Laurent Sifre and Stéphane Mallat from 2013, but we believe these were discussed even earlier. 
We will be grateful if the reviewer can provide us with a reference or a formulation about such earlier work and we will include it in the final version of the paper.\n\nWe also want to point out that, in addition to maintaining performance of single-task models, the MultiModel shows substantial gains on tasks where only a smaller corpus is available, such as on parsing (going from ~3% per-token error to ~2% error, a large relative reduction) We find this transfer learning one of the key results of our paper and we hope that the reviewer will take it into account.", "We'd like to thank the reviewer for the constructive suggestions. We tried to follow them all, space permitting, in the new revision of the paper.\n\n* We added some general clarification in the list of the corpora. Specifying all details (train/dev/test splits, text normalization, tokenizers, etc.) would take a lot of space, but in the final version we will add pointers to data preparation code in our open-source code base where all these details are included.\n\n* We are very grateful for the suggestion to add an \"experimental setup\" section, which we did. We used cross-entropy loss for each task and trained in an asynchronous distributed way, as described now in that section.\n\n* We described the metrics used in Table 1 in the text, as suggested. We did not include WER in Table 1 for a number of technical reasons. Mainly, since we use subword tokens, to get comparable WER we need to go through de-tokenization, renormalization and re-tokenization with a comparable tokenizer, and we are not confident that this process makes for truly comparable results. Note that this a direct consequence of the multi-problem setting: we need a single tokenizer for all corpora, so we cannot afford to just use the default one for each data-set as they are different. We are now re-implementing SOTA speech-to-text models in our codebase, so by publication time we hope to get accuracy results from SOTA models, which we could use as a base for comparison. Another problem is whether we should compare to results with or without a language model: we do not use any explicit LM on top of our network, but the multi-task training can basically act as learning a language model too. In any case, we are not claiming SOTA results as we focus on transfer learning and other aspects of multi-task training.\n\n* We added the suggested references and will be grateful for more.\n\nWe want to thank the reviewer again for constructive comments." ]
[ 6, -1, -1, 3, 6, -1, -1, -1, -1 ]
[ 3, -1, -1, 5, 4, -1, -1, -1, -1 ]
[ "iclr_2018_HyKZyYlRZ", "Sy4tKnHNM", "Syf0DbVNM", "iclr_2018_HyKZyYlRZ", "iclr_2018_HyKZyYlRZ", "r1iTXuT7G", "S1VMUmbeM", "rke2Ms7ez", "rJ6r_eqlf" ]
iclr_2018_rJ4uaX2aW
Large Batch Training of Convolutional Networks with Layer-wise Adaptive Rate Scaling
A common way to speed up training of large convolutional networks is to add computational units. Training is then performed using data-parallel synchronous Stochastic Gradient Descent (SGD) with a mini-batch divided between computational units. With an increase in the number of nodes, the batch size grows. However, training with a large batch often results in lower model accuracy. We argue that the current recipe for large batch training (linear learning rate scaling with warm-up) is not general enough and training may diverge. To overcome these optimization difficulties, we propose a new training algorithm based on Layer-wise Adaptive Rate Scaling (LARS). Using LARS, we scaled AlexNet and ResNet-50 to a batch size of 16K.
rejected-papers
Pros: + The proposed large-batch, synchronous SGD method is able to generalize at larger batch sizes than previous approaches (e.g., Goyal et al., 2017). Cons: - Evaluation on more than one task would make the paper more convincing. - The addition of more hyperparameters makes the proposed algorithm less appealing. - Some theoretical justification of the layer-wise rate scaling would help. - It isn't clear that the comparison to Goyal et al., 2017 is entirely fair, because that paper also had recommendations for the implementation of batch normalization, weight decay, and a momentum correction as the learning rate is scaled up, but this submission does not address any of those. Although the revised paper addressed many of the reviewers' concerns, they still did not feel it was quite strong enough to be accepted to ICLR.
train
[ "ry33G5FxM", "Sk-rQ7qxf", "S1b88I5xM", "BJLfcihbz", "r1rHHihWG", "S1J7jF2-z" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "This paper provides an optimization approach for large batch training of CNN with layer-wise adaptive learning rates. \nIt starts from the observation that the ratio between the L2-norm of parameters and that of gradients on parameters varies\nsignificantly in the optimization, and then introduce a local learning rate to consider this observation for a more stable and efficient optimization. Experimental results show improvements compared with the state-of-the-art algorithm.\n\nReview:\n(1) Pros\nThe proposed optimization method considers the dynamic self-adjustment of the learning rate in the optimization based on the ratio between the L2-norm of parameters and that of gradients on parameters when the batch size increases, and shows improvements in experiments compared with previous methods.\n\n(2) Cons\ni) LR \"warm-up\" can mitigate the unstable training in the initial phase and the proposed method is also motivated by the stability but uses a different approach. However, it seems that the authors also combine with LR \"warm-up\" in your proposed method in the experimental part, e.g., Table 3. So does it mean that the proposed method cannot handle the problem in general?\n\nii) There is one coefficient that is independent from layers and needs to be set manually in the proposed local learning rate. The authors do not have a detail explanation and experiments about it. In fact, as can be seen in the Algorithm 1, this coefficient can be as an independent hyper-parameter (even is put with the global learning rate together as one fix term).\n\niii) In the section 6, when increase the training steps, experiments compared with previous methods should be implemented since they can also get better results with more epochs.\n\niv) Writing should be improved, e.g., the first paragraph in section 6. Some parts are confusing, for example, the authors claim that they use initial LR=0.01, but in Table 1(a) it is 0.02. ", "The paper proposes a new approach to determine learning late for convolutional neural networks. It starts from observation that for batch learning with a fixed number of epochs, the accuracy drops when the batch size is too large. Assuming that the number or epochs and batch size are fixed, the contribution of the paper is a heuristic that assigns different learning late to each layer of a network depending on a ratio of the norms of weights and gradients in a layer. The experimental results show that the proposed heuristic helps AlexNet and ResNet end up in a larger accuracy on ImageNet data.\n Positives:\n- the proposed approach is intuitively justified\n- the experimental results are encouraging\n Negatives:\n- the methodological contribution is minor\n- no attempt is made to theoretically justify the proposed heuristic\n- the method introduces one or two new hyperparameters and it is not clear from the experimental results what overhead is this adding to network training\n- the experiments are done only on a single data set, which is not sufficient to establish superiority of an approach\n Suggestions:\n- consider using different abbreviation (LARS is used for least-angle regression) \n", "This paper proposes a training algorithm based on Layer-wise Adaptive Rate Scaling (LARS) to overcome the optimization difficulties for training with large batch size. The authors use a linear scaling and warm-up scheme to train AlexNet on ImageNet. The results show promising performance when using a relatively large batch size. The presented method is interesting. 
However, the experiments are poorly organized since some necessary descriptions and discussions are missing. My detailed comments are as follows.\n\nContributions:\n\n1.\tThe authors propose a training algorithm based LARS with the adaptive learning rate for each layer, and train the AlexNet and ResNet-50 to a batch size of 16K. \n2.\tThe training method shows stable performance and helps to avoid gradient vanishing or exploding.\n\nWeak points:\n\nThe training algorithm does not overcome the optimization difficulties when the batch size becomes larger (e.g. 32K), where the training becomes unstable, and the training based on LARS and warm-up can’t improve the accuracy compared to the baselines. \n\nSpecific comments: \n\n1.\tIn Algorithm 1, how to choose $ \\eta $ and $ \\beta $ in the experiment?\n2.\tUnder the line of Equation (3), $ \\nabla L(x_j, w_{t+1}) \\approx L(x_j, w_{t}) $ should be $ \\nabla L(x_j, w_{t+1}) \\approx \\nabla L(x_j, w_{t}) $.\n3.\tHow can the training algorithm based on LARS improve the generalization for the large batch? \n4.\tIn the experiments, what is the parameter iter_size? How to choose it?\n5.\tIn the experiments, no descriptions and discussions are given for Table 3, Figure 4, Table 4, Figure 5, Table 5 and Table 6. The authors should give more discussions on these tables and figures. Furthermore, the captions of these tables and figures confusing.\n6.\tOn page 4, there is a statement “The ratio is high during the initial phase, and it is rapidly decreasing after few epochs (see Figure 2).” This is quite confusing, since Figure 2 is showing the change of learning rates w.r.t. training epochs.\n", "1. Comment : \"LR \"warm-up\" can mitigate the unstable training in the initial phase and the proposed method is also motivated by the stability but uses a different approach. However, it seems that the authors also combine with LR \"warm-up\" in your proposed method in the experimental part, e.g., Table 3. So does it mean that the proposed method cannot handle the problem in general?\"\nA: Warm-up alone is not able to mitigate the unstable training for Alexnet. LARS with warm-up can. There is also a new version of algorithm which eliminates warm-up completely\n\n2. Comment: \"There is one coefficient that is independent from layers and needs to be set manually in the proposed local learning rate. The authors do not have a detail explanation and experiments about it. In fact, as can be seen in the Algorithm 1, this coefficient can be as an independent hyper-parameter (even is put with the global learning rate together as one fix term).\"\nA: Agree. In the paper we used fixed trust coefficient $eta\" and changed learning rate. One can used instead fixed global learning rate policy which does not depend on networks, and scale up only trust coefficient . I will add the explanation to the revised paper.\n\n3. Comment \"In the section 6, when increase the training steps, experiments compared with previous methods should be implemented since they can also get better results with more epochs.\" \nA: The point of section 6 was to show that there is no \"fundamental\" limit on the accuracy of large batch training, provided we do train it long enough and regularize well (e.g. increase weight decay or add data augmentation. \n\n4. Comment 4: \"the authors claim that they use initial LR=0.01, but in Table 1(a) it is 0.02\"\n A: typo is fixed in the revised paper. \n\n \n", "1. 
Comment 1: \"the methodological contribution is minor\"\n A: We proposed a new training method, which enable training with large batch of networks which is not possible with all other methods (AFAK). \n\n2. Comment 2: \" no attempt is made to theoretically justify the proposed heuristic\"\n A: \" Agree, unfortunately most methods used for deep learning don't have formal proof yet\"\n\n3. Comment 3: \"the method introduces one or two new hyperparameters and it is not clear from the experimental results what overhead is this adding to network training\"\nA: there is one hyper-parameter - trust coefficient $0<eta<1\"$ . I added the explanation how it depends to revised paper \n\n4. Comment 4: \"the experiments are done only on a single data set, which is not sufficient to establish superiority of an approach\"\nA: We focused on Imagenet classification only for large batch training The results in the paper are for 3 models (Alexnet, Alexnet-BN, and Resnet-50). I will add Googlenet results for completeness. \n\n5. Suggestions: \" consider using different abbreviation (LARS is used for least-angle regression) \"\n A: Agree, probably LARC (Layer-wise Adaptive Rate Control\" would be better, but the algorithm was already implemented in nvcaffe when we realized that there is a name collision. ", "Q: \"Weak points: The training algorithm does not overcome the optimization difficulties when the batch size becomes larger (e.g. 32K), where the training becomes unstable, and the training based on LARS and warm-up can’t improve the accuracy compared to the baselines\"\nA:\n1) Standard recipe \"increase learning rate proportionally to batch size\" does not work for such networks as Alexnet and Googlenet even with warm-up. LARS is the only algorithm (AFAK) that allows to train Alexnet with batch > 2K to the same accuracy as for small batches\n2) We added Appendix with data on LARS performanse comparing to other \"Large Batch training\" methods for Resnet-50. \n\nSpecific comments: \n1) Q: \"In Algorithm 1, how to choose $ \\eta $ and $ \\beta $ in the experiment?\"\nA: $ $0<\\eta<1 $, and it depends on the batch size $B$. It grows with B: for example for Alexnet with B=1K the optimal $\\eta=0.002$, with B=4K the optimal $\\eta=0.005$, with B=4K the optimal $\\eta=0.008$,... Weight decay $\\beta$ is chosen as usual. We found that with large batch it's beneficial to increase weight decay to improve the regularization \n2) typo: fixed in the revised paper \n3) Q: \"How can the training algorithm based on LARS improve the generalization for the large batch?\"\nA: LARS does not replace standard regularization methods (weight decay, batch norm, or data augmentation). But we found that with LARS we can use larger weight decay than usual, since LARS automatically limits the norm of weights during training: $|| W(T)|| <= ||W(0)|| * exp \\int_{0}^{T} \\gamma(t) dt$. \n4) Q: In the experiments, what is the parameter iter_size? How to choose it?\n A: iter_size is used in caffe to emulate large batch if batch does not fit into GPU DRAM. For example if the batch which fits in GPU memory is 1K, and we want to use B=8K, then iter_size=8.\n5) and 6) We will add more explanation to the revised paper\n " ]
[ 5, 4, 5, -1, -1, -1 ]
[ 3, 4, 5, -1, -1, -1 ]
[ "iclr_2018_rJ4uaX2aW", "iclr_2018_rJ4uaX2aW", "iclr_2018_rJ4uaX2aW", "ry33G5FxM", "Sk-rQ7qxf", "S1b88I5xM" ]
iclr_2018_HJSA_e1AW
Normalized Direction-preserving Adam
Optimization algorithms for training deep models not only affect the convergence rate and stability of the training process, but are also highly related to the generalization performance of trained models. While adaptive algorithms, such as Adam and RMSprop, have shown better optimization performance than stochastic gradient descent (SGD) in many scenarios, they often lead to worse generalization performance than SGD, when used for training deep neural networks (DNNs). In this work, we identify two problems regarding the direction and step size for updating the weight vectors of hidden units, which may degrade the generalization performance of Adam. As a solution, we propose the normalized direction-preserving Adam (ND-Adam) algorithm, which controls the update direction and step size more precisely, and thus bridges the generalization gap between Adam and SGD. Following a similar rationale, we further improve the generalization performance in classification tasks by regularizing the softmax logits. By bridging the gap between SGD and Adam, we also shed some light on why certain optimization algorithms generalize better than others.
rejected-papers
The paper proposes a modification to Adam which is intended to ensure that the direction of weight update lies in the span of the historical gradients and to ensure that the effective learning rate does not decrease as the magnitudes of the weights increase. The reviewers wanted a clearer justification of the changes made to Adam and a more extensive evaluation, and held to this opinion after reading the authors' rebuttal and revisions. Pros: + The basic idea of treating the direction and magnitude separately in the optimization is interesting. Cons: - Insufficient evaluation of the new method. - More justification and analysis needed for the modifications. For example, are there circumstances under which they will fail? - The modification to Adam and batch-normalized softmax idea are orthogonal to one another, making for a less coherent story. - Proposed method does not have better generalization performance than SGD. - Concern that constraining weight vectors to the unit sphere can harm generalization.
test
[ "BktDDeo4z", "Bk_mQkcgM", "S1-Kfe5lM", "SJ_kd7ixf", "HJq1ofsmf", "HJy8rIyGM", "ryVoBIJMG", "Hk71lIyMG", "SkCcbLkGf", "rkEceLJzG", "HkcU1LJfM", "Bks3kZqgM", "SJ9yMkqlf", "r1864VYlG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "public" ]
[ "Thank the authors for their response. I am disappointed the latest revised paper does not provide any further insight into the ND-Adam updates. It is with much regret that my score remains the same. ", "Method:\n\nThe paper is missing analysis of some important related works such as\n\n\"Beyond convexity: Stochastic quasi-convex optimization\" by E. Hazan et al. (2015) \n\nwhere Stochastic Normalized Gradient Descent (SNGD) was proposed. \n\nThen, normalized gradient versions of AdaGrad and Adam were proposed in \n\n\"Normalized Gradient with Adaptive Stepsize Method for Deep\nNeural Network Training\" by A. W. Yu et al. (2017).\n\nAnother work which I find to be relevant is \n\n\"Follow the Signs for Robust Stochastic Optimization\" by L. Balles and P. Hennig (2017).\n\nFrom my personal experiments, restricting w_i to have L2 norm of 1, i.e., to be +-1 \nleads to worse generalization. One reason for this is that weight decay is not \nreally functioning since it cannot move w_i to 0 or make its amplitude any smaller. \nPlease correct me if I misunderstand something here. \n\nThe presence of +-1 weights moves us to the area of low-precision NNs, \nor more specifically, NNs with binary / binarized weights as in \n\n\"BinaryConnect: Training Deep Neural Networks with\nbinary weights during propagations\" by M. Courbariaux et al. (2015)\n\nand \n\n\"Binarized neural networks: Training deep neural networks with weights and activations constrained to+ 1 or-1\" by M. Courbariaux et al. (2016). \n\nRegarding\n\"Moreover, the magnitude of each update does not depend on themagnitude of the gradient. Thus, ND-Adam is more robust to improper initialization, and vanishing or exploding gradients.\"\n\nIf the magnitude of each update does not depend on the magnitude of the gradient, then the algorithm heavily depends on the learning rate. Otherwise, it does not have any means to approach the optimum in a reasonable number of steps *when* it is initialized very / unreasonably far from it. The claim of your second sentence is not supported by the paper. \n\nEvaluation:\n\nI am not confident that the presented experimental validation is fair. First, the original WRN paper and many other papers with ResNets used weight decay of 0.0005 and not 0.001 or 0.002 as used for SGD in this paper. It is unclear why this setting was changed. One could just use \\alpha_0 = 0.05 and \\lambda = 0.0005.\n\nThen, I don't see why the authors use WRN-22-7.5 which is different from WRN-28-10 which was suggested in the original study and used in several follow-up works. The difference between WRN-22-7.5 and WRN-28-10 is unlikely to be significant, \nthe former might have about only 2 times less parameters which should barely change the final validation errors. However, the use of WRN-22-7.5 makes it impossible to easily compare the presented results to the results of Zagoruyko who had 3.8\\% with WRN-28-10. I believe that the use of the setup of Zagoruyko for WRN-22-7.5 would allow to get much better results than 4.5\\% and 4.49\\% shown for SGD and likely better 4.14\\% shown for ND-Adam. I note that the use of WRN-22-7.5 is unlikely to be due to the used hardware because later in paper the authors refer to WRN-34-7.5.\n\nMy intuition is that the proposed ND-Adam moves the algorithm back to SGD but with potentially harmful constraints of w_i=+-1. Even the values of \\alpha^v_0 found for ND-Adam (e.g., \\alpha^v_0=0.05 in Figure 1B) are in line of what would be optimal values of \\alpha_0 for SGD. 
\n\nI find it uncomfortable that BN-Softmax is introduced here to support the use of an optimization algorithm, moreover, that the values of \\gamma_c are different for CIFAR-10 and CIFAR-100. I wonder if the proposed values are optimal (and therefore selected) for all three tested algorithms or only for Adam-ND. I expect that hyperparameters of SGD and Adam would also need to be revised to account for BN-Softmax.", "This paper proposes a variant of ADAM optimization algorithm that normalizes the weights of each hidden unit. They further suggest using batch normalization on the output of the network before softmax to improve the generalization. The main ideas are new to me and the paper is well-written. The arguments and derivations are very clear. However, the experimental results suggest that the proposed method is not superior to SGD and ADAM.\n\nPros: \n\n- The idea of optimizing the direction while ignoring the magnitude is interesting and make sense.\n- Using batch normalization before softmax is interesting.\n\nCons:\n\n- In the abstract, authors claim that the proposed method has good optimization performance of ADAM and good generalization performance of SGD. Such a method could be helpful if one can get to the same level of generalization faster (less number of epochs). However, the experiments suggest that optimization advantages of the proposed method do not translate to faster generalization. Figures 2,3 and Table 1 indicate that the generalization performance of this method is very similar to SGD.\n\n- The paper is not coherent. In particular, direction-preserving ADAM and batch-normalized softmax trick are completely orthogonal ideas. \n\n- In the introduction and Section 2.2, authors claim that weight decay has a significant effect on the generalization performance of DNNs. I wonder if authors can refer to any work on this. My own experience and several empirical works have suggested that weight decay does not improve generalization significantly.\n\n", "The paper extended the Adam optimization algorithm to preserve the update direction. Instead of using the un-centered variance of individual weights, the proposed method adapts the learning rate for the incoming weights to a hidden unit jointly using the L2 norm of the gradient vector. The authors empirically demonstrated the method works well on CIFAR-10/100 tasks.\n\nComments:\n\n- I found the paper very hard to follow. The authors could improve the clarity of the paper greatly by listing their contribution clearly for readers to digest. The authors also combined the proposed method with a few existing deep learning tricks in the paper. All those tricks that, ie. section 3.3 and 4, should go into the background section.\n\n- Overall, the only contribution of the paper seems to be the ad-hoc modification to Adam in Eq. (9). Why is this a reasonable modification? Do we expect this modification to fail in any circumstances? The experiments on CIFAR dataset and one CNN architecture do not provide enough evidence to show the proposed method work well in general.\n\n", "Dear reviewers,\n\nThank you very much for your valuable comments. We have updated the paper according to the comments. The major changes are listed below:\n\n1. We list our contribution in the introduction.\n\n2. In Sec. 3.2, we note the difference between various gradient normalization schemes and the normalization\nscheme employed ND-Adam. 
We also briefly explain how they improve the robustness to vanishing/exploding gradients, and to improper weight initialization.\n\n3. In Sec. 4, we explain more about the rationale behind regularized softmax. We show that softmax exhibits a similar problem to one of the problems we have identified in Adam. Moreover, as an alternative to batch-normalized softmax, we propose to apply L2-regularization to the softmax logits, which serves the same purpose as batch-normalized softmax, but is easier to use.\n\n4. Due to several differences, the performance of the original WRN implementation is better than that of the TensorFlow implementation we use. Therefore, in Sec. 5, we further compare the performance of SGD and ND-Adam based on the original WRN implementation. The results confirm that ND-Adam eliminates the generalization gap between Adam and SGD. The code for the experiments is available at https://github.com/zj10/ND-Adam.", "4. Then, I don't see why the authors use WRN-22-7.5 which is different from WRN-28-10 which was suggested in the original study and used in several follow-up works. The difference between WRN-22-7.5 and WRN-28-10 is unlikely to be significant, the former might have about only 2 times less parameters which should barely change the final validation errors. However, the use of WRN-22-7.5 makes it impossible to easily compare the presented results to the results of Zagoruyko who had 3.8\\% with WRN-28-10. I believe that the use of the setup of Zagoruyko for WRN-22-7.5 would allow to get much better results than 4.5\\% and 4.49\\% shown for SGD and likely better 4.14\\% shown for ND-Adam. I note that the use of WRN-22-7.5 is unlikely to be due to the used hardware because later in paper the authors refer to WRN-34-7.5.\n\nWe use WRN-22-7.5 instead of WRN-28-10 due to limited hardware resources. We have tested WRN-28-10 on CIFAR-10, which was more than two times slower than WRN-22-7.5 (the latter took about 10 hours on a single GTX 1080 GPU), but only slightly outperforms WRN-22-7.5 (as you mentioned) for both SGD and ND-Adam. To obtain the results in Table 1, we need 36 runs in total, so we decide to use WRN-22-7.5.\n\nDue to the differences mentioned above, the generalization performance of the TensorFlow implementation is worse than that of the original WRN implementation. As you can find in the GitHub links above, the error rates of WRN-28-10 on CIFAR-10 are 5% and 4% (the 3.8% you mentioned is the result of WRN-40-10 with dropout), respectively. The performance of SGD reported in our paper is actually improved upon the original TensorFlow implementation, by using a cosine learning rate schedule and by removing the FC layer (see Sec. 5.1). Since we are not trying to improve WRN itself, we believe the TensorFlow implementation of WRN is a fairly good testbed for comparing the performance of SGD, Adam, and ND-Adam.\n\nWe would like to emphasize that the main contribution of our work is to identify why Adam generalizes worse than SGD, and give a solution to fix the problem. However, we agree with you that comparing with the original implementation of WRN would be more convincing. So we have reimplemented ND-Adam with PyTorch. Interestingly, we find that applying L2 regularization on the scales and biases of BN is indeed important for achieving better generalization performance. 
However, in this case SGD does not benefit from explicitly regularizing softmax, which is likely due to the L2 regularization applied on BN scales, as it indirectly restricts the magnitude of the softmax logits. For fair comparison, we also apply L2 regularization to the biases when using ND-Adam (the scales are not used by ND-Adam). The average results of 2 runs are summarized in the following table:\n+----------------+-------------------+--------------------+\n| | WRN-22-7.5 | WRN-28-10 |\n+ +-------------------+--------------------+\n| | C10 | C100 | C10 | C100 |\n+----------------+--------+----------+--------+----------+\n| SGD (orig.) | 4.15 | --- | 4.00 | 19.25 |\n+----------------+--------+----------+--------+----------+\n| SGD | 3.84 | 19.24 | 3.80 | 18.37 |\n+----------------+--------+----------+--------+----------+\n| ND-Adam | 3.70 | 19.24 | 3.68 | 18.37 |\n+----------------+--------+----------+--------+----------+\nFor both SGD and ND-Adam, we have slightly modified the implementation as done before, i.e., we use a cosine learning rate schedule and remove the FC layer. Some results of the original WRN implementation (SGD (orig.)) are also included for comparison. We will update the paper and the code accordingly.\n\n5. My intuition is that the proposed ND-Adam moves the algorithm back to SGD but with potentially harmful constraints of w_i=+-1. Even the values of \\alpha^v_0 found for ND-Adam (e.g., \\alpha^v_0=0.05 in Figure 1B) are in line of what would be optimal values of \\alpha_0 for SGD.\n\nWe normalize each weight vector to have a norm of 1, but do not constrain each weight to be +-1. As stated in the response to the first comment, the normalization does not reduce the expressiveness of the model. As you observed in Fig. 1(b), we deliberately match the effective learning rate of ND-Adam to that of SGD. And by showing that similar effective learning rates lead to similar generalization performance, we argue that the effective learning rate is a more natural learning rate measure than the learning rate hyperparameters (Sec. 5.2).", "6. I find it uncomfortable that BN-Softmax is introduced here to support the use of an optimization algorithm, moreover, that the values of \\gamma_c are different for CIFAR-10 and CIFAR-100. I wonder if the proposed values are optimal (and therefore selected) for all three tested algorithms or only for Adam-ND. I expect that hyperparameters of SGD and Adam would also need to be revised to account for BN-Softmax.\n\nOne of the problems that degrades the generalization performance of Adam (or SGD without L2 weight decay) is that, for different magnitudes of an input weight vector, the updates given by the same update rule can have different effects on the overall network function. For example, if we increase the magnitude of a weight vector without changing the overall network function, the same update rule will result in a smaller effective learning rate for this weight vector. While this problem can be alleviated by L2 weight decay or solved by the proposed spherical weight optimization, a similar problem also exists for the softmax layer. As discussed in Sec. 4, we can scale the logits by a positive factor without changing the predictions of the model, whereas the gradient backpropagated from it can vary greatly, which motivates the idea of batch-normalized softmax. Regularizing softmax is important for fixing the problem we have identified. 
We will explain more on this motivation in the paper.\n\nBatch-normalized softmax is a simple way to constrain the magnitude of the logits. In our experiments, \\gamma_c is chosen from {1, 1.5, 2.5, ...}, and was tuned for SGD and ND-Adam. SGD and ND-Adam share the same optimal values of \\gamma_c, which also significantly improve the performance of Adam. In addition, we have recently simplified BN-Softmax to an L2 penalty on the logits, the value of which can be shared by different dataset and models with different layers. But unlike BN-Softmax, the value of the L2 penalty is not likely to be shared by SGD or Adam, since the same penalty may lead to different magnitudes of the logits. The experimental results are shown in the table above. We will describe the details in the paper.", "Dear reviewer,\n\nThank you for your comments.\n\n1. In the abstract, authors claim that the proposed method has good optimization performance of ADAM and good generalization performance of SGD. Such a method could be helpful if one can get to the same level of generalization faster (less number of epochs). However, the experiments suggest that optimization advantages of the proposed method do not translate to faster generalization. Figures 2,3 and Table 1 indicate that the generalization performance of this method is very similar to SGD.\n\nThank you for pointing this out, and we agree with you that better generalization performance should help get to the same level of generalization faster. In this case (i.e., training a wide residual network on CIFAR-10/100), optimization is not difficult for either SGD or ND-Adam, but we expect the advantage to become more significant when optimization is difficult. For example, Adam is often preferred over SGD when training GANs, due to better optimization performance.\n\nHowever, since the potential advantage in optimization is not strongly supported by the experiments in this case, we will rephrase this part of the paper. Instead, we would like to emphasize that the main contribution of our work is to identify why Adam generalizes worse than SGD, and give a solution to fix the problem. The hyperparameters of ND-Adam are not extensively tuned, but are tuned to match the effective learning rate (defined as |\\delta w|/|w| in Sec. 2.2) of SGD, as shown in Fig. 1b. Nevertheless, we show that the generalization performance is significantly improved upon that of Adam, and also outperforms SGD when batch-normalized softmax is used.\n\n2. The paper is not coherent. In particular, direction-preserving ADAM and batch-normalized softmax trick are completely orthogonal ideas. \n\nOne of the problems that degrades the generalization performance of Adam (or SGD without L2 weight decay) is that, for different magnitudes of an input weight vector, the updates given by the same update rule can have different effects on the overall network function. For example, if we increase the magnitude of a weight vector without changing the overall network function, the same update rule will result in a smaller effective learning rate for this weight vector. While this problem can be alleviated by L2 weight decay or solved by the proposed spherical weight optimization, a similar problem also exists for the softmax layer. As discussed in Sec. 4, we can scale the logits by a positive factor without changing the predictions of the model, whereas the gradient backpropagated from it can vary greatly, which motivates the idea of batch-normalized softmax. 
Regularizing softmax is important for fixing the problem we have identified. We will explain more on this motivation in the paper.\n\n3. In the introduction and Section 2.2, authors claim that weight decay has a significant effect on the generalization performance of DNNs. I wonder if authors can refer to any work on this. My own experience and several empirical works have suggested that weight decay does not improve generalization significantly.\n\nL2 weight decay is widely used in state-of-the-art models for image classification, examples include residual networks (He et al., 2016), Inception/Xception (Chollet, 2016), and squeeze-and-excitation networks (Hu et al., 2017). To our knowledge, it is also used in language models, although it is not always stated in papers (e.g., see the code at https://github.com/zihangdai/mos). In this work, we find that a major function of L2 weight decay is to implicitly normalize weight vectors (see Eq. (7)), in order to keep the effective learning rate from decreasing undesirably. As a result, L2 weight decay may not be very useful when decreasing learning rate is not a problem.", "3. I am not confident that the presented experimental validation is fair. First, the original WRN paper and many other papers with ResNets used weight decay of 0.0005 and not 0.001 or 0.002 as used for SGD in this paper. It is unclear why this setting was changed. One could just use \\alpha_0 = 0.05 and \\lambda = 0.0005.\n\nOur experiments were based on a TensorFlow implementation of WRN available at https://github.com/tensorflow/models/tree/master/research/resnet. We have recently compared this implementation to a PyTorch implementation provided by the WRN paper at https://github.com/szagoruyko/wide-residual-networks, and found the following differences:\n+--------------------------------------+-------------------------+-----------------------------------------------+\n| | TensorFlow impl. | PyTorch impl. |\n+--------------------------------------+-------------------------+-----------------------------------------------+\n| Input standardization | Per sample | Per dataset |\n+--------------------------------------+-------------------------+-----------------------------------------------+\n| Skip conn. between stages | Avg. pooling | 1 by 1 conv. |\n+--------------------------------------+-------------------------+-----------------------------------------------+\n| Nonlinearity | Leaky ReLU (0.1) | ReLU |\n+--------------------------------------+-------------------------+-----------------------------------------------+\n| L2 regularization | Weights only | Weights, scales and biases of BN |\n+--------------------------------------+-------------------------+-----------------------------------------------+\n\nThere are other subtle differences that are not listed here, such as different parameter initializations. Due to these differences, the two implementations may have different optimal hyperparameter configurations. In the PyTorch implementation, they use the initial learning rate and weight decay configuration of (0.1, 0.0005) with a multi-step learning rate schedule. In our preliminary experiments, we find the configuration of (0.1, 0.001) slightly better than (0.1, 0.0005) with the cosine learning rate schedule we use, but the results are very similar. We use the configuration of (0.05, 0.002) only to show that the effective learning rate of weights stays the same as long as the product of learning rate and weight decay stays the same (see Eq. 
(7) and Fig.1(b)), though the performance is also very close to that of (0.1, 0.001) due to the same effective learning rate.", "Dear reviewer,\n\nThank you for your comments.\n\n1. The paper is missing analysis of some important related works. From my personal experiments, restricting w_i to have L2 norm of 1, i.e., to be +-1 leads to worse generalization. One reason for this is that weight decay is not really functioning since it cannot move w_i to 0 or make its amplitude any smaller. Please correct me if I misunderstand something here. The presence of +-1 weights moves us to the area of low-precision NNs.\n\nThank you for pointing out the related works on normalized gradient, we will discuss about them in the paper. We note here the difference between the normalization done by ND-Adam and gradient normalization. ND-Adam normalizes the gradient by a running average of its magnitude, but this is just something inherited from Adam. In ND-Adam, we further normalize the input weight vector of each hidden unit, as a more precise way to produce the normalization effect of L2 weight decay (see Eq. (7) and (14)).\n\nWe agree with your point that restricting w_i to have L2 norm of 1 can lead to worse generalization. And the reason is exactly as you stated, L2 weight decay does not work on the restricted weights. Our analysis in Sec 2.2 shows that L2 weight decay implicitly normalizes weight vectors, and thus keeps the effective learning rate (defined as |\\delta w|/|w| in Sec. 2.2) from decreasing undesirably. For SGD, restricting weight norms to 1 eliminates the normalization effect and leads to smaller effective learning rate than expected. For ND-Adam, on the other hand, the effective learning rate can be controlled in a more precise way (see Eq. (14)) with normalized weight vectors, as we replace L2 weight decay with the proposed spherical weight optimization.\n\nBy normalizing a weight vector, we only restrict its norm, rather than the individual weights. So each weight can have any real value between -1 and 1 (as long as the vector norm is 1), instead of just +-1. A trainable scaling factor can be multiplied to the weight vector in case other norm values than 1 are necessary, and the scaling factor of batch normalization can be shared for this purpose (see Sec. 3.3). Therefore, normalizing weight vectors does not reduce the expressiveness of the model. On the other hand, the expressiveness of low-precision NNs, such as binary networks, are often significantly weakened as a trade-off for consuming less computational resources.\n\n2. \"Moreover, the magnitude of each update does not depend on the magnitude of the gradient. Thus, ND-Adam is more robust to improper initialization, and vanishing or exploding gradients.\" If the magnitude of each update does not depend on the magnitude of the gradient, then the algorithm heavily depends on the learning rate. Otherwise, it does not have any means to approach the optimum in a reasonable number of steps *when* it is initialized very / unreasonably far from it. The claim of your second sentence is not supported by the paper.\n\nThe first sentence of the quote is not clear enough, and thank you for pointing it out. For Adam and ND-Adam, each parameter update is normalized by a running average of the gradient magnitude, thus is less susceptible to small/large gradient magnitude than SGD. In many cases, such as this one, both Adam/ND-Adam and SGD work best with a decaying learning rate rather than a constant one. 
Indeed, Adam with a constant learning rate is sometimes preferred over SGD due to better optimization performance, such as for GAN training. However, as clarified above, it is a property inherited from Adam. For ND-Adam, we emphasize the technique we propose (i.e., spherical weight optimization) to optimize the directions of weight vectors, which normalizes each weight vector and projects the gradient onto a unit sphere, such that the effective learning rate can be controlled more precisely. Moreover, since we only optimize the direction of each weight vector and keep its magnitude constant, the initial magnitude doesn't matter for ND-Adam as it does for SGD. We will explain it more clearly in the paper.", "Dear reviewer,\n\nThank you for your comments.\n\n1. I found the paper very hard to follow. The authors could improve the clarity of the paper greatly by listing their contribution clearly for readers to digest. The authors also combined the proposed method with a few existing deep learning tricks in the paper. All those tricks that, ie. section 3.3 and 4, should go into the background section.\n\nThanks for the suggestion, we will list our contribution in the introduction. We also clarify the contribution as follows. We first identify two problems that degrade the generalization performance of Adam (Sec. 2): 1) the direction of Adam update does not lie in the span of historical gradients, which can lead to drastically different solutions than SGD in some cases (Wilson et al., 2017), and 2) the effective learning rates (defined as |\\delta w|/|w| in Sec. 2.2) of both Adam and SGD tend to decrease as the norms of weight vectors increase during training, and lead to sharp local minima that do not generalize well. We further show that, when combined with SGD, L2 weight decay can implicitly and approximately normalize weight vectors, such that the effective learning rate is no longer dependent on the norms of weight vectors (see Eq. (7)). The normalization view of L2 weight decay provides a more concrete explanation for how it works for DNNs. Next, we fix the first problem by adapting the learning rate to each weight vector, instead of each individual weight, such that the direction of the gradient is preserved. We fix the second problem by explicitly normalizing each weight vector, which produces the normalization effect of L2 weight decay in a more precise way (see Eq. (14)). Finally, we propose batch-normalized softmax to regularize the learning signal backpropagated from the softmax layer.\n\nSec. 3.3 explains the relationship between the proposed spherical weight optimization (Sec. 3.2) and batch/weight normalization. For instance, we show that the scaling factors of BN can be shared with ND-Adam as the scales of weight vectors. To our knowledge, BN-Softmax described in Sec. 4 is a simple but new idea, and it works very well for Adam and ND-Adam.\n\n2. Overall, the only contribution of the paper seems to be the ad-hoc modification to Adam in Eq. (9). Why is this a reasonable modification? Do we expect this modification to fail in any circumstances? The experiments on CIFAR dataset and one CNN architecture do not provide enough evidence to show the proposed method work well in general.\n\nThe modifications to Adam are in Eq. (9), (12) and (13b). Here we explain the rationale behind these modifications. The updates of SGD lie in the span of historical gradients, whereas it is not the case for Adam. 
This difference between Adam (as well as other adaptive gradient methods) and SGD has been discussed in Wilson et al., 2017, where they show it can lead to drastically different but worse solutions compared to SGD. Eq. (9) eliminates this difference, while keeping the adaptive learning rate scheme intact for weight vectors. Eq. (12) projects the gradient of each weight vector onto a unit sphere, and Eq. (13b) ensures the weight vector stays on the unit sphere. This modification is designed to distill the normalization effect of L2 weight decay and apply it to Adam. The latter problem is also addressed by a concurrent work submitted to ICLR 2018 (https://openreview.net/forum?id=rk6qdGgCZ).", "Code can be found at https://github.com/zj10/ND-Adam.", "Thank you for your comments.\n\nIn https://arxiv.org/pdf/1707.04822.pdf, they normalize the gradient of each step to keep only its direction. However, the gradient is further multiplied by individually adapted step sizes to form the actual updates, thus changing the direction of the gradient.\n\nIn this work, we normalized the input weight vector of each hidden unit, rather than the gradient, at the end of each step. We also adapt the learning rate to weight vectors instead of individual weights, in order to preserve the direction of gradient.", "There is probably a missing related work: https://arxiv.org/pdf/1707.04822.pdf\n\nIn that work, the gradient is normalized to preserve the direction, while the adaptive step size (Adam) is used. This sounds exactly match your paper title. Could you elaborate the difference if there is any?\n" ]
[ -1, 5, 5, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, 4, 5, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "HkcU1LJfM", "iclr_2018_HJSA_e1AW", "iclr_2018_HJSA_e1AW", "iclr_2018_HJSA_e1AW", "iclr_2018_HJSA_e1AW", "Bk_mQkcgM", "Bk_mQkcgM", "S1-Kfe5lM", "Bk_mQkcgM", "Bk_mQkcgM", "SJ_kd7ixf", "iclr_2018_HJSA_e1AW", "r1864VYlG", "iclr_2018_HJSA_e1AW" ]
iclr_2018_SybqeKgA-
On Batch Adaptive Training for Deep Learning: Lower Loss and Larger Step Size
Mini-batch gradient descent and its variants are commonly used in deep learning. The principle of mini-batch gradient descent is to use a noisy gradient calculated on a batch to estimate the true gradient, thus balancing the computation cost per iteration against the uncertainty of the noisy gradient. However, the batch size is a fixed hyper-parameter that must be set manually before training the neural network. Yin et al. (2017) proposed a batch adaptive stochastic gradient descent (BA-SGD) that can dynamically choose a proper batch size as learning proceeds. We extend BA-SGD to the momentum algorithm and evaluate both BA-SGD and the batch adaptive momentum (BA-Momentum) on two deep learning tasks, from natural language processing to image classification. Experiments confirm that batch adaptive methods can achieve a lower loss than mini-batch methods after scanning the same number of epochs of data. Furthermore, our BA-Momentum is more robust against larger step sizes, in that it can dynamically enlarge the batch size to reduce the greater uncertainty brought by larger step sizes. We also identified an interesting phenomenon, the batch size boom. The code implementing the batch adaptive framework is now open source and applicable to any gradient-based optimization problem.
rejected-papers
The reviewers generally thought the proposed algorithm was a straightforward extension of Yin et al., 2017, and not enough for a new paper. They also objected to a lack of test results (to show generalization), but the authors did provide these in their revision. Pros: + Adaptive batch sizing is useful, especially if the larger batches license parallelization. Cons: - Small, incremental change to the algorithm from Yin et al., 2017. - Test performance did not improve over well-tuned momentum optimization, which limits the appeal of the method.
train
[ "HkvBT03Jf", "HktgWy7xM", "r1b1AtYlG", "SyW3N3jQf", "rJkjw8sMz", "Byg8vUiMM", "H1ilDIiMG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "The authors propose extending the recently-proposed adaptive batch-size approach of Yin et al. to an update that includes momentum, and perform more comprehensive experiments than in the Yin et al. paper validating their approach.\n\nThe basic idea makes a great deal of intuitive sense: inaccurate gradient estimates are fine in early iterations, when we're far from convergence, and accurate estimates are more valuable in later iterations, when we're close. Finding the optimal trade-off between computational cost and expected decrease seems like the most natural way to accomplish this, and this is precisely what they propose. That said, I'm not totally convinced by the derivation of sections 2 and 3: the Gaussian assumption is fine as a heuristic (and they don't really claim that it's anything else), but I don't feel that the proposed algorithm really rests on a solid theoretical foundation.\n\nThe extension to the momentum case (section 3) seems to be more-or-less straightforward, but I do have a question about equation 15: am I misreading this, or is it saying that the variance of the momentum update \\mathcal{P} is the same as the variance of the most recent minibatch? Shouldn't it depend on the previous terms which are included in \\mathcal{P}?\n\nI'm also not convinced by the dependence on the \"optimal\" objective function value S^* in equation 6. In their algorithm, they take S^* to be zero, which is a good conservative choice for a nonnegative loss, but the fact that this quantity is present in the first place, as a user-specified parameter, makes me nervous, since even for a nonnegative loss, the optimum might be quite far from zero, and on a non-convex problem, the eventual local optimum at which we eventually settle down may be further still.\n\nAlso, the \"Robbins 2007\" reference should, I believe, be \"Robbins and Monro, 1951\".\n\nThese are all relatively minor issues, however. My main criticism is that the experiments only report results in terms of *training* loss. The use of adaptive batch sizes does indeed appear to result in faster convergence in terms of training loss, but the plots are in log scale (which I do think is the right way to present it), so the difference is smaller in reality than it appears visually. To determine whether this improvement in training performance is a *real* improvement, I think we need to see the performance (in terms of accuracy, not loss) on held-out data.\n\nFinally, as the authors mention in the final paragraph of their conclusion, some recent work has indicated that large-batch methods may generalize worse than small-batch methods. They claim that, by using small batches early and large batches late, they may avoid this issue, and I don't necessarily disagree, but I think an argument could be made in the opposite direction: that since the proposed approach becomes a large-batch method in the later iterations, it may suffer from this problem. I think that this is worth exploring further, and, again, without results on testing data being presented, a reader can't make any determination about how well the proposed method generalizes, compared to fixed-size minibatches.\n", "Overall, the manuscript is well organized and written with solid background knowledge and results to support the claim of the paper.  
The authors borrow the idea from a previously published work and claim that their contributions are twofold: (1) extend batch adaptive SGD to adaptive momentum, and (2) apply the algorithms to complex neural network problems (while the previous paper only demonstrates them on simple neural networks).  In this regard, it does not show much novelty.  Several issues should be addressed to improve the quality of the paper:  \n
1) The paper has demonstrated that the proposed method exhibits fast convergence and lower training loss.  However, the test accuracy is not shown.  This makes it hard to justify the effectiveness of the proposed method.  \n
2) From Fig. 4(b), it shows that the batch size is updated in every iteration.  The reviewer wonders whether it is too frequent.  Moreover, the paper does not explicitly show the computation cost of computing the batch size. \n3) The comparison of other methodologies seems not fair.  All the compared methods adopt a fixed batch size, but the proposed method uses an adaptive batch size.  The paper can compare the proposed method with adaptive batch size in intuitive settings, e.g., small batch size in the beginning of training and larger batch size later.\n4) The font size is too small in some figures, e.g., Figure 7(a).\n", "The paper proposes a generalization of an algorithm by Yin et al. (2017), which performs SGD with adaptive batch sizes. The present paper generalizes the algorithm to SGD with momentum. Since the original algorithm was already formulated with a general utility function, the proposed algorithm is similar in structure but replaces the utility function so that it takes momentum into account. Experiments on an image classification task show improvements in the training loss. However, no test accuracies are reported and the learning curves have suspicious artifacts, see below. Experiments on a relation extraction task show little improvement over SGD with momentum and constant batch size.\n\n\nCOMMENTS:\n\nThe paper discusses a relevant issue. While adaptive learning algorithms are popular in deep learning, most algorithms adapt the learning rate or the momentum coefficient, but not the batch size. It appears to me that the main idea and the overall structure of the proposed algorithm is the same as in the one published by Yin et al. (2017), and that only few changes were necessary to include momentum. Given the incremental process, I find the presentation unnecessarily involved, and experiments not convincing enough.\n\nConcerning the presentation, the paper dedicates two full pages on a review of the algorithm by Yin et al. (2017). The first page of this review states that, for large enough batch sizes, the change of the objective function in SGD is normal distributed with a variance that is inversely proportional the batch size. It seems to me that this is a direct consequence of the central limit theorem. The derivation, however, is quite technical and introduces some quantities that are never used (e.g., $\\vec{\\xi}_j$ is never used individually, only the combined term $\\epsilon_t$ defined below Eq. 12 is). The second page of the review seems to discuss the main part of the algorithm, but I could not follow it. First, a \"state\" $s_t$ (also written as $S$) is introduced, which, according to the text, is \"the objective value\", which was earlier denoted by $F$. Nevertheless, the change of $s_t$, Eq. 5, appears to obey a different probability distribution than the change of $F$. The paper provides a verbal explanation for this discrepancy, saying that it is possible that $S$ is first reduced to the minimum $S^*$ of the objective and then increased again. However, in my understanding, the minimum of the objective is only realized at a singular point in parameter space. Crossing this point in an update step should have zero probability as long as the model has more than one parameter. The explanation also does not make it clear why the argument should apply to $S$ (or $s$) but not to $F$.\n\nPage 5 provides pseudocode for the proposed algorithm. However, I couldn't find an explanation of the code. 
The code suggests that, for each update step, one gradually increases the batch size until it becomes larger or equal than a running estimate of the optimal batch size. While this may be a plausible strategy in practice, it seems to have a bias that is not addressed in the paper: the algorithm recalculates a noisy estimate of the optimal batch size after each increase of the batch size, and it terminates as soon as the noisy estimate happens to be small enough, resulting in a bias towards a smaller than optimal batch size. A probably more important issue is that the algorithm is sequential and hard to parallelize, where parallelization is usually the main motivation to use larger batch sizes. As the gradient noise scales inversely proportional to the batch size, I don't see why increasing the batch size should be preferred over decreasing the learning rate unless optimizations with a larger batch size can be parallelized. The experiments don't compare the two alternatives.\n\nConcerning the experiments, it seems peculiar that the learning curves in Figure 1 remain at a constant value for a long time at the beginning of the optimization before they begin to drop. Do the authors understand this behavior? It could indicate that the magnitude of the random initialization was chosen too small. I.e., the parameters might have been initialized too close to zero, where the loss is stationary due to symmetries. Also, absolute values of the training loss can be deceptive since there is often no natural scale. A better indicator of convergence would be the test accuracy. The identification of the \"batch size boom\" is interesting.", "We would like to bring to the reviewers’ attention that several changes have been made in the latest revised paper. They are listed below.\n1. We add a comparison with the manually adjusted mini-batch method in Section 4.2, as one of the reviewers suggested. Figure 1, 2, 3 and the corresponding descriptions of results are all updated.\n2. Test accuracies achieved by each method are presented in Appendix D.\n3. We provide the cost of computing the optimal batch size in Appendix C.\n4. We add more clarifications of the proposed pseudocode in Section 3.3.\n", "Thank you for your detailed comments on our work.\n\nYou mentioned that the proposed algorithm in Page 5 might not rest on a solid theoretical foundation. We would like to make it clear that the algorithm is a trade-off in practice. In this algorithm, we aim to calculate the optimal batch size for each update step. When an optimal size is determined and it is larger than the current batch size, we need to add more instances to enlarge the batch. However, $s_t$, $\\mu_t$, $\\sigma_t$ will change every time we add more instances, leading to a different optimal size. Thus in practice, we can only gradually increase the batch size until it becomes larger than or equal to a running estimate of the optimal batch size.\n\nFor the extension to momentum, the variance of the $\\mathcal{P}_t$ is indeed the same as the variance of the most recent mini-batch, which is to be determined for the t-th iteration. This is because though the previous updates have their noises, their batches which respectively determined their noises have already been selected, thus their noises are no longer random variables but constants. 
This point has been clarified in the last paragraph in Page 4.\n\nAs for S^*, thank you for your suggestion, it is indeed a user-specified parameter and the user can specify the S^* in terms of the specific definition of the loss function.\n\nConcerning the test accuracy, we would like to make it clear that the very aim of this batch adaptive method is to achieve the lowest loss possible within a certain budget of training data. It is the model’s aim, but not an optimizer’s aim to pursue a higher test accuracy. At your request, we still provide the test accuracy of each experiment, which can be found in Appendix D of the latest revised version. The results show that the proposed batch adaptive methods achieve two best test accuracies in all four cases (i.e. SGD-based and momentum-based for two tasks), and in the other two cases, the test accuracies achieved by batch adaptive methods, 91.33% and 88.46%, are still very close to the best ones, 91.64% and 89.02% respectively. What's more, these results are realized in a self-adaptive way and require no fine-tuning, while the best accuracies in the other two cases are achieved by totally different fixed batch sizes, indicating a tuning process.\n\nFinally, you expressed your concern about the feasibility of our method avoiding the generalization degradation of large-batch training. In fact, Keskar et al. (2016) studied the generation gap of large batch training and proposed one solution that is to warm-start with certain epochs of the small-batch regime, and then use large batch for the rest of the training. They examined this solution and it worked. However, the number of epochs needed to warm start with small batch varies for different data sets, thus a batch adaptive method that can dynamically change the batch size against the characteristics of data might be the key to solving this problem. Anyway, it is just a possibility worth exploring further.\n", "We thank you for your constructive comments on our work.\n\nConcerning the test accuracy, we would like to make it clear that the very aim of this batch adaptive method is to achieve the lowest loss possible within a certain budget of training data. It is the model’s aim, but not an optimizer’s aim to pursue a higher test accuracy. At your request, we still provide the test accuracy of each experiment, which can be found in Appendix D of the latest revised version. The results show that the proposed batch adaptive methods achieve two best test accuracies in all four cases (i.e. SGD-based and momentum-based for two tasks), and in the other two cases, the test accuracies achieved by batch adaptive methods, 91.33% and 88.46%, are still very close to the best ones, 91.64% and 89.02% respectively. What's more, these results are realized in a self-adaptive way and require no fine-tuning, while the best accuracies in the other two cases are achieved by totally different fixed batch sizes, indicating a tuning process.\n\nAs for the cost of computing the optimal batch size, on average it takes up 1.03% and 0.61% of the total computing time per iteration for BA-Momentum and BA-SGD respectively on the image classification task, and the percentage on the relation extraction task is 1.31% and 0.92% for BA-Momentum and BA-SGD respectively. Computing the optimal batch size involves calculating some means and variances, and a binary search to find the $m^*$ that maximizes the utility function. 
Both operations take little time.\n\nYou suggest comparing the batch adaptive method with adaptive batch size in intuitive settings. We add one that doubles the batch size after certain epochs of training (see the revised version). This manually adjusted mini-batch method achieves a slightly higher test accuracy in one setting while it performs not so well in other three settings compared with our batch adaptive method. It demonstrates that this manual way of increasing batch size still requires manual setting to realize a satisfactory performance whereas our batch adaptive method is self-adaptive and it achieves a satisfactory test accuracy without fine-tuning.\n", "Thank you for the detailed comments.\n\nThe key idea of the extension to momentum is that we need to consider the past parameter updates when calculating the mean of change of objective value, while its variance is only determined by the $\\epsilon_t$, the noise at the t-th iteration. This is because, although the previous updates have their noises, their batches which respectively determined their noises have already been selected, thus their noises are no longer random variables but constants. In this sense, we argue that the extension to momentum might not be unnecessary.\n\nYou asked about the definition of a “state” $s_t$ in the algorithm from Yin et al. (2017) in Page 3. Due to the page limit, this algorithm may lack more specific illustrations. Here a “state” $s_t$ is defined as $s_t = - \\delta F / \\eta$, where $\\delta F$ is the change of objective value and $\\eta$ is the learning rate. We omitted the definition of this denotation in our paper. Now it has been added in the latest revised version. We apologize for the confusion.\n\nAs for the proposed algorithm in Page 5, it is indeed sequential. However, for parallelization, the largest possible batch size per iteration is limited by the computing resources. Thus our algorithm provides a way to examine whether it is the best time to update the parameters after a sequence of paralleled computations if we set the increment $m_0$ (a key parameter in the algorithm) to be the largest possible batch size that the computing resources allow.\n\nAs you mentioned, we did not compare increasing the batch size with decreasing the learning rate. However, decreasing the learning rate is not an alternative to our batch adaptive method, it is complimentary. The batch adaptive method actually finds the optimal batch size adapted to different learning rates and different data sets. No matter how the learning rate is set, kept constant or decreasing, the algorithm still attempts to find the optimal batch size for each iteration adapted to the experiment’s settings, since it takes the learning rate into account when deciding the optimal batch size, see Eq. 7,8,9. The experiment in Section 4.3 also suggests the batch adaptive method can dynamically adapt the batch size against different settings of the learning rate within a certain range.\n\nYou suggested that the magnitude of our random initialization is too small which causes the learning curves in Figure 1 and 2 remaining at constant values for several epochs at the start of the training, we scaled up the magnitude of random initialization, and then the curves drop much earlier than before, which can be seen in the latest revised version of the paper. 
We really appreciate your suggestion!\n\nConcerning the test accuracy, we would like to make it clear that the very aim of this batch adaptive method is to achieve the lowest loss possible within a certain budget of training data. It is the model’s aim, but not an optimizer’s aim to pursue a higher test accuracy. At your request, we still provide the test accuracy of each experiment, which can be found in Appendix D of the latest revised version. The results show that the proposed batch adaptive methods achieve two best test accuracies in all four cases (i.e. SGD-based and momentum-based for two tasks), and in the other two cases, the test accuracies achieved by batch adaptive methods, 91.33% and 88.46%, are still very close to the best ones, 91.64% and 89.02% respectively. What's more, these results are realized in a self-adaptive way and require no fine-tuning, while the best accuracies in the other two cases are achieved by totally different fixed batch sizes, indicating a tuning process.\n\nLastly, thank you for your interest in “batch size boom”.\n" ]
[ 5, 5, 4, -1, -1, -1, -1 ]
[ 3, 4, 3, -1, -1, -1, -1 ]
[ "iclr_2018_SybqeKgA-", "iclr_2018_SybqeKgA-", "iclr_2018_SybqeKgA-", "iclr_2018_SybqeKgA-", "HkvBT03Jf", "HktgWy7xM", "r1b1AtYlG" ]
iclr_2018_H1A5ztj3b
Super-Convergence: Very Fast Training of Residual Networks Using Large Learning Rates
In this paper, we show a phenomenon, which we named ``super-convergence'', where residual networks can be trained using an order of magnitude fewer iterations than is used with standard training methods. The existence of super-convergence is relevant to understanding why deep networks generalize well. One of the key elements of super-convergence is training with cyclical learning rates and a large maximum learning rate. Furthermore, we present evidence that training with large learning rates improves performance by regularizing the network. In addition, we show that super-convergence provides a greater boost in performance relative to standard training when the amount of labeled training data is limited. We also derive a simplification of the Hessian Free optimization method to compute an estimate of the optimal learning rate. The architectures to replicate this work will be made available upon publication.
rejected-papers
The paper reports unusally rapid convergence of the ResNet-56 model on CIFAR-10 when a single cycle of a cyclic learning rate schedule is used. The effect is analyzed from several different perspectives. However, the reviewers were not convinced because the effect is only observed for one task, so they question the significance of the result. There was significant discussion of the paper by the reviewers and area chair before this decision was reached. Pros: + Paper illustrates a "super-convergence" phenomenon in which training of a ResNet-56 reaches an accuracy of 92.4% on CIFAR-10 in 10,000 iterations using a single cycle of a cyclic learning rate schedule, while a more standard piecewise-constant schedule reaches 91.2% accuracy in 80,000 iterations. + There was partial, independent replication of the results on other tasks reported on OpenReview. Cons: - In the paper, the effect is shown for only one architecture and one task. - In the paper, the effect is shown for only a single run. - There are no error bars to indicate which differences are significant.
train
[ "rJfAp3Zef", "H1yQ04YxG", "Hyn-NPJbz", "ByWgFx2Mf", "HkvgXwlfM", "SyXNU83Wf", "r1J5783bz", "rJJuxtxeG", "H1Df4xU0Z" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "public", "public", "author", "author", "author", "public" ]
[ "The paper discusses a phenomenon where neural network training in very specific settings can profit much from a schedule including large learning rates. Unfortunately, this paper feels to be hastily written and can only be read when accompanied with several references as key parts (CLR) are not described and thus the work can not be reproduced from the paper.\n\nThe main claim of the author hinges of the fact that in some learning problems the surface of the objective function can be very flat near the optimum. In this setting, a typical schedule with a decreasing learning rate would be a bad choice as the change of curvature must be corrected as well. However, this is not a general problem in neural network training and might not be generalizable to other datasets or architectures as the authors acknowledge.\n\nIn the end, the actual gain of this paper is only in the form of a hypothesis but there is only very little enlightenment, especially as the only slightly theoretical contribution in section 5 does not predict the observed behavior. \n\nPersonally i would not use the term \"convergence\" in this setting at all as the runs are very short and thus we might not be close to any region of convergence. Most of the plots shown are actually not converged and convergence in test accuracy is not the same as convergence in training loss, which is not shown at all. The results of smaller test error with larger learning rates on small training sets might therefore just be the inability of the optimizer to get closer to the optimum as steps are too long to decrease the expected loss, thus having a similar effect as early stopping.\n\nPros:\n- Many experiments which try to study the effect\nCons:\n-The described phenomenon seems to depend strongly on the problem surface and might never \nbe encountered on any problem aside of Cifar-10\n- Only single runs are shown, considering the noise on those the results might not be reproducible.\n-Experiments are not described in detail\n-Experiment design feels \"ad-hoc\" and unstructured\n-The role and value of the many LR-plots remains unclear to me.\n\nForm:\n- The paper does not maker clear how the exact schedules work. The terms are introduced but the paper misses the most basic formulas\n- Figures are not properly described, e.g. axes in Figures 3 a) and b)\n- Explicit references to code are made which require familiarity with the used framework(if at all published). ", "In this paper, the authors analyze training of residual networks using large cyclic learning rates (CLR). The authors demonstrate (a) fast convergence with cyclic learning rates and (b) evidence of large learning rates acting as regularization which improves performance on test sets – this is called “super-convergence”. However, both these effects are only shown on a specific dataset, architecture, learning algorithm and hyper parameter setting. \n\n\nSome specific comments by sections:\n\n2. Related Work: This section loosely mentions other related works on SGD, topology of loss function and adaptive learning rates. The authors mention Loshchilov & Hutter in next section but do not compare it to their work. The authors do not discuss a somewhat contradictory claim from NIPS 2017 (as pointed out in the public comment): http://papers.nips.cc/paper/6770-train-longer-generalize-better-closing-the-generalization-gap-in-large-batch-training-of-neural-networks.pdf\n\n3. 
Super-convergence: This is a well explained section where the authors describe the LR range test and how it can be used to understand potential for super-convergence for any architecture. The authors also provide sufficient intuition for super-convergence. Since CLRs were already proposed by Smith (2015), the originality of this work would be specifically tied to their application to residual units. It would be interesting to see a qualitative analysis on how the residual error is impacting super-convergence.\n\n4. Regularization: While Fig 4 demonstrates the regularization property, the reference to Fig 1a with better test error compared to typical training methods could simply be a result of slower convergence of typical training methods. \n5. Optimal LRs: Fig.5b shows results for 1000 iterations whereas the text says 10000 (seems like a typo in scaling the plot). Figs 1 and 5 illustrate only one cycle (one increase and one decrease) of CLR. It would be interesting to see cases where more than one cycle is required and to see what happens when the LR increases the second time.\n\n6. Experiments: This is a strong section where the authors show extensive reproducible experimentation to identify settings under which super-convergence works or does not work. However, the fact that the results only applies to CIFAR-10 dataset and could not be observed for ImageNet or other architectures is disappointing and heavily takes away from the significance of this work. \n\nOverall, the work is presented as a positive result in very specific conditions but it seems more like a negative result. It would be more appealing if the paper is presented as a negative result and strengthened by additional experimentation and theoretical backing.", "This paper discusses the phenomenon of a fast convergence rate for training resnet with cyclical learning rates under a few particular setting. It tries to provide an explanation for the phenomenon and a procedure to test when it happens. However, I don't find the paper of high significance or the proposed method solid for publication at ICLR.\n\nThe paper is based on the cyclical learning rates proposed by Smith (2015, 2017). I don't understand what is offered beyond the original papers. The \"super-convergence\" occurs under special settings of hyper-parameters for resnet only and therefore I am concerned if it is of general interest for deep learning models. Also, the authors do not give a conclusive analysis under what condition it may happen.\n\nThe explanation of the cause of \"super-convergence\" from the perspective of transversing the loss function topology in section 3 is rather illustrative at the best without convincing support of arguments. I feel most content of this paper (section 3, 4, 5) is observational results, and there is lack of solid analysis or discussion behind these observations.", ">It would be interesting to see a qualitative analysis on how the residual error is impacting super-convergence.\n\nResNet is not needed, actually. 
My experiments with HardNet local descriptor (see me previous public comment) use plain VGG-like architecture and still achieve some king of \"super-convergence\".", "I have chosen to reproduce elements of this paper as part of the ICLR 2018 Reproducibility Challenge: http://www.cs.mcgill.ca/~jpineau/ICLR2018-ReproducibilityChallenge.html\n\nThe key claim of this paper that I attempted to reproduce is that a Resnet-56 network can be trained to ~90% accuracy on Cifar-10 in just 10,000 steps with a Cyclical Learning Rate (CLR). I also wanted to confirm the baseline result that it would take 80,000 steps to train the same network to similar accuracy using a traditional multistep learning.\n\nThese results were presented in Figure 1A from the paper.\n\nI took two approaches when reproducings this work:\n\n1. I attempted to reproduce the work in Tensorflow using both the paper and the author's Caffe code as a guide.\n2. I attempted to reproduce the work using the author's Caffe code and GitHub instructions.\n\n\nReproducing with Tensorflow\n\nI have made my code available on GitHub at: https://github.com/JoshVarty/ReproducingSuperconvergence\n\nUsing Tensorflow I was able to weakly reproduce evidence of super-convergence.\n\nAfter 10,000 steps training with CLR, the network achieved ~85% accuracy. See: https://i.imgur.com/e9RXHl1.png\nAfter 20,000 steps training with multistep, the network achieved ~80% accuracy. See: https://i.imgur.com/PGZ9nlI.png\n\t\nAlthough these results do not quite align perfectly with those of the paper, I believe they support it. Although multistep training was run for 80,000 steps it did not improve after the first 20,000 steps. I was also unable to achieve accuracies over 90% as shown in the paper. I believe this may be due to the fact I was only able to use a mini-batch size of 125 compared to the author's mini-batch size of 1,000.\n\n\nReproducing with Caffe\n\nUsing the provided Caffe code, I was able to partially reproduce the results presented in the paper. \n\nFor baseline multistep learning, I achieved a test accuracy of 85%. See: https://i.imgur.com/8SaqJJ3.png\nFor CLR learning, I achieved a test accuracy of 91.2%. See: https://i.imgur.com/zVds4VF.png\n\nThe overall trend looks similar to that of the author's results, but the test accuracy of CLR does not quite match the expected results presented in the paper.\n\nPotential reasons for lack of reproduction\n\t- The author trained their network using an 8-GPU machine with a mini-batch size of 1,000. I used a batch size of 125 on a single K80 GPU.\n\t- Difference in Batch Normalization parameter settings. Currently investigating this here: https://github.com/lnsmith54/super-convergence/issues/2\n\n\nCorrections\n\n\t- Appendix A claims the first Conv Layer has stride=2, but the code provided uses stride=1. 
\n\t\n\nUndocumented Elements\n\nSome elements of the network were undocumented in the paper making it harder to reproduce:\n\n\t- While training, images were flipped left-to-right with 50% probability\n\t- All weights before ReLUs are initialized according to \"Delving Deep into Rectifiers.\" [1]\n\t- All weights before softmax are initialized according to \"Understanding the difficulty of training deep feedforward neural networks.\" [2]\n\t- Bias variables are initialized to zero.\n\t- Learning rate scaling on weights layers was 1\n\t- Learning rate scaling on bias layers was 2\n\t\n\nConclusion\n\nThere is evidence to suggest that super-convergence reproduces in some form on Cifar-10 with a Resnet-56 architecture. On a personal note, I will be incorporating Cyclical Learning Rates into future projects of mine.\n\t\n\t\n[1] https://arxiv.org/pdf/1502.01852v1.pdf\n[2] http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.207.2059&rep=rep1&type=pdf", "1. AnonReviewer1 comments and replies:\n\"Loshchilov & Hutter in next section but do not compare it to their work.\"\nWe stated that Loshchilov & Hutter's form for CLR (called SGDR) does not work for super-convergence. The paper now states this more clearly in the Related Works Section.\n\n\"contradictory claim from NIPS 2017 (as pointed out in the public comment)\"\nWe do not consider the claims in Hoffer, et al. (2017) contradictory. They show that a longer training is a form of regularization, which doesn't contradict the regularization effects of large learning rates any more than it contradicts the use of dropout for regularization. From a practical perspective, training longer has the obstacle of an even larger computational burden, hence other forms of regularization are preferable.\n\n\"It would be interesting to see a qualitative analysis on how the residual error is impacting super-convergence.\"\nWe don't think it is the residual nature of the networks that are relevant but how batch norm stabilizes the training in the presence of large learning rates causing gradient noise. We discuss this more clearly now.\n\n\"Fig 1a with better test error compared to typical training methods could simply be a result of slower convergence of typical training methods.\" \nHoffer, et al. (2017) implies that longer training would improve the slower convergence rate in Fig 1a. We actually let the training for the typical training schedule go to 120,000 iterations but the test accuracy was higher at 80,000 so Fig 1a shows longer training in a better light. \n\n\"It would be interesting to see cases where more than one cycle is required and to see what happens when the LR increases the second time.\"\nThis has been done. For example, see \"Snapshot ensembles: Train 1, get m for free\" arXiv:1704.00109. Most of our experiments were performed last winter, prior to this paper but we saw similar results as they described.\n\n\"Overall, the work is presented as a positive result in very specific conditions but it seems more like a negative result.\"\nThe super-convergence paper presents empirical evidence of a new phenomenon that is not yet adequately explained by the literature on SGD, as such it is a positive result. The Discussion Section should make the impact of this work clearer.\n\nThank you for your comments and the opportunity to address your concerns.\n\n\n2. 
AnonReviewer2 comments and replies:\n\"the work cannot be reproduced from the paper.\"\nArchitectures and code will be available on github.com.\n\n\"convergence in training loss, which is not shown at all\"\nThe training loss is shown in Figure 4. Furthermore, we examined the training loss for all of the figures but did not include them in most of the figures for readability and it did not provide any additional insights. \n\n\"-The described phenomenon seems to depend strongly on the problem surface and might never be encountered on any problem aside of Cifar-10\"\n\"- Only single runs are shown, considering the noise on those the results might not be reproducible.\"\nIf the purpose of the paper was to demonstrate another new technique to obtain a half a percent improvement in results, we would have averaged over 10 runs to show that the half-percent improvement. Also, the limitation of the effect to only Cifar would heavily detract from the practical significance of this paper. However, that is tangential to the primary purpose of this paper. Instead, this super-convergence paper presents empirical evidence of a new phenomenon that is not yet adequately explained by the literature on SGD and regularization.\n\n\"-Experiments are not described in detail.\"\n\"-Experiment design feels \"ad-hoc\" and unstructured\"\n\"-The role and value of the many LR-plots remains unclear to me.\"\n\"- The paper does not maker clear how the exact schedules work. The terms are introduced but the paper misses the most basic formulas\"\nArchitectures and code will be available on github.com.\n\n\"- Figures are not properly described, e.g. axes in Figures 3 a) and b)\"\nThe caption for Figure 3 was amended. This figure was borrowed with permission from \"Qualitatively characterizing neural network optimization problems.\" arXiv:1412.6544 (2014) and a full description is available in that paper.\n\n\"- Explicit references to code are made which require familiarity with the used framework(if at all published).\" \nArchitectures and code will be available on github.com.\n\n3. AnonReviewer3 comments and replies:\n\"I don't understand what is offered beyond the original papers.\"\n\"I am concerned if it is of general interest for deep learning models.\"\n\"Also, the authors do not give a conclusive analysis under what condition it may happen.\"\n\"a lack of solid analysis or discussion behind these observations.\"\n\nWe believe the significance of this paper and how it is intertwined with recent discussions in the literature on SGD and generalization is made clearer by the Discussions Section.", "Thank you to all the reviewers for your time and effort in reading our paper.\n\nAlthough many papers in the deep learning literature suggest new techniques for training deep networks, we did not intend for this paper to be of this kind. Instead, this super-convergence paper presents empirical evidence of a new phenomenon that is not yet adequately explained by the literature on SGD. While super-convergence might be of some practical value, the primary purpose of this paper is to provide empirical support and theoretical insights to the active discussions in the literature on SGD and understanding generalization. Based on the reviewers' comments, it is apparent that the relevance of super-convergence to ongoing discussions in the literature is unclear. 
We have rewritten the Discussion Section and revised various other parts of the paper to more explicitly show how our results are relevant to ongoing discussions in the literature on SGD and generalizations. We hope the response to super-convergence is similar to the reaction to the initial report of network memorization, which sparked an active discussion within the deep learning research community on better ways of understanding the factors in SGD leading to solutions that generalize well. \n\n", "Jastrz{\\k{e}}bski, et al. [1] show that the larger the ratio of the learning rate to the batch size, the greater the noise during training and the better the network generalizes. They also demonstrate that instead of increasing the learning rate via cyclical learning rates, one obtains a similar effect by decreasing the batch size. Independently, Chaudhari, et al. [2] show that the entropy of the steady-state distribution of the weights scales linearly with the ratio of the learning rate over two times the batch size and this ratio completely determines the strength of SGD's regularization. Although the authors don't suggest a cycle, they do recommend that this ratio be large in practice, which coincides with our empirical results.\n\n1. Jastrzębski, Stanisław, Zachary Kenton, Devansh Arpit, Nicolas Ballas, Asja Fischer, Yoshua Bengio, and Amos Storkey. \"Three Factors Influencing Minima in SGD.\" arXiv preprint arXiv:1711.04623 (2017).\n2. Chaudhari, Pratik, and Stefano Soatto. \"Stochastic gradient descent performs variational inference, converges to limit cycles for deep networks.\" arXiv preprint arXiv:1710.11029 (2017).\n", "I have done an additional experiement in different domain. \n\n1) Task: local patch descriptor learning. \nArchitecture: Siamese network, output is 128-channel descriptor, which is L2-normed.\nThen triplet margin loss applied to the triplet of : anchor, positive, negative.\n\n2)Networks itself is VGG-style:\n32x32 grayscale, locally normalized patch -> 32C3-32C3-64C3/2-64C3-128C3/2-128C8 - L2norm\nNo residual connections, no bottlenecks, but batch-normalization after each conv layer.\n\n3)Dataset: 5M triplets, randomly sampled from 100K patches from Brown dataset \nhttp://phototour.cs.washington.edu/patches/default.htm\n\n4) lr_rate decay is linear from max_lr to 0 , as it work better than standard \"step\" one. \n\n5) Metric is mAP two view matching on two other datasets: W1BS and HPatches. So metric really tests generatlization\n\nSo, results:\n\nLR policy | Iterations | mAP\n\nLinear, from 0.1 to 0 | 50K | 0.1065\nLinear, from 50 to 0 | 5K | 0.1087\n(0.9 * abs(sin)) * + 0.1) *(Linear, from 50 to 0)| 5K | 0.1100\n\nSo I am not sure, if it can be called \"super-convergence\" in authors sense, but large learning rate lead to improved performance in my case + \"cyclic modulation\" makes effect bigger.\n\n\nFirst, batch normalization seems necessary part, because it basically allows to have huge weights, which does not influence output. And at the end of my network there is L2norm, so everything is always fine-scaled. \n\nSecond, if the large weights are one of the responsible parts, may be recent paper on \"Feature Incay\" is relevant https://arxiv.org/pdf/1705.10284.pdf \nIn short, authors argue, that large values of the features contrary to common practice, lead to better generalization. But they don`t tell anything about convergency speed.\n\nThird, unfortunately, result of the faster converged network was _worse_ on real-world with matching. 
\n\nThe last, but not least, this paper contradicts recent NIPS oral \"Train longer, generalize better: closing the\ngeneralization gap in large batch training of neural\nnetworks\" https://arxiv.org/pdf/1705.08741.pdf, where authors show that longer training is important for generalization. \n\nSolving this contradiction could lead to new interesting results.\n\n\n****\nPaper overall is good written and opens an interesting discussion. I would vote for poster acceptance" ]
[ 4, 4, 4, -1, -1, -1, -1, -1, -1 ]
[ 3, 4, 3, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_H1A5ztj3b", "iclr_2018_H1A5ztj3b", "iclr_2018_H1A5ztj3b", "H1yQ04YxG", "iclr_2018_H1A5ztj3b", "iclr_2018_H1A5ztj3b", "iclr_2018_H1A5ztj3b", "iclr_2018_H1A5ztj3b", "iclr_2018_H1A5ztj3b" ]
iclr_2018_ByJ7obb0b
Understanding and Exploiting the Low-Rank Structure of Deep Networks
Training methods for deep networks are primarily variants on stochastic gradient descent. Techniques that use (approximate) second-order information are rarely used because of the computational cost and noise associated with those approaches in deep learning contexts. However, in this paper, we show how feedforward deep networks exhibit a low-rank derivative structure. This low-rank structure makes it possible to use second-order information without needing approximations and without incurring a significantly greater computational cost than gradient descent. To demonstrate this capability, we implement Cubic Regularization (CR) on a feedforward deep network with stochastic gradient descent and two of its variants. There, we use CR to calculate learning rates on a per-iteration basis while training on the MNIST and CIFAR-10 datasets. CR proved particularly successful in escaping plateau regions of the objective function. We also found that this approach requires less problem-specific information (e.g. an optimal initial learning rate) than other first-order methods in order to perform well.
rejected-papers
The reviewers thought that the idea of trying to exploit low-rank structure in the loss gradients of a feedforward network to improve training was interesting; however, they expressed many concerns about the clarity of the presentation, the quality of the empirical evaluation, and the significance of the result (since the tests were not done on an architecture anywhere near state-of-the-art). Because the authors did not participate in the discussion period, none of these concerns were addressed. Pros: + Promising idea for new approaches to optimization. Cons: - Unclear notation for the intended machine learning audience. - The algorithm should be illustrated using pseudocode. - Limited significance if the method is only usable with purely feedforward networks. - Limited empirical evaluation: positive results only if weights are poorly initialized.
train
[ "H1b4hwrxf", "HyyKwSvlG", "Hk22fOybz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "[Main comments]\n\n* The authors made a really odd choice of notation, which made the equations hard to follow.\nApparently, that notation is used in differential geometry, but I have never seen it used in\nan ML paper. If you talk about outer product structure, show some outer products!\n\n* The function f that the authors differentiate is not even defined in the main manuscript!\n\n* The low-rank structure they describe only holds for a single sample at a time.\nI don't see how this would be \"understanding low rank structure of deep networks\"\nas the title claims... What is described is basically an implementation trick.\n\n* Introducing cubic regularization seems interesting. However, either some\nextensive empirical evidence or some some theoretical evidence that this is useful are needed.\nThe present paper has neither (the empirical evidence shown is very limited).\n\n[Other minor comments]\n\n* Strictly speaking Adagrad has not been designed for Deep Learning.\nIt is an online algorithm that became popular in the DL community later on.\n\n* \"Second derivatives should suffice for now, but of course if a use arose for\nthird derivatives, calculating them would be a real option\"\n\nThat sentence seems useless.\n\n* Missing citation:\n\nGradient Descent Efficiently Finds the Cubic-Regularized Non-Convex Newton Step. \nYair Carmon, John Duchi.\n", "Summary: \nThis paper shows the feedforward network (with ReLU activation functions in the hidden layers, softmax at the output, and cross entropy-loss) exhibits a low-rank derivative structure, which is able to use second-order information without approximating Hessian. For numerical experiments, the author(s) implemented Cubic Regularization on this network structure with SGD (on MNIST and CIFAR10) and Adagrad and Adadelta (on MNIST). \n\nComments: \nThe idea of showing low rank structure which makes it possible to use second-order information without approximations is interesting. This feedforward network with ReLU activation, output softmax and cross-entropy-loss is well-known structure for neural networks. \n\nI have some comments and questions as follows. \n\nHave you tried to apply this to another architecture of neural networks? Do you think whether your approach is able to apply to convolutional neural networks, which are widely used? \n\nThere is no gain on using CR with Adam as you mention in Discussion part of the paper. Do you think that CR with SGD (or with Adagrad and Adadelta) can be better than Adam? If not, why do people should consider this approach, which is more complicated, since Adam is widely used? \n\nThe author(s) should do more experiments to various dataset to be more convincing. \n\nI do like the idea of the paper, but at the current state, it is hard to evaluate the effective of this paper. I hope the author(s) could provide more experiments on different datasets. I would suggest to also try SVHN or CIFAR100. And if possible, please also consider CNN even if you are not able to provide any theory. \n", "\nThis paper proposes to set a global step size gradient-based optimization algorithms such as SGD and Adam using second order information. Instead of using second-order information to compute the update directly (as is done in e.g. Newton method), it is used to estimate the change of the objective function in a pre-computed direction. 
This is computationally much cheaper than full Newton because (a) the Hessian does not need to be inverted (b) vector-Hessian multiplication is only O(#parameters) for a single sample.\n\nThere are many issues.\n\n### runtime and computational issues ###\n\nFirstly, the paper does not clearly specify the algorithm it espouses. It states: \"once the step direction had been determined, we considered that fixed, took the average of gT Hg and gT ∇f over all of the sample points to produce m (α) and then solved for a single αj value\" You should present pseudo-code for this computation and not leave the reader to determine the detailed order of computation for himself. As it stands, it is not only difficult for the reader to infer these details, but also laborious to determine the computational cost per iteration on some network the reader might wish to apply your algorithm to. Since the paper discusses the computational cost of CR only in vague terms, you should at least provide pseudo-code.\n\nSpecifically, consider equation (80) at the very end of the appendix and consider the very last term in that equation. It contains d^2v/dwdw. This is a \"heavy\" term containing the second derivative of the last hidden layer with respect to weights. You do not specify how you compute this term or quantities involving this term. In a ReLU network, this term is zero due to local linearity, but since you claim that your algorithm is applicable to general networks, this term needs to be analyzed further.\n\nWhile the precise algorithm you suggest is unclear, it's purpose is also unclear. You only use the Hessian to compute the g^THg terms, i.e. for Hessian-vector multiplication. But it is well-known that Hessian-vector multiplication is \"relatively cheap\" in deep networks and this fact has been used for several algorithms, e.g. http://www.iro.umontreal.ca/~lisa/pointeurs/ECML2011_CAE.pdf and https://arxiv.org/pdf/1706.04859.pdf. How is your method for computing g^THg different and why is it superior? \n\nAlso note that the low-rank structure of deep gradients is well-known and not a contribution of this paper. See e.g. https://www.usenix.org/system/files/conference/atc17/atc17-zhang.pdf\n\n### Experiments ###\n\nThe experiments are very weak. In a network where weights are initialized to sensible values, your algorithm is shown not to improve upon straight SGD. You only demonstrate superior results when the weights are badly initialized. However, there are a very large number of techniques already that avoid the \"SGD on ReLU network with bad initial weights\" problem. The most well-known are batch normalization, He initialization and Adam but there are many others. I don't think it's a stretch to consider that problem \"solved\". Your algorithm is not shown to address any other problems, but what's worse is that it doesn't even seem to address that problem well. While your learning curves are better than straight SGD, I suspect they are well below the respective curves for He init or batchnorm. In any case, you would need to compare your algorithm against these state-of-the-art methods if your goal is to overcome bad initializations. Also, in appendix A, you state that CR can't even address weights that were initialized to values that are too large.\n\nYou claim that your algorithm helps with \"overcoming plateaus\". While I have heard the claim that deep network optimization suffers from intermediate plateaus before, I have not seen a paper studying / demonstrating this behavior. 
I suggest you cite several papers that do this and then replicate the plateau situations that arose in those papers and show that CR overcomes them, instead of resorting to a platenau situation that is essentially artificially induced by intentionally bad hyperparameter choices.\n\nI do not understand why your initial learning rate for SGD in figures 2 and 3 (0.02 and 0.01 respectively) differ so much from the initial learning rate under CR. Aren't you trying to show that CR can find the \"correct\" learning rate? Wouldn't that suggest that initial learning rate for SGD should be comparable to the early learning rates chosen by CR? Wouldn't that suggest you should start SGD with a learning rate of around 2 and 0.35 respectively? Since you are annealing the learning rate for SGD, it's going to decline and get close to 0.02 / 0.01 anyway at some point. While this may not be as good as CR or indeed batchnorm or Adam, the blue constant curve you are showing does not seem to be a fair representation of what SGD can do.\n\nYou say the minibatch size is 32. For MNIST, this means that 1 epoch is around 1500 iterations. That means your plots only show the first epoch of training. But MNIST does not converge in 1 epoch. You should show the error curve until convergence is reached. Same for CIFAR. \n\n\"we are not interested in network performance measures such as accuracy and validation error\" I strongly suspect your readers may be interested in those things. You should show validation classification error or at least training classification error in addition to cross-entropy error. \n\n\"we will also focus on optimization iteration rather than wall clock time\" Again, your readers care more about the latter. You need to show either error curves by clock time or the total time to convergence or supplement your iteration-based graphs with a detailed discussion of how long an iteration takes.\n\nThe scope of the experiments is limited because only a single network architecture is considered, and it is not a state-of-the art architecture (no convolution, no normalization mechanism, no skip connections).\n\nYou state that you ran experiments on Adam, Adadelta and Adagrad, but you do not show the Adam results. You say in the text that they were the least favorable for CR. This suggests that you omitted the detailed results because they were unfavorable to you. This is, of course, unacceptable!\n\n### (Un)suitability of ReLU for second-order analysis ###\n\nYou claim to use second-order information over the network to set the step size. Unfortuantely, ReLU networks do not have second-order information! They are locally linear. All their nonlinearity is contained in non-differentiable region boundaries. While this may lead to the Hessian being cheaper to compute, it means it is not representative of the actual behavior of the network. In fact, the only second-order information that is brought to bear in your experiments is the second-order information of the error function. I am not saying that this particular second-order information could not be useful, but you need to make a distinction in your paper between network second-order info and error function second-order info and make explicit that you only use the former in your experiments. 
As far as I know, most second-order papers use either tanh or a smoothed ReLU (such as the smoothed hinge used recently by Koh & Liang (https://arxiv.org/pdf/1703.04730.pdf)) for experiments to overcome the local linearity.\n\n### The \\sigma hyperparameter ###\n\nYou claim that \\sigma is not as important / hard to set as \\alpha in SGD or Adam. You state: \"We also found that this approach requires less problem-specific information (e.g. an optimal initial learning rate) than other first-order methods in order to perform well.\" You have not provided sufficient evidence for this claim. You say that \\sigma can be chosen by considering powers of 10. In many networks, choosing \\alpha by considering powers of 10 is sufficient! Even if powers of 2 are considered for \\alpha, this would reduce the search effort only by a factor of log_2(10). Also, what if the range of \\sigma values that need to be considered is larger than the range of \\alpha values? Then setting \\sigma would take more effort.\n\nYou do not give precise protocols for how you set \\sigma and how you set \\alpha for non-CR algorithms. This should be clearly specified in Appendix A as it is central to your argument of easing hyperparameter search.\n\n### Minor points ###\n\n- Your introduction could benefit from a few more citations\n- \"The rank of the weighted sum of low rank components (as occurs with mini-batch sampling) is generally larger than the rank of the summed components, however.\" I don't understand this. Every sum can be viewed as a weighted sum and vice versa.\n- Equation (8) could be motivated a bit better. I know it derives from Taylor's theorem, but it might be good to discuss how Taylor's theorem (and its assumptions) relate to deep networks.\n- why the name \"cubic regularization\"? shouldn't it be something like \"quadratic step size tuning\"?\n\n.\n.\n.\n\nThe reason I am giving a 2 instead of a 1 is because the core idea behind the algorithm given seems to me to have potential, but the execution is sorely lacking. \n\nA final suggestion: You advertise as one of your algorithm's upsides that it uses exact Hessian information. However, since you only care about the scale of the second-order term and not its direction, I suspect exact calculation is far from necessary and you could get away with very cheap approximations, using for example techniques such as mean field analysis (e.g. http://papers.nips.cc/paper/6322-exponential-expressivity-in-deep-neural-networks-through-transient-chaos.pdf)." ]
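The review above argues that Hessian-vector products are relatively cheap in deep networks; one standard way to see this is the double-backpropagation (Pearlmutter-style) trick sketched below. The tiny tanh MLP, random data, and squared-error loss are illustrative assumptions, not the reviewed paper's setup.

```python
# Minimal sketch: g^T H g via double backprop, with no explicit Hessian ever formed.
# Tanh (rather than ReLU) is used so the network itself contributes curvature,
# echoing the review's point about ReLU local linearity.
import torch

torch.manual_seed(0)
model = torch.nn.Sequential(torch.nn.Linear(10, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
x, y = torch.randn(8, 10), torch.randn(8, 1)
loss = torch.nn.functional.mse_loss(model(x), y)

params = list(model.parameters())
grads = torch.autograd.grad(loss, params, create_graph=True)   # g, kept in the autograd graph
g = torch.cat([p.reshape(-1) for p in grads])

v = g.detach()                              # fixed direction; here v = g for the g^T H g term
hv = torch.autograd.grad(g @ v, params)     # H v via one additional backward pass
hv = torch.cat([h.reshape(-1) for h in hv])

print((v @ hv).item())                      # the g^T H g quantity the reviewer discusses
```

Computed this way, each g^T H g costs roughly one extra backward pass, which is the baseline any specialized scheme would need to beat.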
[ 4, 5, 2 ]
[ 4, 4, 4 ]
[ "iclr_2018_ByJ7obb0b", "iclr_2018_ByJ7obb0b", "iclr_2018_ByJ7obb0b" ]
iclr_2018_HJg1NTGZRZ
Bit-Regularized Optimization of Neural Nets
We present a novel regularization strategy for training neural networks which we call ``BitNet''. The parameters of neural networks are usually unconstrained and have a dynamic range dispersed over a real valued range. Our key idea is to control the expressive power of the network by dynamically quantizing the range and set of values that the parameters can take. We formulate this idea using a novel end-to-end approach that regularizes a typical classification loss function. Our regularizer is inspired by the Minimum Description Length (MDL) principle. For each layer of the network, our approach optimizes a translation and scaling factor along with integer-valued parameters. We empirically compare BitNet to an equivalent unregularized model on the MNIST and CIFAR-10 datasets. We show that BitNet converges faster to a superior quality solution. Additionally, the resulting model is significantly smaller in size due to the use of integer instead of floating-point parameters.
rejected-papers
Pros: + The idea of end-to-end training that simultaneously learns the weights and appropriate precision for those weights is very appealing. Cons: - Experimental results are far from the state-of-the-art, which makes the empirical evaluation unconvincing. - More justification is needed for the update of the number of bits using the sign of the gradient.
train
[ "B1qKE_Tmz", "HkrrVdTQG", "B1S1NdT7G", "HkGFQdT7z", "Syi6-muxf", "Sy4v3Bolz", "Bk1-V2plG", "B1jyA87EG", "B1IKaxXbf", "H12bJi4eM", "BkchXu6yG" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "public" ]
[ "1- Only small networks on relatively small datasets are tested. \n>The results on VGG networks (larger networks) is being computed and will be included in the camera ready submission. \n\n2-The results on MNIST and CIFAR-10 are not good enough...\n>We found that our low performance on MNIST was caused by using 4X4 pooling layers. We have changed this experiment to use a the standard neural net architecture as a baseline and results on MNIST are shown in (Figure 1a) and CIFAR-10 (Figure 1b).", "1. After equation (5), I don't understand how the gradient of L(tilde_W) w.r.t. B(i) is computed. B(i) is discrete. The update rule seems to be clearly wrong. \n>The gradient update rule is correct, we added explanation that the number of bits is treated as a real number for the purpose of calculating gradients. The update rule ensures integrality. The next post discusses the details.\n\n2-a. End-to-end trained quantized networks have been studied in various previous works... None of these works have been compared with. \n>We cite these papers in our related work section. We would like the reviewer to note that the networks, in the papers referred to, are trained with a fixed (hand coded) number of bits. The research question we are answering is: What is the optimal number of bits? The research question these references answer which is: What is the performance given a certain fixed number of bits.\n\n2-b. All the baseline methods use 8 bits per value. This choice is quite ad-hoc. \n>We have added baseline experiments using 4 and 16 bits (Figure 1 a&b). We have to fix the number of bits for these algorithms. \n\n2-c. Only MNIST and CIFAR10 dataset with Lenet32 are used in the experiment. I find the findings not conclusive based on these. \n>Most other approaches on low precision training such as (BinaryConnect by Courbariaux et al NIPS 2015) and (Soft weight sharing for NN Compression by Ullrich et. al. 2017) only compare on these simple datasets. \n\n2-d. No wall-time and real memory numbers are reported.\n>We are unclear whether this is regarding training or inference time. There is about 4X savings. Re inference time: a meaningful comparison requires hardware support for low precision operations. This is currently unavailable for arbitrary precision.", "1- It's also difficult for me to understand how this interacts with the other terms in the objective (quantization error and loss)... it's not at all clear that the gradient of either the loss or the quantization error w.r.t. the number of bits will in general suggest increasing the number of bit. \n>We have clarified this in the current revision. It is reasonable to assume that the classification accuracy is similar for similar values of the parameters. The error due to the local linear approximation drops at a rate of 1/2^B. In the worst case, we use a fine grained approximation using 32 bits. This is clearly not needed as we show in the experiments, that a 5-6 bit local linear approximation gives good accuracy. \n\n2- It's unclear to me how effectively accuracy and precision are balanced by this training strategy. \n>We have clarified this in the current revision. As we show in our experiments, the accuracy and precision trade-off varies with different values for \\lambda_1 and \\lambda_2. \n\n3-The results on MNIST and CIFAR10 are so poor that they give me some concern about how the training was performed and whether the results are meaningful. \n>We believe the reviewer is referring to Table 1. These are after 30 epochs only for MNIST. 
Our final error rate was 3% on MNIST. Please note that we are using a small learning rate (1e-3) in order to show the big impact of bit regularization. In Figure 3(a)(right panel), we showed that increasing the learning rate does indeed improve the performance to about a 2% error rate. \n\n>In the initial submission, we showed LeNet32 and BitNet with about 4% test error rate. We found that our low performance on MNIST was also caused by using 4X4 pooling layers. We have changed this experiment to use 2X2 pooling and now show error rates of 2-3% on MNIST (Figure 1a) and 27-29% on CIFAR-10 (Figure 1b). ", "We thank the reviewers for their feedback. AnonReviewer1 clearly understood the paper saying \"The idea is introduced clearly and rather straightforward.\" AnonReviewer2 also seemed to fully understand the paper and its contributions saying \"the idea is interesting, as providing an end-to-end trainable technique for distributing the precision across layers of a network would indeed be quite useful.\" AnonReviewer3 missed a couple of key points in our approach, which made the reviewer think the formulation is incorrect. \n\nThis revision contains the following modifications and we reference these modifications accordingly in each reviewer's rebuttal:\n1- All three reviewers raised the following concerns about the experiments:\nThe performance on MNIST and CIFAR doesn't match the state-of-the-art. \n>We found that our performance can be improved by using 2X2 pooling instead of 4X4 pooling. Please note that our experiments use a small learning rate (1e-3) because it better distinguishes BitNet from LeNet. We showed in Figure 3(a)(right panel) that increasing the learning rate does lead to better accuracy. We have changed this experiment and now show error rates of 2-3% on MNIST (Figure 1a) and 27-29% on CIFAR-10 (Figure 1b).\n\n2- AnonReviewer3 mentioned that there was no variation in baseline experiments with regard to the number of bits (only focusing on 8 bits)\n>We added baseline experiments using 4 and 6 bits (Figure 1a and 1b). \n\n3- Overall clarity of the text and formulation\n>We added overall text and formulation clarifications:\n1- Added explanation that the number of bits is treated as a real number for the purpose of calculating gradients. \n2- Added the closed form of the gradients of the quantization error wrt the number of bits and the weights. \n3- Expanded gradients in Eq (5) to show gradients wrt the terms of the proposed loss function.\n4- We added some clarifying text towards the end of Section 4.", "This paper proposes a direct way to learn low-bit neural nets. The idea is introduced clearly and rather straightforward.\n\npros:\n(1) The idea is introduced clearly and rather straightforward.\n(2) The introduction and related work are well written.\n\ncons:\nThe provided experiments are too weak to demonstrate the effectiveness of the proposed method.\n(1) only small networks on relatively small datasets are tested.\n(2) the results on MNIST and CIFAR 10 are not good enough for practical deployment.", "This paper proposes to optimize neural networks considering three different terms: original loss function, quantization error and the sum of bits. While the idea makes sense, the paper is not well executed, and I cannot understand how gradient descent is performed based on the description of Section 4.\n\n1. After equation (5), I don't understand how the gradient of L(tilde_W) w.r.t. B(i) is computed. B(i) is discrete. 
The update rule seems to be clearly wrong.\n2. The experimental section of this paper needs improvement.\n a. End-to-end trained quantized networks have been studied in various previous works including stochastic neuron (Bengio et al 2013), quantization + fine tuning (Wu et al 2016 Quantized Convolutional Neural Networks for Mobile Devices), Binary connect (Courbariaux et al 2016) etc. None of these works have been compared with.\n b. All the baseline methods use 8 bits per value. This choice is quite ad-hoc.\n c. Only MNIST and CIFAR10 dataset with Lenet32 are used in the experiment. I find the findings not conclusive based on these.\n d. No wall-time and real memory numbers are reported.", "The paper proposes a technique for training quantized neural networks, where the precision (number of bits) varies per layer and is learned in an end-to-end fashion. The idea is to add two terms to the loss, one representing quantization error, and the other representing the number of discrete values the quantization can support (or alternatively the number of bits used). Updates are made to the parameter representing the # of bits via the sign of its gradient. Experiments are conducted using a LeNet-inspired architecture on MNIST and CIFAR10.\n\nOverall, the idea is interesting, as providing an end-to-end trainable technique for distributing the precision across layers of a network would indeed be quite useful. I have a few concerns: First, I find the discussion around the training methodology insufficient. Inherently, the objective is discontinuous since # of bits is a discrete parameter. This is worked around by updating the parameter using the sign of its gradient. This is assuming the local linear approximation given by the derivative is accurate enough one integer away; this may or may not be true, but it's not clear and there is little discussion of whether this is reasonable to assume.\n\nIt's also difficult for me to understand how this interacts with the other terms in the objective (quantization error and loss). We'd like the number of bits parameter to trade off between accuracy (at least in terms of quantization error, and ideally overall loss as well) and precision. But it's not at all clear that the gradient of either the loss or the quantization error w.r.t. the number of bits will in general suggest increasing the number of bit (thus requiring the bit regularization term). This will clearly not be the case when the continuous weights coincide with the quantized values for the current bit setting. More generally, the direction of the gradient will be highly dependent on the specific setting of the current weights. It's unclear to me how effectively accuracy and precision are balanced by this training strategy, and there isn't any discussion of this point either.\n\nI would be less concerned about the above points if I found the experiments compelling. Unfortunately, although I am quite sympathetic to the argument that state of the art results or architectures aren't necessary for a paper of this kind, the results on MNIST and CIFAR10 are so poor that they give me some concern about how the training was performed and whether the results are meaningful. Performance on MNIST in the 7-11% test error range is comparable to a simple linear logistic regression model; for a CNN that is extremely bad. 
Similarly, 40% error on CIFAR10 is worse than what some very simple fully connected models can achieve.\n\nOverall, while I like the idea and think the goal is good, I think the motivation and discussion for the training methodology is insufficient, and the empirical work is concerning. I can't recommend acceptance. ", "1. The state-of-the-art result on MNIST is clearly above 99.5% (top1 accuracy). Actually by simply using LeNet, one can achieve top 1 accuracy higher than 99%. [1] However, in the paper the best result is less than 97%.\n2. The state-of-the-art result on CIFAR is based on deep residual nets (e.g. wide residual nets can achieve less than 4% top 1 error rate). And in the paper the best result is less than 90%.\nSo, the current version is still too weak to demonstrate the effectiveness of the proposed method.\n\n[1] http://yann.lecun.com/exdb/mnist/\n[2] https://github.com/szagoruyko/wide-residual-networks", "Re: 1. After equation (5), I don't understand how the gradient of L(tilde_W) w.r.t. B(i) is computed. B(i) is discrete. \nThis seems to be mistaken. Internally B(i) is a real number that is restricted to take integral values through this update rule using the sign function. \n\nIt is easy to see how the gradient of L(tilde_W) is computed wrt B(i), i.e. the gradient of q(W,B(i)) wrt B(i). (n.b. q(.) is continuous and piecewise differentiable). To differentiate q wrt B(i) we need to differentiate tilde_W wrt B(i).\n\nFor a fixed W, you can write tilde_W as a case expression with outputs \\alpha+1\\delta, \\alpha+2\\delta etc for different conditions for w \\in W.\ndtilde_W/dB is therefore differentiated piecewise and reduces to a multiple of d\\delta/dB, where \\delta is a function of 1/(2**B(i)).\n\n", "Thank you for your comment. We have not open sourced our implementation yet. We will get back to you after the review period.", "Hello, I am working on reproducing your work for the ICLR 2018 Reproducibility Challenge. I was wondering if/when you plan to open source the code used to perform your experiments. \nI would also appreciate it if you could provide more details on the model architectures used by you. \n \nThanks and best regards" ]
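The rebuttal above sketches how a gradient with respect to the (internally real-valued) bit count B(i) can be obtained. The snippet below is one plausible numerical reading of that description; the exact step-size definition, weight range, and placeholder gradient are assumptions, since the paper's precise formulation is not reproduced in this thread.

```python
# Rough sketch of the quantizer gradient argument in the rebuttal above (assumptions noted).
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=100)
alpha, B = w.min(), 4.0                                 # B is kept real-valued internally

delta = (w.max() - w.min()) / (2.0 ** B)                # assumed step size, a function of 1/2**B
k = np.round((w - alpha) / delta)                       # integer level; piecewise constant in B
w_q = alpha + k * delta                                 # quantized weights, alpha + k*delta

# Treating k as locally constant, d w_q / dB = k * d delta / dB,
# with d delta / dB = -(range) * ln(2) / 2**B.
ddelta_dB = -(w.max() - w.min()) * np.log(2.0) / (2.0 ** B)
dwq_dB = k * ddelta_dB

dL_dwq = rng.normal(size=100)                           # placeholder for the backpropagated gradient
dL_dB = np.sum(dL_dwq * dwq_dB)                         # chain rule through the quantizer
B_new = B - np.sign(dL_dB)                              # sign update keeps B on integer steps
print(dL_dB, B_new)
```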
[ -1, -1, -1, -1, 4, 3, 4, -1, -1, -1, -1 ]
[ -1, -1, -1, -1, 5, 4, 4, -1, -1, -1, -1 ]
[ "Syi6-muxf", "Sy4v3Bolz", "Bk1-V2plG", "iclr_2018_HJg1NTGZRZ", "iclr_2018_HJg1NTGZRZ", "iclr_2018_HJg1NTGZRZ", "iclr_2018_HJg1NTGZRZ", "Syi6-muxf", "Sy4v3Bolz", "BkchXu6yG", "iclr_2018_HJg1NTGZRZ" ]
iclr_2018_r1Kr3TyAb
ANALYSIS ON GRADIENT PROPAGATION IN BATCH NORMALIZED RESIDUAL NETWORKS
We conduct a mathematical analysis on the Batch normalization (BN) effect on gradient backpropagation in residual network training in this work, which is believed to play a critical role in addressing the gradient vanishing/explosion problem. Specifically, by analyzing the mean and variance behavior of the input and the gradient in the forward and backward passes through the BN and residual branches, respectively, we show that they work together to confine the gradient variance to a certain range across residual blocks in backpropagation. As a result, the gradient vanishing/explosion problem is avoided. Furthermore, we use the same analysis to discuss the tradeoff between depth and width of a residual network and demonstrate that shallower yet wider resnets have stronger learning performance than deeper yet thinner resnets.
rejected-papers
Two of the reviewers liked the intent of the paper -- to analyze gradient flow in residual networks and understand the tradeoffs between width and depth in such networks. However, all reviewers flagged a number of problems in the paper, and the authors did not participate in the discussion period. Pros: + Interesting analysis suggests wider, shallower ResNets should outperform narrower, deeper ResNets, and empirical results support the analysis. Cons: - Independence assumption on weights is not valid after any weight updates. - The notation is not as clear as it should be. - Empirical results would be more convincing if obtained on several tasks. - The architecture analyzed in the paper is not standard, so it isn't clear how relevant it is for other practitioners. - Analysis and paper should take into account other work in this area, e.g. Veit et al., 2016 and Schoenholz et al., 2017.
test
[ "rkZAtAaxM", "B1BeKAgyz", "H1oD2H5xG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This manuscript is fairly well-written, and discusses how the batch normalization step helps to stabilize the scale of the gradients. Intriguingly, the analysis suggests that using a shallower but wider resnet should provide competitive performance, which is supported by empirical evidence. This work should help elucidate the structure in the learning, and help to support efforts to improve both learning algorithms and the architecture.\n\nPros:\nClean, simple analysis\nEmpirical support suggests that theory captures reasonable effects behind learning\n\nCons:\nThe reasonableness of the assumptions used in the analysis needs a more careful analysis. In particular, the assumption that all weights are independent is valid only at the first random iteration. Therefore, the utility of this theory during initialization seems reasonable, but during learning the theory seems quite tenuous. I would encourage the authors to discuss their assumptions, and talk about how the math would change as a result of relaxing the assumptions.\nThe empirical support does provide evidence that the theory is reasonable. However, it is limited to a single dataset. It would be nice to see that the effect happens more generally. Second, it is clear that shallow+wide networks may be better than deep+narrow networks, but it's not clear about how the width is evaluated and supported. I would encourage the authors to do more extensive experiments and evaluate the architecture further.\n\nRevision:\nUpon examining the comments of the other reviews, I have agreed with several of their points and it is necessary to increase the explanation of the mathematical points. I encourage the authors to address these comments and revise their work.", "This paper attempts to analyze the gradient flow through a batchNorm-ReLU ResNet and make suggestions for reducing gradient explosion.\n\nFirstly, the paper has a fatal mathematical flaw. Consider equation (10). There, you show the variance of y_{L,i} taken over BOTH random weights AND the batch. Now consider equation (32). In that equation, Var(y_{L,i}) appears in the denominator but this variance is taken over ONLY the batch and NOT the random weights. This Var(y_{L,i}) came from batch normalization, which divides its incoming activation values by their standard deviation. However, batch normalization only sees the variation in the activations given to it by a SPECIFIC set of weights. It does not know about the random variation of the weights because that randomness is in a sense a superstructure imposed on the network that the network operations themselves cannot see. Therefore, your substitution and therefore equation (13) is incorrect. If you replace the variance in equation (32) by the correct value, you will get a very different result from which very different (and very interesting!) conclusions can be drawn. \n\nSecondly, in section 4, your analysis depends on the specific type of ResNet you chose. Specifically, when transitioning from one \"scale\" to the next, you chose to insert not just a convolutional layer, but also a batch normalization and ReLU layer on the residual path. To achieve scale transitions, in general, people use a single convolutional layer with 1*1 receptive field on the residual path. It is not a problem in itself to use a nonstandard architecture, but you do not discuss how your results would generalize to other ResNet architectures. Therefore your results have very limited relevance. 
(Note that again, of course, your results are corrupted by the variance problem I described earlier.)\n\nFinally, with regards to section 5, let me be honest. (I hope that my area chair agrees with me that honesty is the best and kindest policy.) This section makes no sense. You do not understand the work by Veit et al. You do not know how to interpret gradient variances. While I won't be able to discuss \"gradient variance\" as a concept in full detail in this review, here's a quick summary. (A) Veit et al. argued that a deep ResNet behaves as an ensemble of shallower networks as long as the gradient flowing through the residual paths is not larger than the gradient flowing through the skip paths. (B) The exploding gradient problem refers to the size of the gradient growing exponentially. The vanishing gradient problem refers to the size of the gradient shrinking exponentially. This can make it difficult to train the network. See \"DEEP INFORMATION PROPAGATION\" by Schoenholz et al. from ICLR 2017 to learn more about how gradient explosion can arise. (C) For a neural network to model a ground truth function exactly, the gradients of the network with respect to the input data have to match the gradients of the ground truth function with respect to the input. From observations (A) through (C), we can derive three guidelines for gradient conditioning: (A) have the gradient flowing through residual paths be not too small relative to the gradient flowing through skip paths, (B) have the gradient not grow or shrink exponentially with too large a rate, (C) have the data gradient match that of the ground truth function. However, you seem to be arguing that it is a problem if the gradient scale increases too little from one residual block to the next. I am not aware of an established argument that this is indeed a problem. To be fair, one might make an argument as follows: \"the point of deep nets is to be expressive, expressiveness of a layer relates to the spectrum of the layer-Jacobian, a small increase in gradient scale implies the layer-Jacobian has many similar singular values, therefore a small increase in gradient scale implies low expressiveness of the layer, therefore the layer is pathological\". However, much more analysis, detail and care would be required to make this argument successfully. In any case, I also don't think that was the argument you were trying to make. Note that after I skimmed through the submissions to this conference, there seem to be interesting papers on the topic of gradients. Those papers plus the references provided in those papers should provide a good introduction to the topic of gradients in neural networks.\n\nOther comments:\n - your notation is quite sloppy and may have led to errors. Example: in the beginning of section 4, you say that from one \"scale\" to the next, the filter number increases k times. But in appendix C you say \"Since the receptive field for the last scale is k times smaller\". So is k the change in the receptive field size, the filter number, or both? I would strongly recommend using dedicated variables to denote the width of the receptive field in each convolutional layer, the height of the receptive field in each convolutional layer as well as the filter number and then express all assumptions made in equation form. \n- Equation (20) deals with the change of gradient variance within a scale. 
Where is the equation that shows the change of gradient variance between scales?\n- I would suggest making all derivations in appendices A through D much more detailed. \n\n\n", "Summary:\nThis paper analyzed the effect of batch normalization (BN) on gradient backpropagation in residual networks (ResNets). The authors demonstrate that BN can confine the gradient variance to a certain range in each residual block. The analysis is extended to discuss the trade-off between the depth and width of residual networks. However, the effect of BN in ResNets is still not clear and some definitions in this paper are confusing.\n\nStrengths:\n1. This paper conducted mathematical analysis on the effect of batch normalization (BN) in residual networks during back-propagation. \n\n2. The authors demonstrated that BN confined the gradient variance to a certain range in each residual block. \n\n3. The authors discussed the tradeoff between the depth and width of residual networks based on the analysis of BN.\n\nWeak points:\n1. The motivation of the analysis on the effect of BN in residual network is not clear. Compared to the plain network with BN, the gradient vanishing/explosion problem has been largely addressed by the shortcut of identity mapping. After reading the paper, it is still not clear what kind of role the BN plays for addressing this issue, especially when compared to the effect of identity mapping.\n\n2. There seems a definition error in the first paragraph of Section 3.1. Should $\\delta x$ be the batch of input gradient and $\\delta \\tilde x$ be the batch of output gradient?\n\n3. In Section 3.1, what does “the standard normal variate of x is z” mean? \n\n4. The definition of x_i in Eqn. (4) is very confusing, which makes the paper hard to follow. Here, the subscript x_i should be the i-th channel of input feature map rather than the i-th example in a mini-batch. However, in the original BN paper, all the gradients are computed w.r.t. the i-th example in a mini-batch. So, how to transform the formulas in the original BN paper to the gradient w.r.t. a specific channel like Eqn. (4). More details should be provided.\n\n5. In Section 3.2, it is strange that the authors consider a basic block containing BN and ReLU, followed by a convolution layer. However, in general deep learning setting, the BN and ReLU is often put after a convolution layer. Please explain more on this point.\n\n6. In Section 3.3, the authors assume that the weights of convolution layer have zero-mean, because the mean of input/output gradient is zero. However, it does not mean that the gradient w.r.t. the weights has zero-mean and the gradient will introduce a distribution bias in the weights. \n\n7. In Section 5, the authors argued that the sharper gradient variance changes resulted in more discriminatory learning. However, this is not well justified. \n" ]
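The "fatal flaw" objection in the second review above turns on two different variances: the batch variance that BN actually divides by (computed for one fixed draw of the weights) versus the variance taken over both the batch and the random weights. The toy linear-Gaussian simulation below is an illustrative assumption, not the reviewed paper's architecture; it shows that the first quantity fluctuates from one weight draw to the next, so it cannot simply be replaced by the second inside a normalization denominator.

```python
# Toy Monte-Carlo check of the two variances distinguished by the review above.
import numpy as np

rng = np.random.default_rng(0)
n_batch, n_in, n_draws = 128, 256, 500

per_draw_var, pooled = [], []
for _ in range(n_draws):                                   # one random weight vector per draw
    w = rng.normal(0.0, 1.0 / np.sqrt(n_in), size=n_in)
    y = rng.normal(size=(n_batch, n_in)) @ w               # pre-activations entering BN
    per_draw_var.append(y.var())                           # the quantity BN normalizes by
    pooled.append(y)

print("batch-only variance: mean %.3f, spread %.3f"
      % (np.mean(per_draw_var), np.std(per_draw_var)))     # a random quantity depending on w
print("variance over batch and weights: %.3f"
      % np.concatenate(pooled).var())                      # a single fixed number
```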
[ 4, 1, 4 ]
[ 4, 5, 5 ]
[ "iclr_2018_r1Kr3TyAb", "iclr_2018_r1Kr3TyAb", "iclr_2018_r1Kr3TyAb" ]
iclr_2018_rJUBryZ0W
Lifelong Learning by Adjusting Priors
In representational lifelong learning an agent aims to continually learn to solve novel tasks while updating its representation in light of previous tasks. Under the assumption that future tasks are related to previous tasks, representations should be learned in such a way that they capture the common structure across learned tasks, while allowing the learner sufficient flexibility to adapt to novel aspects of a new task. We develop a framework for lifelong learning in deep neural networks that is based on generalization bounds, developed within the PAC-Bayes framework. Learning takes place through the construction of a distribution over networks based on the tasks seen so far, and its utilization for learning a new task. Thus, prior knowledge is incorporated through setting a history-dependent prior for novel tasks. We develop a gradient-based algorithm implementing these ideas, based on minimizing an objective function motivated by generalization bounds, and demonstrate its effectiveness through numerical examples.
rejected-papers
The authors' revisions addressed clarity issues and some experimental issues (e.g., including MAML results in the comparison). The work takes an original path to an important problem (transfer learning, essentially). There is a question of significance, and this is due to the fact that the empirical comparisons are still very limited. The task is an artificial one derived from MNIST. I would call this "toy" as well. On this toy task, the approach isn't that much different from MAML, which is not in and of itself a problem, but it would be interesting to have a less superficial discussion of the differences. The authors mention that they didn't have time for a larger empirical study. I think one is necessary in this case because the work is proposing a new learning algorithm/framework, and the question of its potential impact/significance is an empirical one.
train
[ "SJAvHP8BM", "rJX9THLHf", "HyJlQlQWf", "HyUIRnVSG", "r1IHf4oVf", "H1qBI-5gG", "B1l2Xr8Ef", "HyY2Sogef", "H1LX97Czz", "H1zptXAzM", "r1-wtXRzz" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "We thank the area chair for the helpful comment.\nIndeed there was a problem with the constant, please see our response to AnonReviewer1.\n \nP.S.\nSince the submission of the revised paper we added more experiments that demonstrate the meta-learning performance in varied task-environments and with different number of training-tasks.\nWe would be happy to include those in a revised version if possible.", "There are comments by AnonReviewer1 that require your immediate attention and may materially impact your article's acceptance. Please respond as soon as possible.\n\nNote that OpenReview seems to not be sending email announcements for messages not marked Everyone, so please use that designation.", "The paper considers multi-task setting of machine learning. The first contribution of the paper is a novel PAC-Bayesian risk bound. This risk bound serves as an objective function for multi-task machine learning. A second contribution is an algorithm, called LAP, for minimizing a simplified version of this objective function. LAP algorithm uses several training tasks to learn a prior distribution P over hypothesis space. This prior distribution P is then used to find a posterior distribution Q that minimizes the same objective function over the test task. The third contribution is an empirical evaluation of LAP over toy dataset of two clusters and over MNIST.\n\nWhile the paper has the title of \"life-long learning\", the authors admit that all experiments are in multi-task setting, where\nthe training is done over all tasks simultaneously. The novel risk bound and LAP algorithm can definitely be applied to life-long setting, where training tasks are available sequentially. But since there is no empirical evaluation in this setting, I suggest to adjust the title of the paper. \n \nThe novel risk bound of the paper is an extension of the bound from [Pentina & Lampert, ICML 2014]. The extension seems to be quite significant. Unlike the bound of [Pentina & Lampert, ICML 2014], the new bound allows to re-use many different PAC-Bayesian complexity terms that were published previously. \n\nI liked risk bound and optimization sections of the paper. But I was less convinced by the empirical experiments. Since \nthe paper improves the risk bound of [Pentina & Lampert, ICML 2014], I expected to see an empirical comparison of LAP and optimization algorithm from the latter paper. To make such comparison fair, both optimization algorithms should use the same base algorithm, e.g. ridge regression, as in [Pentina & Lampert, ICML 2014]. Also I suggest to use the datasets from the latter paper. \n\nThe experiment with multi-task learning over MNIST dataset looks interesting, but it is still a toy experiment. This experiment will be more convincing with more sophisticated datasets (CIFAR-10, ImageNet) and architectures (e.g. Inception-V4, ResNet). \n\nMinor remarks:\nSection 6, line 4: \"Combing\" -> \"Combining\"\nPage 14, first equation: There should be \"=\" before the second expectation.", "The authors addressed most of my concerns. I will upgrade my score. The only remaining issue is evaluation with more sophisticated datasets and architectures. 
", "We appreciate your thorough review which greatly contributes to our work.\nRegarding the name 'lifelong', we followed the definition of Pentina & Lampert.\nHowever, we agree that the name might be misleading and we will change it to 'meta-learning' in future submissions.", "I personally warmly welcome any theoretically grounded methods to perform deep learning. I read the paper with interest, but I have two concerns about the main theoretical result (Theorem 1, lifelong learning PAC-Bayes bound).\n* Firstly, the bound is valid for a [0,1]-valued loss, which does not comply with the losses used in the experiments (Euclidean distance and cross-entropy). This is not a big issue, as I accept that the authors are mainly interested in the learning strategy promoted by the bound. However, this should clearly appear in the theorem statement.\n* Secondly, and more importantly, I doubt that the uaw of the meta-posterior as a distribution over priors for each task is valid. In Proposition 1 (the classical single-task PAC-Bayes bound), the bound is valid with probability 1-delta for one specific choice of prior P, and this choice must be independent of the learning sample S. However, it appears that the bound should be valid uniformly for all P in order to be used in Theorem 1 proof (see Equation 18). From a learning point of view, it seems counterintuitive that the prior used in the KL term to learn from a task relies on the training samples (i.e., the same training samples are used to learn the meta-posterior over priors, and the task specific posterior). \n\nA note about the experiments:\nI am slightly disappointed that the authors compared their algorithm solely with methods learning from fewer tasks. I would like to see the results obtained by another method using five tasks. A simple idea would be to learn a network independently for each of the five tasks, and consider as a meta-prior an isotropic Gaussian distribution centered on the mean of the five learned weight vectors.\n\nTypos and minor comments:\n- Equation 1: \\ell is never explicitly defined.\n- Equation 4: Please explicitly define m in this context (size of the learning sample drawn from tau).\n- Page 4, before Equation 5: A dot is missing between Q and \"This\".\n- Page 7, line 3: Missing parentheses around equation number 12.\n- Section 5.1.1, line 5: \"The hypothesis class is a the set of...\"\n- Equation 17: Q_1, ... Q_n are irrelevant.\n\n=== UPDATE ===\nI increased my score after author's rebuttal. See my other post.", "The authors performed a substantial amount of work to address reviewer comments, both from a theoretical and empirical perspective. The submitted revision turns out to be an improved paper, and I raised my score from 5 to 6.\nIn particular, the new PAC-Bayes theorem is much more interesting. \n\nNote that it took me a while to get convinced of the validity of the new proof; I was confused by the fact that the hyper-posterior $\\mathcal Q$ relies on the samples S_1, ..., S_i, ..., S_n, whereas this is never explicitly said in the proof of Section 8.1 (see Equation 18). But it turns out that the result is not affected by this. I think this should be made clearer for the readers benefit.\n\nHowever, the latter point made me realize that the learning algorithm promoted by the theoretical result needs to learn from all tasks simultaneously (it is indeed what is performed in the paper). 
Considering this, I agree with the two other reviewers that the term \"lifelong learning\" should not be used here, as there is no continuous learning involved. Personally, I consider this framework as a variant of transfer learning, where one observes multiple tasks before learning a target one. That being said, I concede that this \"overuse\" of the buzzword \"lifelong learning\" has been present in several works lately.\n", "The author extends existing PAC-Bayes bounds to multi-task learning, to allow the prior to be adapted across different tasks. Inspired by the variational Bayes literature, a probabilistic neural network is used to minimize the bound. Results are evaluated on a toy dataset and a synthetically modified version of MNIST. \n\nWhile this paper is well written and addresses an important topic, there are a few points to be discussed:\n\n* Experimental results are really weak. The toy experiment only compares the mean of two Gaussians. Also, on the synthetic MNIST experiments, no comparison is done with any external algorithms. Neural Statistician, Model-Agnostic Meta-Learning and matching networks all provide decent results on such a setup. While it is tolerated to have minimal experiments in a theoretical paper, the theory only extends Pentina & Lampert (2014). Also, similar algorithms can be obtained through the variational-Bayes evidence lower bound. \n\n* The bound appears to be sub-optimal. A bound where the KL term vanishes by 1/n would probably be tighter. I went to the appendix to try to see how the proof could be adapted but it's definitely not as well written as the rest of the paper. I'm not against putting proofs in an appendix but only when it helps clarity. In this case it did not.\n\n* The paper is really about multi-task learning. Lifelong learning implies some continual learning and addressing the catastrophic forgetting issues. I would recommend against overuse of the lifelong learning term.\n\nMinors:\n* Define KLD\n* Section 5.1 : “to toy”\n* Section 5.1.1: “the the”\n", "* We added a discussion in the introduction about the distinction from multi-task learning (section 1 - first paragraph). \nThere is a clear difference from multi-task learning, since in lifelong learning the goal is to acquire knowledge (prior) that, when transferred to new tasks, facilitates good learning.\nWhile we call this transfer setup “lifelong learning” (as in Pentina and Lampert’s work), it can also be called “learning-to-learn”. But ‘multi-task learning’ is inappropriate because of the different goals and outcome of learning (a prior for learning tasks vs. solutions to given tasks).\n\n* We added an experimental comparison to a learning objective which is based on Pentina and Lampert’s main theorem. As can be seen in section 6, this bound leads to far worse empirical results. We believe that using our theorem leads to better performance since it is a tighter bound.\n\n* Due to technical difficulties and lack of time we cannot provide a high quality multiple data-set evaluation at this time.\nHowever, we did add a comparison to a competitive recent approach - Model-Agnostic Meta-Learning (MAML, Finn, Abbeel, and Levine. \"Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks.\" arXiv preprint arXiv:1703.03400 (2017)) (see section 6). \n", "* We added a comment about the bounded loss issue (see end of section 2.2). 
Indeed, this is not a big issue since, theoretically, we can claim to bound a truncated version of the loss, and empirically the losses are almost always smaller than one.\n\n* Thank you for pointing out the delicate issue about our main Theorem. We have rewritten the proofs using a different technique, which clarifies the points made by the reviewer and, in fact, leads to improved bounds (see section 3.2 for an overview and 8.1 for the full proof). \nIn the new formulation, each task bound holds for all hyper-posteriors and all posteriors, so it is valid to optimize both using the same samples.\nNote that our new theorem deviates significantly in both proof technique and behavior from that in Pentina and Lampert’s work. \n\n* In section 6, we added the experiment you suggested and several other methods which use all the training tasks, including: \n1. Using the bound from Pentina and Lampert’s work as a learning objective, \n2. Using an objective derived from variational methods and hierarchical generative models, and 3. A recent method - Model-Agnostic Meta-Learning (MAML, Finn, Abbeel, and Levine. \"Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks.\" arXiv preprint arXiv:1703.03400 (2017)). \n\n", "* The toy example (section 5) was meant only for visualization of the setup. \nIn the revised version we separated it from the experimental results section.\n\nIn the experimental results part (section 6) we added a comparison to the recently introduced Model-Agnostic Meta-Learning (MAML, Finn, Abbeel, and Levine. \"Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks.\" arXiv preprint arXiv:1703.03400 (2017)).\n\nWe also addressed the comparison to variational methods which maximize the evidence lower bound (section 6). Actually, such methods can be seen as minimizing a bound on the generalization error, but with a complexity term given by the KLD between posterior and prior, which is less tight than the bounds in the paper. We compared the results of such an objective (which is referred to as LAP-KLD in section 6) and showed that it performed much worse.\n\n* We rewrote the proof - hopefully it is clearer (see section 3.2 for an overview and 8.1 for the full proof).\nIn section 8.2 we also added a bound in which the KL term can vanish at a rate of 1/m (number of samples) if the empirical error is low. For the number of tasks, n, we preferred to keep the 1/n for simplicity and because this term is less important for the LAP algorithm.\n\n* We added a discussion in the introduction (section 1 - first paragraph) about the distinction from continual learning and from multi-task learning. We hope this clarifies our choice of paper title. There is a clear difference from multi-task learning, since the goal in our work is to acquire knowledge (prior) that, when transferred to new tasks, facilitates learning with low generalization error, rather than using multiple tasks collaboratively to aid each task in the given set of tasks.\n\n" ]
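One of the reviews above suggests a simple baseline: train the five training tasks independently, centre an isotropic Gaussian meta-prior on the mean of the learned weight vectors, and penalize the new task's posterior by its KL divergence to that prior. A schematic of that suggestion is sketched below; the dimensions, variances, sample size, and the exact form of the complexity term are illustrative assumptions (real PAC-Bayes bounds carry additional logarithmic terms whose constants differ across variants).

```python
# Schematic of the reviewer-suggested isotropic-Gaussian meta-prior baseline (assumptions noted).
import numpy as np

def diag_gauss_kl(mu_q, sig_q, mu_p, sig_p):
    """KL( N(mu_q, diag sig_q^2) || N(mu_p, diag sig_p^2) ) for factorized Gaussians."""
    return np.sum(np.log(sig_p / sig_q) + (sig_q ** 2 + (mu_q - mu_p) ** 2) / (2 * sig_p ** 2) - 0.5)

rng = np.random.default_rng(0)
d = 1000
task_weights = [rng.normal(scale=0.1, size=d) for _ in range(5)]     # stand-ins for 5 trained nets
prior_mu, prior_sig = np.mean(task_weights, axis=0), np.full(d, 0.1)

post_mu, post_sig = rng.normal(scale=0.1, size=d), np.full(d, 0.05)  # posterior for the new task
m, emp_err = 600, 0.12                                               # assumed sample size / error

kl = diag_gauss_kl(post_mu, post_sig, prior_mu, prior_sig)
bound = emp_err + np.sqrt(kl / (2 * m))   # schematic complexity term; constants vary by bound
print(kl, bound)
```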
[ -1, -1, 6, -1, -1, 6, -1, 6, -1, -1, -1 ]
[ -1, -1, 4, -1, -1, 4, -1, 4, -1, -1, -1 ]
[ "rJX9THLHf", "iclr_2018_rJUBryZ0W", "iclr_2018_rJUBryZ0W", "H1LX97Czz", "B1l2Xr8Ef", "iclr_2018_rJUBryZ0W", "H1qBI-5gG", "iclr_2018_rJUBryZ0W", "HyJlQlQWf", "H1qBI-5gG", "HyY2Sogef" ]
iclr_2018_SkPoRg10b
Rethinking generalization requires revisiting old ideas: statistical mechanics approaches and complex learning behavior
We describe an approach to understand the peculiar and counterintuitive generalization properties of deep neural networks. The approach involves going beyond worst-case theoretical capacity control frameworks that have been popular in machine learning in recent years to revisit old ideas in the statistical mechanics of neural networks. Within this approach, we present a prototypical Very Simple Deep Learning (VSDL) model, whose behavior is controlled by two control parameters, one describing an effective amount of data, or load, on the network (that decreases when noise is added to the input), and one with an effective temperature interpretation (that increases when algorithms are early stopped). Using this model, we describe how a very simple application of ideas from the statistical mechanics theory of generalization provides a strong qualitative description of recently-observed empirical results regarding the inability of deep neural networks not to overfit training data, discontinuous learning and sharp transitions in the generalization properties of learning algorithms, etc.
rejected-papers
The concerns raised by AnonReviewer3 point out that, despite the effort of the authors to bridge the SM / ML divide, there is still some work to be done. The gulf between thermodynamic limits and finite effects is oft-cited in the author response. This seems to be a catch all. This gap needs to be addressed early. The authors might even suggest some open (empirical) questions looking for these phase transitions in finite systems in cases where they think engineering has placed us "not too close".
train
[ "HJdaJu51z", "S1IjqJvxf", "SyBLHu_gM", "r1bcRWlNf", "BJM0h8dZz", "SJ_fu6dZz", "S16pvadWM", "ryU4P6_-M", "ByHfw6dbM", "S1pJwadbG", "SkXpUTuZM", "ryQF86Obz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author" ]
[ "The authors suggest that ideas from statistical mechanics will help to understand the \"peculiar and counterintuitive generalization properties of deep neural networks.\" The paper's key claim (from the abstract) is that their approach \"provides a strong qualitative description of recently-observed empirical results regarding the inability of deep neural networks not to overfit training data, discontinuous learning and sharp transitions in the generalization properties of learning algorithms, etc.\" This claim is restated on p. 2, third full paragraph.\n\nI am sympathetic to the idea that ideas from statistical mechanics are relevant to modern learning theory. However, I do not find this paper at all convincing. I find the paper incoherent: I am unable to understand the argument for the central claims. On the one hand, the paper seems to be written as a \"response\" to Zhang et al.'s \"Understanding Deep Learning Requires Rethinking Generalization\", (henceforth Z): the introduction mentions Z multiple times, and the title of this work refers to Z. On the other hand, none of the issues raised by Z are (as far as I can tell) addressed in any substantial way by this paper. In somewhat more detail, this work discusses two major observations:\n\n1. Neural nets can easily overtrain, even to random data.\n2. Popular ways to regularize may or may not help.\n\nZ certainly observes 1 and arguably observes 2. (I'd argue against, see below, but it's at least arguable.) I do not see how this paper addresses either observation. Instead, what the statistical mechanics (SM) approach seems to do is explain (or predict) the existence of phase transitions, where we suddenly go from a regime of poor generalization to good generalization or vice versa. However, neither Z nor, as far as I can tell, any other reference given here, suggests that these phase transitions are frequently observed in modern deep learning. The most relevant bit from Z is Figure 1c, which suggests that as the noise level is increased (corresponding to alpha decreasing in this paper), the generalization error increases smoothly. This seems to be in direct contradiction to the predictions made by the theories presented here.\n\nIf the authors wish to hold to the claim that their work \"can provide a qualitative explanation of recently-observed empirical properties that are not easily-understandable from within PAC/VC theory of generalization, as it is commonly-used in ML\" (p. 2), it is absolutely critical that they be more specific about which specific observations from which papers they think they are explaining. As written, I simply do not see which actual observations they think they explain.\n\nIn observation 2, the authors suggest that many popular ways to implement regularization \"do not substantially improve the situation\". A careful reading of Z (and this was corroborated by discussion with the authors) is that Z observed that regularization with parameters commonly used in practice (or, put differently, regularization parameters that led to the highest holdout accuracy in other papers) still led to substantial overtraining on noisy data. I think it is almost certainly true (see below for more discussion) that much larger values of regularization can prevent overfitting, at the cost of underfitting. 
It's also worth noting that Z agrees with basically all practitioners that various regularization techniques can make an important difference to practitioners who want to minimize test error; what they don't do (at least at moderate values) is *qualitatively* destroy a network's ability to overfit to noise. It is unclear to me how this paper explains observation 2 (see below for extensive discussion).\n\nI don't actually understand the first full paragraph on p. 2 well. It is true that we can always avoid overtraining by tuning regularization parameters to get better generalization *error* (difference beween train and test) on the test data set (but possibly worse generalization accuracy); the rest of the paper seems to take the opposite side on this. A Gaussian kernel SVM with a small enough bandwidth and small enough regularization parameter can also overfit to noise. The argument needs to be sharpened here.\n\nI find the discussion of noise at the bottom of p. 2 confusing. The authors describe tau \"having to do with noise in the learning process\", but then suggest that \"adding noise decreases the effective load.\" This is the first time noise is really talked about, and it seems like maybe noise in the data is about alpha, but noise in the \"learning process\" is about tau? This should be clarified.\n\nOn p. 3, the authors refer to \"the two parameters used by Z and many others.\" I am honestly not sure what's being referred to here. I just reread Z and I don't get it. What two parameters are used by Z?\n\np. 3, figure. The authors should be clear about what recent (ideally widely-discussed) experimental results look anything like this figure. I found nothing in Z et al. In Appendix A.4, there is a mention of Figure 3 of Chromanska et al. 2014; that figure also seems to be totally consistent with smooth transitions and does not (to me) present any obvious evidence of a sharp phase transition. (In any case, the main paper should not rely heavily on the appendix for its main empirical evidence.)\n\np. 3, figure 1a. What is essential in this figure? A single phase transition? That the error be very low on the r.h.s. of the phase transition (probably not that, judging from the related models in the\nAppendix).\n\np. 3, figure 1b/c. What does SG stand for? As far as I can tell it's never discussed.\n\np. 4. \"Thus, an important more general insight from our approach is that --- depending strongly on details of the model, the specific details of the learning algorithm, the detailed properties of the data and their noise etc. --- going beyond worst-case bounds can lead to a rich and complex array of manners in which generalization can depend on the control parameters of the ML process.\" This is well-known to all practitioners. This paper does not seem to offer any specific testable explanations or predictions of any sort. I certainly agree that the study of SM models is \"interesting\", but what would\nmake this valuable would be a more direct analogy, a direct explanation of some empirical phenomenon.\n\nSection 2 in general. The authors discuss a couple different types of observations: (1) \"strong discontinuities in generalization performance as a function of control parameters\" aka phase transitions, and (2) generalization performance can depend sensitively on details of the model, details of algorithms, implicit regularization properties, detailed properties of data and noise, etc.\" (1) shows up in the SM literature from the 90's discussed in Appendix A. 
I don't think it shows up in modern practice, and I don't think it shows up in Z. (2) is absolutely relevant to modern practitioners, but I don't see what this paper has to say about it beyond \"SM literature from the 90's exhibits similar phenomena.\" The model introduced in Section 3 abstracts all such concerns away.\n\nSection 3. I am not super comfortable with the idea of \"Claims\", especially since the 3 Claims seem to be different sorts of things. I would normally think of a \"Claim\" as something that could be true or false, possibly with some argument for its truth.\n\nClaim 1 introduces a model (VSDL), but I wouldn't call this a claim, since nothing is actually \"claimed.\" The subpoints of Claim 1 are arguably claims, but they're not introduced as such. I address these\nin turn:\n\n\"Adding noise decreases an effective load alpha.\" The paper states \"N is the effective capacity of the model trained on these data\", but \"effective capacity\" is never defined. Certainly, if we *define* alpha = m_eff / N and *define* m_eff = m - m_rand, the (sub)claim follows, but why are those definitions good? I *think* what's going on here is hidden in the sentence \"empirical results indicate that for realistic DNNs it is close to 1. Thus, the model capacity N of realistic DNNs scales with m and not m_eff.\", where \"it\" refers to the Rademacher complexity. Well, OK, but if we agree with that, then aren't we just *assuming* the main result of Z rather than explaining it? We're basically just stating that the models can memorize the data?\n\nI don't really understand the point the last part of the paragraph is trying to make (everything after what I quoted above).\n\n\"Early stopping increases an effective temperature tau.\" I find this plausible but don't understand the argument at all. To this reader, it's just \"stuff from SM I don't understand.\" I think the typical ML reader of this paper won't necessarily be familiar with any of \"the weights evolve according to a relaxation Langevin equation\", \"from the fluctuation-dissipation theorem\", or the reference to annealing rate schedules. Consider either explaining this more or just appealing to SM and relegating this to an appendix.\n\nAfter the claim, the paper mentions that the VSDL model ignores other \"knobs\". This is fine for a model, but I think it's totally disingenuous to then suggest that this model explains anything about other popular ways to regularize (Observation 2 in the intro, see also my comment on Section 2). In the intro, the claim is \"Other regularizations sometimes help and sometimes don't and we don't understand why\" (the claim is about overfitting but it's also true for improving performance in general), which is basically true. But introducing a model which completely abstracts these things away cannot possibly explain anything about the behavior.\n\nClaim 2 is that we should consider a thermodynamic limit where model complexity grows with data (the paper says grows with the number of parameters, I assume this is a typo). I would probably call this one an \"Assumption\", with some arguments for the justification. I think this is one of the most interesting and important ideas in the paper, and I don't fully understand it, even after reading the appendix. I have questions. How should / could this apply to practitioners, who cannot in general hope to obtain arbitrary amounts of data? Are we assuming that any (or all) modern DNN experiments are in the asymptotic regime? 
Are we assuming the experiments in Z are in this regime? Is there any relevance to the fact that in an ML problem (unlike in say a spin glass, at least as far as I know) the \"complexity\" of the *task* is *not* increasing with the data size, so eventually one will have seen \"enough\" data to \"saturate\" the task? I'd love to know more.\n\nClaim 3 is more of an \"Informal Theorem\" that under the model of Claim 1 and the assumption of Claim 2, the phase diagrams of Figure 1 hold. The \"proof\" is a reference to SM papers. This should be clarified.\n\nYet again, I point out that I do not know any modern large-scale NN experiments that correspond to any of the pictures in Figure 1.\n\nThere's a mention of \"tau = 0 or t > 0.\" What is the significance of tau = 0? How should an ML reader think about this?\n\nSection 3.2 suggests that Claim 3 (the existence of the 1 and 2d phase diagrams) \"explain\" Observations 1 and 2 from the Appendix. I simply do not see this. \n\nFor Observation 1, that NNs can easily overtrain, the \"argument\" seems to boil down to \"the system is in a phase where it cannot help but overtrain.\" This is hardly an explanation at all. How do we know what phase these experiments were in? How do we know these experiments were in the thermodynamic limit?\n\nFor Observation 2, the authors point out that in VSDL, \"the only way to prevent overfitting is to decrease the number of iterations.\" This seems true but vacuous: the authors introduced a model where regularization doesn't correspond to any knobs, so of course to the extent that that model explains reality, the knobs don't stop overfitting. But this feels like begging the question. If we accept the VSDL model, we'd also accept that various regularizations can't improve generalization, which goes directly against basically all practice. I guess I technically have to concede that \"Given the three\nclaims\", Observation 2 follows, but Claim 1 by itself seems to be already assuming the conclusion.\n\nMinor writing issues:\n\nThe authors mention at least four times that reproducing others' results is not easy (p. 1 observation 1, p. 4 first paragraph, p. 4 footnote 6, last sentence of the main text). While I think this statement is true, it is quite well-known, and I suggest that the authors may simply alienate readers by harping on it here.\n\np. 1. \"may always overtrain\" is unclear. I don't know what it means. Is the claim that SOTA DNNs wll always overtrain when presented with enough data? I don't think so from the rest of the paper, but I'm not sure.\n\nI'm a little unclear what the authors mean by \"generalization error\" (or \"generalization accuracy\", which seems to only be used on p. 2). Z use \"generalization error = training error - test error\". Check the appendix for consistency here too.\n\nReplace \"phenomenon\" with \"phenomena\", at least twice where appropriate.\n\np. 3, first paragraph. I think the reference to the Hopfield model should be relegated to a footnote. The text \"two or more such parameter holds more generally\" is confusing; is it two, or is it two or more? What will I understand differently if I use more than two parameters? The next paragraph, starting with \"Given these two identifications, which are novel to this work,\" seems odd, since we've\njust seen 7+ references and a claim that they have similar parameterizations, so it's unclear what's novel.\n\nAppendix A.5. \"For non-linear dynamical systems... NNs from the 80s/90s or our VSDL model or realistic DNNs today .. 
there is simply no reason to expect this to be true.\" where \"this\" refers to \"one can always choose a value of lambda to prevent overfitting, potentially at the expense of underfitting.\" I don't understand, and I also think this disagrees with the first full paragraph on p. 2. Is there some thermodynamic limit argument required here? The very next bullet states that x = argmin_x f(x) + lambda g(x) can prevent overfitting with large lambda. What's different? I'm overall not clear what's being implied here. Consider a modern DNN for classification. A network with all zero weights will have some empirical loss L(0). If I minimize, for the weights of a network w, L(w) + lambda ||w||^2, I have that L(w) + lambda ||w||^2 <= L(0) (assuming I can solve the optimization), and assuming L is non-negative, lambda ||w||^2 <= L(0), or ||w||^2 <= L(0) / lambda. So for very large lambda, I can drive ||w||^2 arbitrarily close to zero. How is this importantly different from the linear case? What am I missing?\n\np. 3. \"inability not to overfit.\" Avoid the double negative.\n\nIntro, last paragraph. Weird section order description, with ref to Section A coming before section 4.\n\nFootnote 2. \"but it can be quite limiting.\" More detail needed. Limiting how?\n\nFootnotes 3 and 4. The text says there are \"technical\" and \"non-technical\" reasons, but 3 and 4 both seem technical to me.\n\nAppendix A.2. \"on a randomly chosen subset of X.\" Is it really subset? Are we picking subsets uniformly at random?\n", "\nThis papers provides an interesting set of ideas related to theoretical understanding generalization properties of multilayer neural networks. It puts forward a qualitative analogy between some recently observed behaviours in deep learning and results stemming from previous quantitative statistical physics analysis of single and two-layer neural networks. The paper serves as a nice highlight into the not-so recent progress made in statistical physics for understanding of various models of neural networks. I agree with the authors that this line of work, that is not very well known in the current machine learning community, includes a number of ideas that should be able to shed light on some of the currently open theoretical questions. As such the paper would be a nice contribution to ICLR.\n\nOn the negative side, the paper is only qualitative. The Very Simple Deep Learning model that it introduces is not even a model in the physics or statistics sense, since it cannot be fit on data, it does not specify any macroscopic details. I only saw something like that to be called a *model* in experimental biology papers ... The models that are reviewed in the appendix, i.e. the continuous and Ising perceptron and the committee machine are more relevant. However, the present paper only reviews existing results about them. And even in that there are flaws, because it is not always clear from what previous works are the results taken nor is it clear how exactly they were obtained (e.g. Fig. 2 (a) is for Ising or continuous weights? How was it computed? Why in Fig. 3(a) the training and generalization error is the same while in Fig. 3(c) they are different? What exact formulas were evaluated to obtain these figures?). 
\n\nConcerning the lack of mathematical rigour in the statistical physics literature on which the authors comment, they might want to relate to a very recent work, https://arxiv.org/pdf/1708.03395.pdf, which sets all the past statistical physics results on optimal generalization in single-layer neural networks on a fully rigorous basis by proving that the corresponding formulas stemming from the replica method are indeed correct. \n", "I find myself having a very hard time making a review of this paper, because I mostly agree with the intro and discussion, and certainly agree that the \"typical\" versus \"worst case\" analysis is an important point. The authors are making a strong case for the use of these models to understand overfitting and generalization in deep learning.\n\nThe problem, however, is that apart from advocating the use of these \"spin glass\" models studied back in the day by Seung, Sompolinsky, Opper and others, there are few new results presented in the paper. The arguments using the Very Simple Deep Learning (VSDL) model are essentially a review of old known results --which I agree should maybe be revisited-- and the motivation for their application to deep learning stems from the reasoning that, since this is the behavior observed in all these models, deep learning should behave just the same as well. This might very well be, but this is precisely the point: is it? \n\nAfter reading the paper, I agree with many points and enjoyed reading the discussion. I found interesting ideas discussed and many papers reviewed, and ended up discovering interesting papers on arXiv as a consequence.\n\nThis is all nice, interesting, and well written, but at the end of the day the paper is not doing much beyond being a nice review of these ideas. While this has indeed some value, and might trigger a renewal of interest in these approaches, I will let the committee decide if this is the material they want in ICLR.\n\nA minor comment: The generalization result of [9,11] obtained with heuristic tools (the replica method of statistical mechanics) and plotted in Fig. 1(a) has been proven recently with rigorous mathematical methods in arXiv:1708.03395. \n\nAnother remark: if deep learning is indeed well described by these models, then again so are many other simpler problems, such as compressed sensing, matrix and tensor factorization, error correction, etc., with similar phase diagrams as in Fig. 1. For instance, Gaussian mixtures are discussed in http://iopscience.iop.org/article/10.1088/0305-4470/27/6/016/ and SVM methods (which the authors argue should behave quite differently) have been treated by statistical mechanics tools in https://arxiv.org/pdf/cond-mat/9811421.pdf with similar phase diagrams. I am a bit confused: what would be so special about deep learning then?\n", "We do have very preliminary results that indicate the presence of a phase transition in this system.\n\nWe have been able to reproduce the results of the 3-layer MLP. We identify the phase transition by measuring the generalized Von Neumann matrix entropy* of the layer weight matrices S(W). *(See PNAS, August 29, 2000, vol. 97, no. 18, pp. 10101–10106.)\n\nWe can measure S(W1), S(W2), and S(W3) for each weight matrix in the network. [We did not measure S(W1) but we assume the results would be similar.] \n\nWe find that for a normal data set, S(W2) and S(W3) decrease slowly (within 1-2%) or not at all with each epoch.
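For concreteness, and as a sketch only before the randomized-label results reported below, here is one plausible way to compute the SVD-based matrix entropy S(W) described above, reading it as the normalized Shannon entropy of the singular-value spectrum in the spirit of the cited PNAS reference. The function name, layer shapes, and tolerances are illustrative assumptions, not the authors' actual code.

```python
import numpy as np

def matrix_entropy(W, eps=1e-12):
    """Normalized entropy of the singular-value spectrum of W.

    One common reading of the 'generalized Von Neumann' / SVD entropy:
    p_k = sigma_k^2 / sum_j sigma_j^2 and
    S(W) = -(1 / log K) * sum_k p_k log p_k, so that 0 <= S(W) <= 1.
    """
    sigma = np.linalg.svd(W, compute_uv=False)
    p = sigma ** 2 / max(float((sigma ** 2).sum()), eps)
    p = p[p > eps]                      # drop numerically-zero modes
    return float(-(p * np.log(p)).sum() / np.log(len(sigma)))

# Illustrative use: track S(W2) and S(W3) across training epochs and look
# for an abrupt drop under randomized labels (shapes here are made up).
W2 = np.random.randn(512, 512)
W3 = np.random.randn(512, 10)
print(matrix_entropy(W2), matrix_entropy(W3))
```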
\n\nFor a data set with fully randomized labels, however, S(W2) and S(W3) both display a first order phase transition after about 20-25 epochs, changes in value by 10% or more. \n\nThese very early results indicate both a phase transition, as predicted by theory, and a drop in entropy, also predicted. We have not presented them because they are both very early and we prefer to present them in a second paper, which is less pedagogical and more about numerical results.\n\n", "Two reviewers were confident and positive. The least confident reviewer was least positive. The detailed questions (longer than our permitted response) indicate a well-intentioned reviewer, but one who has deep misunderstandings about our paper and the prior results our paper explains.\n\nFor the confident positive reviewers.\n\n1. It is correct that we did not present \"new\" technically incremental results. This approach makes it easier for readers to understand ideas which may be quite unfamiliar.\n\n2. As for SM applied to SVMs, both SM and PAC/VC and extensions can be applied to anything. The question is whether they say anything nontrivial. For SVMs, SM predicts phase transitions, and PAC/VC predicts generalization can be controlled with regularization parameters, number of support vectors, etc. For NNs, the latter is not true. This is the point of Zhang et al. (We will also use Z to refer to this.) SM can explain what is going on in Z. That is the point of our paper.\n\n3. We do more than draw a qualitative analogy. We propose this old theory applies to new deep NNs; and we explain how/why this happens in terms of entropy (in an appendix, due to page limitations). Fig 3a and 3c highlight the key difference, expanded in Figs 3e-3h, is whether a model has nontrivial entropy properties near the minimum energy. This is common to all other models, e.g., those in Fig 2. This connection is buried in previous work. Highlighting it is an important contribution.\n\n4. Thanks for the Barbier reference. \n\nNext, the comments of the least confident reviewer highlight deep misunderstandings about our paper, as well as the Z paper. We expected this would happen for some readers because the work relies on established results from the statistical mechanics of learning, which may be unfamiliar to some reviewers.\n\nThese misunderstandings are going to be shared by other readers, so we are glad to have the opportunity to respond. Some general points.\n\n5. Our paper is about theory applied to practice. It address the Z paper, which claims that VC theory does not work at all in practice. Z shows that \"Even with dropout and weight decay, Inception V3 is still able to fit [a] random training set extremely well if not perfectly\", but lacks any generalization capacity. VC theory is a theory about capacity control. But it does not exhibit the behavior described in Z. SM does describe this behavior. BTW, VC theory was never expected to apply to NNs, shallow or deep. This was pointed out by Vapnik, Levin, and LeCun in 1994 (\"Measuring the VC-dimension of a learning machine\"):\n\n\"The extension of this work to multilayer networks faces [many] difficulties ... the existing learning algorithms can not be viewed as minimizing the empirical risk over the entire set of functions implementable by the network ... [because] it is likely ... the search will be confined to a subset of [these] functions ... The capacity of this set can be much lower than the capacity of the whole set ... [and] may change with the number of observations. 
This may require a theory that considers the notion of a non-constant capacity with an ‘active’ subset of functions\"\n\nThis observation is true whether we are considering PAC/VC, or variants, e.g., Covering numbers, Rademacher complexity, etc. The authors of Z clearly know this. These approaches make gross assumptions that are clearly at odds with basic experimental observations (and SM theory). The value of Z is to highlight these old ideas that have largely been forgotten. The value of our paper is to remind the community about old ideas that have also largely been forgotten but that do describe this.\n\n6. Rarely would we just add more data (m) to a deep NN. We always increase the NN size (N) too. The reason is we can capture more detailed features/information from the data. That is, we do in practice what we argue for in the paper---take the limit of large size, with the ratio m/N fixed (as opposed to fixing m and let N increase, or vice versa, which is at odds with practice).\n\n7. As for whether phase behavior is directly observed in deep nets: of course not. Phases arises in the thermodynamic limit. Any real system will show finite size effects, i.e., the sharp behavior will be smoothed out. This is well know and well studied.\n\n8. As for reproducibility, Z did not provide code. There is non-current pyTorch code on github. This will take additional work.\n\n9. On the SVM, the point is that SVMs can always be regularized to avoid over training. For NNs, popular regularization methods can sometimes fail to prevent overtraining. The reason is the NN is beyond the critical value of alpha where this is possible. \n\n10. Experiments are important follow-up work. The thing to measure is the replica overlap or another order parameter of the layers in each phase. Similar work has begun by Ganguli et al:\n\nhttps://arxiv.org/abs/1611.01232\nhttps://arxiv.org/abs/1711.04735", "\nLet's provide a detailed response to the least confident reviewer's points.\n\n11. Reviewer:\n\nThe authors suggest that ideas from statistical mechanics will help to understand the \"peculiar and counterintuitive generalization properties of deep neural networks.\" The paper's key claim (from the abstract) is that their approach \"provides a strong qualitative description of recently-observed empirical results regarding the inability of deep neural networks not to overfit training data, discontinuous learning and sharp transitions in the generalization properties of learning algorithms, etc.\" This claim is restated on p. 2, third full paragraph.\n\nI am sympathetic to the idea that ideas from statistical mechanics are relevant to modern learning theory. However, I do not find this paper at all convincing. I find the paper incoherent: I am unable to understand the argument for the central claims.\n\nResponse:\n\nA recent blog has highlighted that \"There are several papers that also come from those trained in a field other than statistics, that will likely not see the light of day (or rather accepted in a conference). The incomprehensibility to the reviewer trained only in statistics is grounds for rejection.\" See:\n\nhttps://medium.com/intuitionmachine/revisiting-deep-learning-as-a-non-equilibrium-process-9cedb93a13a2\n\nWe are well aware that the SM methods are quite different from popular methods in ML/DL/AI, and that some readers will find these quite different methods initially incomprehensible/incoherent. 
We are trying to make these methods accessible to readers not trained in SM, since they can be used to understand the phenomena observed by Z.\n\n12. Reviewer:\n\nOn the one hand, the paper seems to be written as a \"response\" to Zhang et al.'s \"Understanding Deep Learning Requires Rethinking Generalization\", (henceforth Z): the introduction mentions Z multiple times, and the title of this work refers to Z. On the other hand, none of the issues raised by Z are (as far as I can tell) addressed in any substantial way by this paper. In somewhat more detail, this work discusses two major observations:\n\n1. Neural nets can easily overtrain, even to random data.\n2. Popular ways to regularize may or may not help.\n\nZ certainly observes 1 and arguably observes 2. (I'd argue against, see below, but it's at least arguable.) I do not see how this paper addresses either observation. Instead, what the statistical mechanics (SM) approach seems to do is explain (or predict) the existence of phase transitions, where we suddenly go from a regime of poor generalization to good generalization or vice versa. \n\nResponse: \n\nAt a superficial level, the SM approach explains/predicts phase transitions. More generally, it describes generalization behavior, one component of which is sharp transitions, as well as which control parameters of the learning process can be used to control generalization quality.\n\n13. Reviewer: \n\nHowever, neither Z nor, as far as I can tell, any other reference given here, suggests that these phase transitions are frequently observed in modern deep learning.\n\nResponse: \n\nWe do not claim that they are frequently observed. Clearly, they are not. The question Z specifically asks is whether there \"is a different form of capacity control that bounds generalization error for large neural nets.\" Large is the key word here. Phase transitions are a limiting phenomenon, and so in any system with only a finite amount of data, there will be finite-size effects. Plus, in practical systems, one often engineers the system to get close to but to avoid this transition, e.g., by engineering the data or the learning process to smooth out the transition. However, our analysis suggests that phase transitions are \"under the hood\" and, being a limiting effect, are predicted to be more relevant in larger systems. So it would be useful for the community to understand them better.\n\n14. Reviewer:\n\nThe most relevant bit from Z is Figure 1c, which suggests that as the noise level is increased (corresponding to alpha decreasing in this paper), the generalization error increases smoothly. This seems to be in direct contradiction to the predictions made by the theories presented here.\n\nResponse:\n\nWe tried but were not able to reproduce the results of the Z paper. Our working hypothesis which we are working on testing is that this is a finite size effect.\n\n", "\n15. Reviewer:\n\nIf the authors wish to hold to the claim that their work \"can provide a qualitative explanation of recently-observed empirical properties that are not easily-understandable from within PAC/VC theory of generalization, as it is commonly-used in ML\" (p. 2), it is absolutely critical that they be more specific about which specific observations from which papers they think they are explaining. As written, I simply do not see which actual observations they think they explain.\n\nResponse: \n\nOur results hold more generally, but as stated we \"explain\" the two main observations in the Z paper. 
The key observation in the Z paper is that \"Even with dropout and weight decay, Inception V3 is still able to fit [a] random training set extremely well if not perfectly\" , but lacks any generalization capacity. See their discussion and their main Table.\n\n16. Reviewer:\n\nIn observation 2, the authors suggest that many popular ways to implement regularization \"do not substantially improve the situation\". A careful reading of Z (and this was corroborated by discussion with the authors) is that Z observed that regularization with parameters commonly used in practice (or, put differently, regularization parameters that led to the highest holdout accuracy in other papers) still led to substantial overtraining on noisy data. \n\nResponse: \n\nWe are not sure what the reviewer is saying. That is what we are saying, i.e., \"do not substantially improve\" = \"still led to substantial overtraining\".\n\n17. Reviewer:\n\nI think it is almost certainly true (see below for more discussion) that much larger values of regularization can prevent overfitting, at the cost of under-fitting. \n\nResponse: \n\nWe are not so confident. This is an empirical question, beyond the scope of this paper both for idealized NNs as well as for realistic NNs. But we discuss this issue in detail in Appendix A.5, where we note that this intuition is true for popular ML models but not necessarily true in general.\n\n18. Reviewer:\n\nIt's also worth noting that Z agrees with basically all practitioners that various regularization techniques can make an important difference to practitioners who want to minimize test error; what they don't do (at least at moderate values) is *qualitatively* destroy a network's ability to overfit to noise. It is unclear to me how this paper explains observation 2 (see below for extensive discussion).\n\nResponse:\n\nWe agree. The qualitative destruction is a statement about the thermodynamic limit, which is a statement about a limit. For finite N, there are finite N effects. References 9, 30; 11, and 31, as well as many others they cite, clearly show that the limiting behavior is smudged out for finite N.\n\n19. Reviewer:\n\nI don't actually understand the first full paragraph on p. 2 well. It is true that we can always avoid overtraining by tuning regularization parameters to get better generalization *error* (difference between train and test) on the test data set (but possibly worse generalization accuracy); the rest of the paper seems to take the opposite side on this. A Gaussian kernel SVM with a small enough bandwidth and small enough regularization parameter can also overfit to noise. The argument needs to be sharpened here.\n\nResponse:\n\nThank you for highlighting that you are confused by this. This is the central point of the argument. The SM theory suggests that it is false that one can always do this. I.e., that this is true for SVMs as this paragraph point outs, but that it is false for DNNs. More precisely, if one's \"control knobs\" are traditional regularization parameters, then it is false. On the other hand, if one's control knobs include the iteration count in early stopped algorithms, which have a natural interpretation in terms of temperature as we and others have argued, then one can exert this control. \n\n20. Reviewer:\n\nI find the discussion of noise at the bottom of p. 2 confusing. 
The authors describe tau \"having to do with noise in the learning process\", but then suggest that \"adding noise decreases the effective load.\" This is the first time noise is really talked about, and it seems like maybe noise in the data is about alpha, but noise in the \"learning process\" is about tau? This should be clarified.\n\nResponse:\n\nThank you for highlighting that you are confused by this. One type of \"noise\" is adding noise to the data, as Z and others do. Another type of \"noise\" is variability in, e.g., early-stopped SGD algorithms. This is a very important difference, and we will clarify in the final version. \n\n", "\n21. Reviewer:\n\nOn p. 3, the authors refer to \"the two parameters used by Z and many others.\" I am honestly not sure what's being referred to here. I just reread Z and I don't get it. What two parameters are used by Z?\n\nResponse:\n\nThank you again for highlighting that you are confused by this. We will try to clarify. Z did a lot, and we present a very simple model of what they did. Basically, they fit a NN. Then they added noise to the training data (one knob, which we model as a load parameter); then they tried traditional regularization knobs (which we do not model since they didn't help substantially); then they plotted quality as a function of the number of iterations (another knob, which has a temperature interpretation, which we model as a temperature control parameter).\n\n22. Reviewer:\n\np. 3, figure. The authors should be clear about what recent (ideally widely-discussed) experimental results look anything like this figure. I found nothing in Z et al. In Appendix A.4, there is a mention of Figure 3 of Chromanska et al. 2014; that figure also seems to be totally consistent with smooth transitions and does not (to me) present any obvious evidence of a sharp phase transition. (In any case, the main paper should not rely heavily on the appendix for its main empirical evidence.)\n\nResponse: \n\nThe lack of sharpness is a finite size effect. In particular, any paper that mentions the words \"spin glass\" (including in the 5th line of the Choromanska et al paper, as well as many others in the area) has these transitions, since that is fundamental to what is a spin glass. See also several of the other papers mentioned above. BTW, in the arXiv version, the appendix is simply another section. We put this important information in an appendix to respect page limitation requests. Thanks for reading it, since it is an important part of the paper.\n\n23. Reviewer:\n\np. 3, figure 1a. What is essential in this figure? A single phase transition? That the error be very low on the r.h.s. of the phase transition (probably not that, judging from the related models in the Appendix).\n\nResponse:\n\nThis should be compared with Fig 3. There could be one or several phase transitions. The point is that the actual generalization error does not decrease smoothly in a nice inverse polynomial way, as the PAC/VC upper bounds do.\n\n24. Reviewer:\n\np. 3, figure 1b/c. What does SG stand for? As far as I can tell it's never discussed.\n\nResponse:\n\nThanks for catching this. This is \"Spin Glass\" phase. We will clarify in the final version.\n\n25. Reviewer:\n\np. 4. \"Thus, an important more general insight from our approach is that --- depending strongly on details of the model, the specific details of the learning algorithm, the detailed properties of the data and their noise etc. 
--- going beyond worst-case bounds can lead to a rich and complex array of manners in which generalization can depend on the control parameters of the ML process.\" This is well-known to all practitioners. This paper does not seem to offer any specific testable explanations or predictions of any sort. I certainly agree that the study of SM models is \"interesting\", but what would make this valuable would be a more direct analogy, a direct explanation of some empirical phenomenon.\n\nResponse:\n\nWe agree that this is well-known to practitioners. The point is that this is not at all predicted by PAC/VC theory. Our paper \"revisits\" old ideas from the SM theory of generalization that does predict this behavior in simple models. We would love to make strong quantitative predictions on large-scale realistic models. That seems more suited for a follow-up paper.\n\n26. Reviewer:\n\nSection 2 in general. The authors discuss a couple different types of observations: (1) \"strong discontinuities in generalization performance as a function of control parameters\" aka phase transitions, and (2) generalization performance can depend sensitively on details of the model, details of algorithms, implicit regularization properties, detailed properties of data and noise, etc.\" (1) shows up in the SM literature from the 90's discussed in Appendix A. I don't think it shows up in modern practice, and I don't think it shows up in Z. (2) is absolutely relevant to modern practitioners, but I don't see what this paper has to say about it beyond \"SM literature from the 90's exhibits similar phenomena.\" The model introduced in Section 3 abstracts all such concerns away.\n\nResponse:\n\nFor the first point, we agree; see the comments above about so-called finite size effects. For the second point, we tried to work with the simplest model that would \"explain\" the results, rather than a much more complex model that would hide the essential issues. So, it is not the case that our model \"abstracts all such concerns away,\" it just abstracts away almost all the things that are not essential to understand the basic point. See comments below about that.\n\n", "\n27. Reviewer:\n\nSection 3. I am not super comfortable with the idea of \"Claims\", especially since the 3 Claims seem to be different sorts of things. I would normally think of a \"Claim\" as something that could be true or false, possibly with some argument for its truth.\n\nResponse:\n\nWe agree. We did it for pedagogical reasons.\n\n28. Reviewer:\n\nClaim 1 introduces a model (VSDL), but I wouldn't call this a claim, since nothing is actually \"claimed.\" The subpoints of Claim 1 are arguably claims, but they're not introduced as such. I address these in turn:\n\n\"Adding noise decreases an effective load alpha.\" The paper states \"N is the effective capacity of the model trained on these data\", but \"effective capacity\" is never defined. Certainly, if we *define* alpha = m_eff / N and *define* m_eff = m - m_rand, the (sub)claim follows, but why are those definitions good? I *think* what's going on here is hidden in the sentence \"empirical results indicate that for realistic DNNs it is close to 1. Thus, the model capacity N of realistic DNNs scales with m and not m_eff.\", where \"it\" refers to the Rademacher complexity. Well, OK, but if we agree with that, then aren't we just *assuming* the main result of Z rather than explaining it? 
We're basically just stating that the models can memorize the data?\n\nResponse:\n\nWe are saying that data and/or noise added to labels/data can be viewed in terms of a load parameter. This is not assuming the main results of Z, and it says nothing about memorization or generalization. Then, appealing to SM results, this has implications for memorization/generalization that go beyond what PAC/VC theory can say. In addition, the parameter regime where \"memorization\" occurs is actually for much smaller values of the load than we are considering. Memorization and over-training are different phenomena in the SM theory, and memorization occurs at extremely small values of the load. This is related to point 42 below, and why we mention the Hopfield model. The Hopfield model is a model of memorization, and we don't think that is what is going on here.\n\n\n29. Reviewer:\n\nI don't really understand the point the last part of the paragraph is trying to make (everything after what I quoted above).\n\n\"Early stopping increases an effective temperature tau.\" I find this plausible but don't understand the argument at all. To this reader, it's just \"stuff from SM I don't understand.\" I think the typical ML reader of this paper won't necessarily be familiar with any of \"the weights evolve according to a relaxation Langevin equation\", \"from the fluctuation-dissipation theorem\", or the reference to annealing rate schedules. Consider either explaining this more or just appealing to SM and relegating this to an appendix.\n\nResponse:\n\nThanks for the honesty about not understanding this. Indeed, this was predicted in the blog we cited above:\n\nhttps://medium.com/intuitionmachine/revisiting-deep-learning-as-a-non-equilibrium-process-9cedb93a13a2\n\nThe point of this paper, and in particular the point of our more pedagogical explanation, is to highlight and explain these \"simple\" (simple for people trained in this area rather a different area) ideas, rather than wrapping these simple ideas into something technically complex that has epsilon novelty. Importantly, implementing these simple ideas is quite complex, and it is appropriate for follow up work. But, note that there are several recent papers that have begun to use these ideas. We are happy to move this material to a main section, if it is acceptable to the PC. As for an even more detailed explanation, we feel that it will be more appropriate for a follow-up work than an 8 page conference paper.\n\n", "\n30. Reviewer:\n\nAfter the claim, the paper mentions that the VSDL model ignores other \"knobs\". This is fine for a model, but I think it's totally disingenuous to then suggest that this model explains anything about other popular ways to regularize (Observation 2 in the intro, see also my comment on Section 2). In the intro, the claim is \"Other regularizations sometimes help and sometimes don't and we don't understand why\" (the claim is about overfitting but it's also true for improving performance in general), which is basically true. But introducing a model which completely abstracts these things away cannot possibly explain anything about the behavior.\n\nResponse:\n\nWe don't think that it is \"disingenuous\" to do this. Rather than asking for the most complex model we can come up with, the point of the paper is what is the simplest model that will shed insight into the problem. 
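As a brief aside, the two identifications debated in points 20-21 and 28-29 above can be summarized in one standard statistical-mechanics form; this is only a sketch of the usual identification, not necessarily the paper's exact equations:

\[
\alpha \;=\; \frac{m_{\mathrm{eff}}}{N}, \qquad m_{\mathrm{eff}} \;=\; m - m_{\mathrm{rand}},
\qquad\quad
\frac{dw}{dt} \;=\; -\nabla_{w} E(w) + \eta(t), \quad
\big\langle \eta_i(t)\,\eta_j(t') \big\rangle \;=\; 2\,\tau\,\delta_{ij}\,\delta(t - t'),
\]

so randomizing more labels (larger m_rand) lowers the effective load alpha at fixed N, while the stochasticity of the training dynamics enters as an effective temperature tau, with the iteration count acting as one of the knobs that controls it.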
As for the comment \"A model which completely abstracts these things away cannot possibly explain anything about the behavior,\" we really don't know what to say. If this is the case, then noone should be talking about Rademacher complexities in the first place. Even SVMs have nothing to do with this since the hypothesis space is chosen in a data dependent way. The question is one of a level of abstraction, and what can be learned from that abstraction. The key point here is that SM treats learning as an \"emergent\" phenomena and studies the properties of learning using simple models because experience with other problems in complexity theory indicate that we can learn alot---not everything, but a lot---about complex systems without needing to model the very specific details of the network architectures. For example, we can use simplified McCulloch and Pitts neurons and study the emergent behavior of collections of these basic objects, etc. At a minimum, it provides a theory of learning that is worth \"revisiting,\" which is the entire point of our paper.\n\n31. Reviewer:\n\nClaim 2 is that we should consider a thermodynamic limit where model complexity grows with data (the paper says grows with the number of parameters, I assume this is a typo). I would probably call this one an \"Assumption\", with some arguments for the justification. I think this is one of the most interesting and important ideas in the paper, and I don't fully understand it, even after reading the appendix. I have questions. How should / could this apply to practitioners, who cannot in general hope to obtain arbitrary amounts of data? Are we assuming that any (or all) modern DNN experiments are in the asymptotic regime? Are we assuming the experiments in Z are in this regime? Is there any relevance to the fact that in an ML problem (unlike in say a spin glass, at least as far as I know) the \"complexity\" of the *task* is *not* increasing with the data size, so eventually one will have seen \"enough\" data to \"saturate\" the task? I'd love to know more.\n\nResponse: \n\nThanks for catching this. Technical complexities aside, we feel that one of the most interesting things there is our observation (be it a pedagogical claim or an assumptions) that the usual limiting arguments from mathematical statistics are less appropriate that the thermodynamic limit. As for applying it to practitioners, we can imagine many possibilities, and that is for the next paper. As for other problems in ML, while one may peek at the data to develop features, e.g., for an SVM, one arguably has a less strong dependence on the data than in NNs where one peeks at the data many many times. That is why PAC/VC can shed light on SVMs and many other models but not NNs. BTW, even for Tikhonov regularization, subtle properties are observed that are not captured by VC theory, e.g.:\n\nhttps://github.com/ryotat/ryotat.github.io/blob/master/teaching/enshu12.pdf\n\n32. Reviewer:\n\nClaim 3 is more of an \"Informal Theorem\" that under the model of Claim 1 and the assumption of Claim 2, the phase diagrams of Figure 1 hold. The \"proof\" is a reference to SM papers. This should be clarified.\n\nResponse:\n\nThanks, we can clarify that.\n\n33. Reviewer:\n\nYet again, I point out that I do not know any modern large-scale NN experiments that correspond to any of the pictures in Figure 1.\n\nResponse:\n\nSee our comments above.\n\n34. Reviewer:\n\nThere's a mention of \"tau = 0 or t > 0.\" What is the significance of tau = 0? 
How should an ML reader think about this?\n\nResponse:\n\ntau>0 is a positive temperature that interpolates between tau=infinity (where \"everything is random\") and tau=0 (where \"everything is discrete\"). It has been used, e.g., to have a \"relaxed\" or \"soft\" version of combinatorial optimization problems, e.g., as solved with temperature-annealed MCMC in simulated annealing.\n\n", "\n35. Reviewer:\n\nSection 3.2 suggests that Claim 3 (the existence of the 1 and 2d phase diagrams) \"explain\" Observations 1 and 2 from the Appendix. I simply do not see this. \n\nResponse:\n\nThanks. Again, we will try to clarify. In Fig 1c, we try to illustrate that, e.g., adding noise to the labels moves parallel to the X axis and can lead from the \"Perfect\" phase with good generalization to the \"SG\" phase or the \"Poor\" phase with bad generalization. While this isn't substantially improved by changing many regularization parameters, it can be fixed by changing the tau, e.g., by changing the number of iterations, which is what Z observed.\n\n36. Reviewer:\n\nFor Observation 1, that NNs can easily overtrain, the \"argument\" seems to boil down to \"the system is in a phase where it cannot help but overtrain.\" This is hardly an explanation at all. How do we know what phase these experiments were in? How do we know these experiments were in the thermodynamic limit?\n\nResponse:\n\nThe question of how to determine what phase can be subtle, basically since the learning process slows down dramatically, but it can be done by computing various overlap parameters. See the papers above. As for the question about the limits, that is claim or hypothesis that we are making. It is plausible, but the justification is after the fact. In more detail, and somewhat more precisely, due to finite data and finite size effects, we know that the computations were not done in the thermodynamic limit. Our claim is simply is that this is a less inappropriate (yes, we mean a double negative) and more useful limit than the limit taken when the model complexity is fixed and the amount of data grows.\n\n37. Reviewer:\n\nFor Observation 2, the authors point out that in VSDL, \"the only way to prevent overfitting is to decrease the number of iterations.\" This seems true but vacuous: the authors introduced a model where regularization doesn't correspond to any knobs, so of course to the extent that that model explains reality, the knobs don't stop overfitting. But this feels like begging the question. If we accept the VSDL model, we'd also accept that various regularizations can't improve generalization, which goes directly against basically all practice. I guess I technically have to concede that \"Given the three claims\", Observation 2 follows, but Claim 1 by itself seems to be already assuming the conclusion.\n\nResponse:\n\nThe claim that \"the only way to prevent overfitting is to decrease the number of iterations\" is not vacuous. It is one of the main observations in Z. Although they don't describe it as such, it is clear from their empirical results that while traditional regularization knobs don't help much, there is one knob that has a strong effect of regularization, and that is the stopping time. This was known in the 80s. We simply point out that revisiting these old ideas can explain what is going on in much more complex computation of interest to the ICLR community.\n\n38. Reviewer:\n\nMinor writing issues:\n\nThe authors mention at least four times that reproducing others' results is not easy (p. 
1 observation 1, p. 4 first paragraph, p. 4 footnote 6, last sentence of the main text). While I think this statement is true, it is quite well-known, and I suggest that the authors may simply alienate readers by harping on it here.\n\nResponse:\n\nThanks for the suggestion. We will try to moderate the claims. We do, however, think this is not simply complaining. Instead, it is closely related to the thermodynamic limiting arguments. In this limit, little details matter a lot more, and so minor details in the problem can be extremely important.\n\n39. Reviewer:\n\np. 1. \"may always overtrain\" is unclear. I don't know what it means. Is the claim that SOTA DNNs will always overtrain when presented with enough data? I don't think so from the rest of the paper, but I'm not sure.\n\nResponse:\n\nThanks. We agree it is slightly imprecise. Since we weren't able to reproduce results, we weren't able to come up with a more precise version of the statement with which we are comfortable. It certainly won't always overtrain, e.g., if we run zero steps of an iterative method. Whether it will \"always\" overtrain if we try to push the boundary and get good/state-of-the-art prediction results is unclear to us.\n\n40. Reviewer:\n\nI'm a little unclear what the authors mean by \"generalization error\" (or \"generalization accuracy\", which seems to only be used on p. 2). Z use \"generalization error = training error - test error\". Check the appendix for consistency here too.\n\nResponse:\n\nThanks, the literature is inconsistent, and we tried to be consistent, but we will double check.\n\n41. Reviewer:\n\nReplace \"phenomenon\" with \"phenomena\", at least twice where appropriate.\n\nResponse:\n\nThanks, we tried to be consistent, but we will double check.\n", "\n42. Reviewer:\n\np. 3, first paragraph. I think the reference to the Hopfield model should be relegated to a footnote. The text \"two or more such parameter holds more generally\" is confusing; is it two, or is it two or more? What will I understand differently if I use more than two parameters? The next paragraph, starting with \"Given these two identifications, which are novel to this work,\" seems odd, since we've just seen 7+ references and a claim that they have similar parameterizations, so it's unclear what's novel.\n\nResponse:\n\nWe can clarify the \"two or more\" issue. Basically, in a more realistic system, there may be many temperature-like knobs, e.g., number of iterations, annealing rate, batch size, etc., all of which control the \"temperature\" very imperfectly. We predict a more complex version of what our VSDL model predicts.\n\n43. Reviewer:\n\nAppendix A.5. \"For non-linear dynamical systems... NNs from the 80s/90s or our VSDL model or realistic DNNs today .. there is simply no reason to expect this to be true.\" where \"this\" refers to \"one can always choose a value of lambda to prevent overfitting, potentially at the expense of underfitting.\" I don't understand, and I also think this disagrees with the first full paragraph on p. 2. Is there some thermodynamic limit argument required here? The very next bullet states that x = argmin_x f(x) + lambda g(x) can prevent overfitting with large lambda. What's different? I'm overall not clear what's being implied here. Consider a modern DNN for classification. A network with all zero weights will have some empirical loss L(0).
If I minimize, for the weights of a network w, L(w) + lambda ||w||^2, I have that L(w) + lambda ||w||^2 <= L(0) (assuming I can solve the optimization), and assuming L is non-negative, lambda ||w||^2 <= L(0), or ||w||^2 <= L(0) / lambda. So for very large lambda, I can drive ||w||^2 arbitrarily close to zero. How is this importantly different from the linear case? What am I missing?\n\nResponse:\n\nThere are several comments here. First, we can clarify the \"this\" confusion. Second, if we read the reviewer's comment correctly, this does disagree with the first full paragraph on page 2. That is our point: for non-linear dynamical systems, one gets something very different than, e.g, an SVM. In somewhat more detail, and as we discuss in detail in the appendix, it is not just the thermodynamic limit, but, also the discontinuities in the models, which are important. The SVM lacks the latter. See the discussion in Chapter 10 of the Engle and van der Broeck book, which shows how to apply something like the thermodynamic limit to the VC bounds. Third, this is independent of the thermodynamic limit, since for non-Langevin dynamics, there may not be such a thermodynamic system. Fourth, we can clarify that the SVM/lambda issues are \"the same\" while NNs are very different. If we understand the reviewer's question, then this would correspond to designing a network to work in a very high-temperature limit. This would not perform as well, but this would be closer to the phase where PAC/VC intuition would hold.\n\n44. Reviewer:\n\np. 3. \"inability not to overfit.\" Avoid the double negative.\n\nResponse:\n\nUsually when people a double negative, they are imprecise or sloppy. In this case, this is what we mean.\n\n45. Reviewer:\n\nIntro, last paragraph. Weird section order description, with ref to Section A coming before section 4.\n\nResponse:\n\nWe agree. In the arXiv version, it is a separate section, but we put it in an appendix to respect the page limit request.\n\n46. Reviewer:\n\nFootnote 2. \"but it can be quite limiting.\" More detail needed. Limiting how?\n\nResponse:\n\nThere are many ways in which it can be quite limiting. For example, when one tries to work with complex realistic deep NNs, this separation breaks down, and ideas from PAC/VC theory do not provide even a qualitative guide to practice.\n\n47. Reviewer:\n\nFootnotes 3 and 4. The text says there are \"technical\" and \"non-technical\" reasons, but 3 and 4 both seem technical to me.\n\nResponse:\n\nWe are saying that Footnote 3 is technical, meaning that there is a lot of technical stuff to deal with to apply the methods. We are saying that Footnote 4 is non-technical, since we feel that it is primarily a \"cultural\" issue: some people like \"rigorous\" methods that lead to upper bounds, presumably due to their training; while other people are comfortable with approximate, mean field models, that lead to qualitative and quantitative predictions, but that require some additional nontrivial mathematical and numerical analysis to establish their rigor.\n\n48. Reviewer:\n\nAppendix A.2. \"on a randomly chosen subset of X.\" Is it really subset? Are we picking subsets uniformly at random?\n\nResponse:\n\nIt could be a randomly chosen subset that is drawn from either a uniform or a non-uniform distribution.\n\n" ]
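For reference, the weight-decay bound sketched by the reviewer and revisited in point 43 just above can be written out explicitly, with L the empirical loss (assumed non-negative) and w* any global minimizer of the regularized objective; this merely restates the reviewer's argument:

\[
w^{\star} \in \arg\min_{w}\; L(w) + \lambda \lVert w \rVert_2^{2}
\;\;\Longrightarrow\;\;
L(w^{\star}) + \lambda \lVert w^{\star} \rVert_2^{2} \;\le\; L(0)
\;\;\Longrightarrow\;\;
\lVert w^{\star} \rVert_2^{2} \;\le\; \frac{L(0)}{\lambda},
\]

so, at least for a global minimizer of this objective, a large enough lambda drives the weight norm toward zero; the point of contention above is whether this intuition carries over to the non-convex, dynamically trained deep networks being discussed.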
[ 3, 6, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 3, 5, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_SkPoRg10b", "iclr_2018_SkPoRg10b", "iclr_2018_SkPoRg10b", "HJdaJu51z", "iclr_2018_SkPoRg10b", "HJdaJu51z", "HJdaJu51z", "HJdaJu51z", "HJdaJu51z", "HJdaJu51z", "HJdaJu51z", "HJdaJu51z" ]
iclr_2018_ByJWeR1AW
Data augmentation instead of explicit regularization
Modern deep artificial neural networks have achieved impressive results through models with very large capacity---compared to the number of training examples---that control overfitting with the help of different forms of regularization. Regularization can be implicit, as is the case with stochastic gradient descent or parameter sharing in convolutional layers, or explicit. The most common explicit regularization techniques, such as dropout and weight decay, reduce the effective capacity of the model and typically require the use of deeper and wider architectures to compensate for the reduced capacity. Although these techniques have proven successful in terms of results, they seem to waste capacity. In contrast, data augmentation techniques reduce the generalization error by increasing the number of training examples, without reducing the effective capacity. In this paper we systematically analyze the effect of data augmentation on some popular architectures and conclude that data augmentation alone---without any other explicit regularization techniques---can achieve the same performance as regularized models, or better, especially when training with fewer examples.
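To make the contrast drawn in this abstract concrete, here is a minimal, hypothetical sketch of where the two mechanisms enter an ordinary training step: weight decay as an explicit penalty acting on the weights (here via the optimizer), and data augmentation as a transformation acting only on the inputs. The toy model, tensor shapes, and augmentation choices are placeholders, not the paper's architectures or code.

```python
import torch
from torch import nn

# Toy stand-ins (shapes and model are illustrative only).
images = torch.randn(32, 3, 32, 32)
labels = torch.randint(0, 10, (32,))
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))

def augment(x):
    """'Light' augmentation stand-in: random horizontal flip plus a small shift."""
    if torch.rand(1).item() < 0.5:
        x = torch.flip(x, dims=[3])              # horizontal flip
    shift = int(torch.randint(-2, 3, (1,)))      # +/- 2 pixel horizontal shift
    return torch.roll(x, shifts=shift, dims=3)

# Explicit regularization: an L2 penalty on the weights, added to the
# objective through the optimizer's weight_decay argument.
opt = torch.optim.SGD(model.parameters(), lr=0.1, weight_decay=5e-4)

loss = nn.CrossEntropyLoss()(model(augment(images)), labels)
opt.zero_grad()
loss.backward()
opt.step()
```

Dropping weight_decay to zero while keeping or strengthening augment is the kind of comparison the paper reports; the network's capacity is untouched and only the training distribution changes.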
rejected-papers
The reviewers agree that the authors have made an interesting contribution studying the effect of data augmentation, but they also agree that the claims made by the paper require a broader empirical study beyond the limited number of tasks surveyed in the current revision. I urge the authors to follow this advice and see what they find.
train
[ "S1KIF7olf", "BJHwtGogM", "rJrLHq3yz", "ryojlc5Xz", "SkzKEY9QM", "B1FxNt97G", "S1U2QK5mG", "BJvBW-rXM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author" ]
[ "This paper presents an empirical study of whether data augmentation can be a substitute for explicit regularization of weight decay and dropout. It is a well written and well organized paper. However, overall I do not find the authors’ premises and conclusions to be well supported by the results and would suggest further investigations. In particular:\n\na) Data augmentation is a very domain specific process and limits of augmentation are often not clear. For example, in financial data or medical imaging data it is often not clear how data augmentation should be carried out and how much is too much. On the other hand model regularization is domain agnostic (has to be tuned for each task, but the methodology is consistent and well known). Thus advocating that data augmentation can universally replace explicit regularization does not seem correct.\n\nb) I find the results to be somewhat inconsistent. For example, on CIFAR-10, for 100% data regularization+augmentation is better than augmentation alone for both models, whereas for 80% data augmentation alone seems to be better. Similarly on CIFAR-100 the WRN model shows mixed trends, and this model is significantly better than the All-CNN model in performance. These results also seem inconsistent with authors statement “…and conclude that data augmentation alone - without any other explicit regularization techniques - can achieve the same performance to higher as regularized models…”\n", "The paper proposes data augmentation as an alternative to commonly used regularisation techniques like weight decay and dropout, and shows for a few reference models / tasks that the same generalization performance can be achieved using only data augmentation.\n\nI think it's a great idea to investigate the effects of data augmentation more thoroughly. While it is a technique that is often used in literature, there hasn't really been any work that provides rigorous comparisons with alternative approaches and insights into its inner workings. Unfortunately I feel that this paper falls short of achieving this.\n\nExperiments are conducted on two fairly similar tasks (image classification on CIFAR-10 and CIFAR-100), with two different network architectures. This is a bit meager to be able to draw general conclusions about the properties of data augmentation. Given that this work tries to provide insight into an existing common practice, I think it is fair to expect a much stronger experimental section. In section 2.1.1 it is stated that this was a conscious choice because simplicity would lead to clearer conclusions, but I think the conclusions would be much more valuable if variety was the objective instead of simplicity, and if larger-scale tasks were also considered.\n\nAnother concern is that the narrative of the paper pits augmentation against all other regularisation techniques, whereas more typically these will be used in conjunction. It is however very interesting that some of the results show that augmentation alone can sometimes be enough.\n\nI think extending the analysis to larger datasets such as ImageNet, as is suggested at the end of section 3, and probably also to different problems than image classification, is going to be essential to ensure that the conclusions drawn hold weight.\n\n\n\nComments:\n\n- The distinction between \"explicit\" and \"implicit\" regularisation is never clearly enunciated. A bunch of examples are given for both, but I found it tricky to understand the difference from those. 
Initially I thought it reflected the intention behind the use of a given technique; i.e. weight decay is explicit because clearly regularisation is its primary purpose -- whereas batch normalisation is implicit because its regularisation properties are actually a side effect. However, the paper then goes on to treat data augmentation as distinct from other explicit regularisation techniques, so I guess this is not the intended meaning. Please clarify this, as the terms crop up quite often throughout the paper. I suspect that the distinction is somewhat arbitrary and not that meaningful.\n\n- In the abstract, it is already implied that data augmentation is superior to certain other regularisation techniques because it doesn't actually reduce the capacity of the model. But this ignores the fact that some of the model's excess capacity will be used to model out-of-distribution data (w.r.t. the original training distribution) instead. Data augmentation always modifies the distribution of the training data. I don't think it makes sense to imply that this is always preferable over reducing model capacity explicitly. This claim is referred to a few times throughout the work.\n\n- It could be more clearly stated that the reason for the regularising effect of batch normalisation is the noise in the batch estimates for mean and variance.\n\n- Some parts of the introduction could be removed because they are obvious, at least to an ICLR audience (like \"the model would not be regularised if alpha (the regularisation parameter) equals 0\").\n\n- The experiments with smaller dataset sizes would be more interesting if smaller percentages were used. 50% / 80% / 100% are all on the same order of magnitude and this setting is not very realistic. In practice, when a dataset is \"too small\" to be able to train a network that solves a problem reliably, it will generally be one or more orders of magnitude too small, not 2x too small.\n\n- The choices of hyperparameters for \"light\" and \"heavy\" motivation seem somewhat arbitrary and are not well motivated. Some parameters which are sampled uniformly at random should be probably be sampled log-uniformly instead, because they represent scale factors. It should also be noted that much more extreme augmentation strategies have been used for this particular task in literature, in combination with padding (for example by Graham). It would be interesting to include this setting in the experiments as well.\n\n- On page 7 it is stated that \"when combined with explicit regularization, the results are much worse than without it\", but these results are omitted from the table. This is unfortunate because it is a very interesting observation, that runs counter to the common practice of combining all these regularisation techniques together (e.g. L2 + dropout + data augmentation is a common combination). Delving deeper into this could make the paper a lot stronger.\n\n- It is not entirely true that augmentation parameters depend only on the training data and not the architecture (last paragraph of section 2.4). Clearly more elaborate architectures benefit more from data augmentation, and might need heavier augmentation to perform optimally because they are more prone to overfitting (this is in fact stated earlier on in the paper as well). It is of course true that these hyperparameters tend to be much more robust to architecture changes than those of other regularisation techniques such as dropout and weight decay. 
This increased robustness is definitely useful and I think this is also adequately demonstrated in the experiments.\n\n- Phrases like \"implicit regularization operates more effectively at capturing reality\" are too vague to be meaningful.\n\n- Note that weight decay has also been found to have side effects related to optimization (e.g. in \"Imagenet classification with deep convolutional neural networks\", Krizhevsky et al.)\n\nREVISION: I applaud the effort the authors have put in to address many of my and the other reviewers' comments. I think they have done so adequately for the most part, so I've decided to raise the rating from 3 to 5, for what it's worth.\n\nThe reason I have decided not to raise it beyond that is that I still feel that for a paper like this, which studies an existing technique in detail, the experimental side needs to be significantly stronger. While ImageNet experiments may be a lot of work, some other (smaller) additional datasets would also have provided more interesting evidence. CIFAR-10 and CIFAR-100 are so similar that they may as well be considered variants of the same dataset, at least in the setting where they are used here.\n\nI do really appreciate the variety in the experiments in terms of network architectures, regularisation techniques, etc. but I think for real-world relevance, variety in problem settings (i.e. datasets) is simply much more important. I think it would be fine if additional experiments on other datasets were not varied along all these other axes, to cut down on the amount of work this would involve. But not including them at all unfortunately makes the results much less impactful.", "This paper provides a systematic study of data augmentation in image classification problems with deep neural networks and argues that data augmentation could replace some common explicit regularizers like weight decay and dropout. The data augmentation techniques are also shown to be insensitive to hyperparameters, and so easier to use than explicit regularizers when changing architectures.\n\nIt is good to have a systematic study of data augmentation; however, the material in this paper, in its current state, might not make for a strong ICLR publication. The paper could potentially be made more interesting or solid if some of the following could be investigated:\n\n- considering a wider range of different problems apart from image classification, and investigating the effectiveness of domain specific data augmentation and general data augmentation\n- systematically studying each of the data augmentation techniques separately to see which is more important (as opposed to only having the 'light' and 'heavy' schemes); potentially also studying other less-traditional augmentation schemes such as adversarial examples, etc.\n- proposing novel data augmentation schemes\n- more analysis of the interplay with Batch Normalization; why are the results for BN vs no-BN not presented for WRN?\n- carefully designed synthetic (or real) data / tasks to verify the statements. For example, the explicit regularizers are thought to unnecessarily constrain the model too much. Can you measure the norm (or other complexity measures) of the models learned with explicit regularizers vs models learned with data augmentation?\n\n", "Inspired by the reviewers' feedback, we have improved our paper with a number of changes. Here is a summary of the main improvements:\n\n- Graphical visualization of the main results - page 6.
In the original version, all the results were presented only numerically in tables. The newer version presents the most important results in a graphical way, using color bars, which hopefully enables easier comparison of the different experiments. [From reviewer 3]\n\n- Detailed report of the results in the appendix - Appendix A, pages 12-14. Since the main results are now presented in a graphical way in the main body of the paper, we have moved the tables with the detailed report of the results to an appendix. [From reviewers 1, 2 and 3]\n\n- New experiments: results of training with and without batch normalization - Appendix A, pages 12-14. In the original version, only a subset of the experiments comparing the models trained with and without batch normalization were reported. The newer version reports both versions (with and without) for most of the models under test. [From reviewer 1]\n\n- New experiments: training with reduced data sets, 10 and 1 % of the data - Appendix A, page 13. Whereas the original version reports only the results of training with 80 and 50 % of the data, the newer version reports a wider set of experiments: 80, 50, 10 and 1 % of the training data. [From reviewer 2]\n\n- New analysis: norm of the weight matrix - Appendix B, pages 14-15. The analysis of the norm of the weight matrix provides a way to compare the complexity of the function learned by models trained with different levels of regularization and data augmentation. [From reviewer 1]\n\n- Definitions of explicit and implicit regularization - Introduction, page 1. In order to reduce the ambiguity and facilitate the understanding of the paper, we provide our definitions of the concepts of explicit and implicit regularization, which are unfortunately not clearly defined in the literature. [From reviewer 2]\n\nWe would like to sincerely thank the reviewers again for their useful feedback.", "We would first like to thank the reviewer for the feedback. In particular, we are grateful for the suggestions to make a stronger paper, which we have gladly received and added to our work. Besides, we highly appreciate the accurate summary of the paper, where the key points of our paper are correctly identified. That reflects a careful read of the paper.\n\nThe reviewer suggests some extensions for the work that would make the paper “more interesting or solid”. We comment on them next:\n\n- Complexity measures: we fully agree with the reviewer and we have accordingly added an appendix section (B) with an analysis of the Frobenius norm of the weights learnt by the models with different levels of regularization and data augmentation. The main conclusions are: a) more augmentation yields solutions of larger norm; b) more regularization yields solutions of smaller norm. This is in line with the hypotheses presented in the paper.\n\n- More analysis of the interplay with Batch Normalization: the full report of the results from experiments with and without batch normalization, together with the corresponding analysis, is now provided in the Appendix A. We also include new experiments that we hope will complement the already included ones.
They had not been included in the original version due to space limitations, but we have rearranged the paper and included some appendices with the hope of making the paper more solid, according to the reviewer's suggestion.\n\n- Novel data augmentation schemes: the proposal of new augmentation schemes is not the aim of this paper, which rather analyzes the interplay of (general) data augmentation and regularization. We believe that the selected schemes are representative of common practices in the literature and of different levels of augmentation. We also think that there is no need for proposing new schemes, as there exist many in the literature. Several of them are referenced in our paper: (Graham, 2014), (DeVries & Taylor, 2017a), (DeVries & Taylor, 2017b), etc. Besides, we think that in the near future it will be possible to automatically learn the augmentation (Lemley et al., 2017; Ratner et al., 2017).\n\n- Systematic study of the augmentation schemes: in line with the previous suggestion, we believe that such an analysis is out of the scope of this paper. We aimed at analyzing data augmentation as a general concept/technique, rather than studying the individual types of augmentation.\n\n- Consider a wider range of different problems apart from image classification: although it would be very interesting to see the same analysis on other types of problems, the amount of work would increase very significantly and cannot fit within the scope of a single paper, in our opinion. Furthermore and very importantly, we cannot assume that the same analysis and results would be easily transferred to other domains, because we cannot disregard the role of implicit regularization provided by convolutional layers, for instance, which are essential for the particular task of object recognition.\n", "The main concern of the reviewer is the lack of experiments on a wider variety of data sets, such as ImageNet. We would also, of course, like to be able to do the same analysis on ImageNet and other data sets. However, we reckon that one of the strengths of our paper is that we analyze the differences between data augmentation and explicit regularization under a wide variety of factors: 2 different network architectures, 3 levels of data augmentation, 3 types of regularization, 3 levels of network depth, 2 data sets, 4 training data sizes, with and without batch normalization, etc. Running the same set of experiments on ImageNet is unfortunately unfeasible. As a matter of fact, the vast majority of papers on object recognition doing equivalent analysis (including most ICLR papers) limit their analysis to CIFAR, MNIST and SVHN. Since we have found a high degree of consistency in our experiments, we humbly believe that one can expect similar results on ImageNet (and probably even better benefits from data augmentation vs. explicit regularization due to the higher resolution). Nonetheless, we plan to extend the analysis to ImageNet, to a certain extent, in future work. Finally, please note that we acknowledge and comment on this limitation in the discussion section of the paper.\n\nAnother concern of the reviewer is that “the narrative of the paper pits augmentation against all other regularisation techniques, whereas more typically these will be used in conjunction.” In this regard, we deliberately propose not using explicit regularization techniques in the cases where sufficient data augmentation can be applied.
In view of our results (please see the updated version of the paper, with an improved visualization of the results on page 6 and the extended report in the appendix A), in the vast majority of cases data augmentation alone can achieve at least the same performance or even better than augmentation+regularization, as actually noted by the reviewer. Thus, why use them in conjunction? Dropping explicit regularization techniques such as dropout and weight decay has a number of advantages: faster training, more interpretable models, fewer hyperparameters, etc.\n", "We thank the reviewer for the very useful feedback, in particular for the interesting suggestions we have adopted to hopefully make our paper stronger. We especially thank the reviewer for the words appreciating the usefulness of our study.\n\nIn the following we will answer the particular comments of the reviewer, given as a list:\n\n- [Distinction between explicit and implicit regularization]: this is a very good point and we thank the reviewer for the suggestion of providing a definition. We have updated the paper and we now provide a definition and distinction of explicit and implicit regularization in the introduction on page 1.\n\n- [“Data augmentation always modifies the distribution of the training data”]: As the reviewer indicates, some of the model’s capacity is used to model out-of-distribution data. However, these out-of-distribution data are generated in a plausible way, i.e. the augmentation schemes are designed so the generated examples reflect plausible variations of the original distribution present in the real world, as opposed to, for instance, dropout, which randomly turns off neurons during training.\n\n- [State the reason for the regularising effect of batch normalisation]: we appreciate the suggestion. We have added a sentence about this in the second-to-last paragraph on page 1.\n\n- [Obvious parts for an ICLR audience in the introduction]: We agree and have actually re-written the first part of the introduction. Instead of explaining the concept of regularization and highlighting its importance in machine learning, we now directly and briefly define regularization, then highlight its particular role in deep learning and finally provide our definition of the concepts of explicit and implicit regularization.\n\n- [Experiments with even smaller data set sizes]: we highly appreciate this suggestion. In the original version we only provided results with 80 and 50 % of the data. In the new version of the paper we now provide results with 10 and 1 %, additionally. The results with even smaller data set sizes actually strengthen the benefits of data augmentation with respect to explicit regularisation. We invite the readers to check the new results on the graphical visualization on page 6, the analysis in Section 2.3 and the full report of results in the Appendix A.\n\n- [“The choices of hyperparameters for \"light\" and \"heavy\" augmentation seem somewhat arbitrary and are not well motivated.”]: As stated by the reviewer, the choices are indeed very arbitrary. However, we see this as a good thing, in contrast to regularization hyperparameters, which cannot be arbitrary at all, but must be very well chosen. One of the advantages of data augmentation with respect to regularization is that it does not require careful selection and validation of the hyperparameters. The schemes suggested by the reviewer are actually referenced in the paper.
However, our goal is not to perform a review of data augmentation strategies used in the literature, but to show the advantage of data augmentation as a general concept/technique over regularisation. The suggestion of sampling log-uniformly would have perhaps made more sense, but we do not consider this crucial, and the fact that we have (deliberately) designed a suboptimal data augmentation strategy better supports our hypothesis, and one could expect even more benefits from data augmentation compared to explicit regularization with more carefully designed augmentation schemes. \n\n- [“On page 7 it is stated that \"when combined with explicit regularization, the results are much worse than without it\", but these results are omitted from the table. This is unfortunate because it is a very interesting observation [...] Delving deeper into this could make the paper a lot stronger.”]: It should be noted that this sentence refers to the results with *batch normalization*, not to explicit regularization + data augmentation, which were indeed already provided and accordingly analyzed. Nonetheless, regarding the results analyzing the use of batch normalization, please note that we have extended the report of results in the Appendix A.\n\n- [“It is not entirely true that augmentation parameters depend only on the training data and not the architecture”]: This is a good point and we have accordingly reduced the scope of that claim (last paragraph of Section 2.4). \n\n- [“Phrases like \"implicit regularization operates more effectively at capturing reality\" are too vague to be meaningful”]: We have accordingly updated the fourth paragraph of Section 3.\n\n- [Optimization side-effects of weight decay]: This has in fact been observed by many researchers, including ourselves, although not always reported. We humbly think that it even supports our proposal of not overusing weight decay, since modern deep architectures, trained with batch normalization, SGD and appropriate hyperparameters, typically do not present any optimization problems.", "We thank the reviewer for the insightful comments and for raising some concerns that we have positively received and used to, in our opinion, improve the paper.\n\nThe main concern is that “the results do not seem to support the conclusions”. We humbly believe that our results do support the conclusions, but we agree that they might not “seem” so because we failed to present the results in a sufficiently clear way in the original version of the paper. \n\nWe invite you to check the new version of the paper, where the main results are now presented graphically on page 6, enabling easier comparisons of the models with and without explicit regularization and the different augmentation schemes. We have moved the tables with the full details of all experiments to the appendix. We hope it is now easier to see how the results support the conclusions: data augmentation alone provides at least the same performance as data augmentation + explicit regularization in the vast majority of the experiments.\n\nRegarding your particular comments:\n\na) Although data augmentation is indeed domain specific and requires certain expert knowledge, we believe that 1) it should not be a reason to disregard it. Instead, expert knowledge should be exploited. Deep learning is full of very successful domain-specific techniques: convolutional layers, LSTMs, etc.
2) one data augmentation scheme can be designed for a broad set of tasks (for example, object recognition, segmentation, localisation) and a broad family of data (natural images) and it will naturally apply to many types of architectures, hyperparameters, amounts of training data, tasks, data sets, etc., as supported in our experiments.\n\nIn no place do we claim that data augmentation can “universally” replace regularization. Our results apply only, of course, to the cases where enough data augmentation can be applied. There exists a significant corpus of work on financial data and medical imaging, for example, where data augmentation is also applied, and we believe that future research will bring more advanced techniques in this direction.\n\nExplicit regularization, though, is well known only to some extent, and its disadvantages are multiple: notably, the need for specific hyperparameter tuning for every task, every architecture, every amount of training data, etc. A piece of evidence is that new regularization techniques are continuously proposed in the literature.\n\nThus, we propose avoiding explicit regularization when it is unnecessary (when it can be substituted by data augmentation).\n\nb) For CIFAR10, All-CNN and 100 % of the data: only data augmentation (93.55) outperforms data augmentation + regularization (93.08). In the case of WRN, both cases are equivalent (95.47 vs 95.60). Similar results can be found in the rest of the experiments, both on CIFAR10 and CIFAR100, and the difference in favour of only data augmentation increases as fewer examples are used for training. We gladly invite the readers to have a look at Figure 2 and the tables of the appendix to check these results.\n\nAs a final remark, we find it important to note that in order to enable fair comparisons, we haven’t optimized the learning rate (nor other training hyperparameters) of our proposed way of training (only data augmentation, no explicit regularization), in contrast to the highly optimized set of hyperparameters provided in a paper that achieved state-of-the-art results and was published in ICLR 2014 (All-CNN). We used their original training hyperparameters, which were optimized for a model that includes weight decay, dropout, and no batch norm, which is highly suboptimal. As a matter of fact, we have observed that if the learning rate is increased, data augmentation alone achieves higher results than the ones we provide in the paper. \n\n" ]
[ 5, 5, 5, -1, -1, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, -1, -1 ]
[ "iclr_2018_ByJWeR1AW", "iclr_2018_ByJWeR1AW", "iclr_2018_ByJWeR1AW", "iclr_2018_ByJWeR1AW", "rJrLHq3yz", "BJHwtGogM", "BJHwtGogM", "S1KIF7olf" ]
iclr_2018_ByED-X-0W
Parametric Information Bottleneck to Optimize Stochastic Neural Networks
In this paper, we present a layer-wise learning of stochastic neural networks (SNNs) from an information-theoretic perspective. In each layer of an SNN, the compression and the relevance are defined to quantify the amount of information that the layer contains about the input space and the target space, respectively. We jointly optimize the compression and the relevance of all parameters in an SNN to better exploit the neural network's representation. Previously, the Information Bottleneck (IB) framework (Tishby et al., 1999) extracts relevant information for a target variable. Here, we propose the Parametric Information Bottleneck (PIB) for a neural network by utilizing (only) its model parameters explicitly to approximate the compression and the relevance. We show that, as compared to the maximum likelihood estimate (MLE) principle, PIBs: (i) improve the generalization of neural networks in classification tasks, (ii) push the representation of neural networks closer to the optimal information-theoretical representation in a faster manner.
rejected-papers
The reviewers are in agreement that while the paper is interesting, both the clarity of presentation and experimental rigor could be improved. The committee feels this paper is not ready for publication at ICLR 2018 in its current form.
train
[ "r1D1-BKgz", "rJYxl_YxG", "SJcOWb5gf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a learning method (PIB) based on the information bottleneck framework.\nPIB pursues the very natural intuition outlined in the information bottleneck literature: hidden layers of deep nets compress the input X while maintaining sufficient information to predict the output Y.\nIt should be noted that the limitations of the IB for deep learning are currently under heavy discussion on OpenReview.\nOptimizing the PIB objective is intractable and the authors propose an approximation that applies to binary valued stochastic networks.\nThey use a variational bound to deal with the relevance term, I(Z_l,Y), and Monte Carlo sampling to deal with the layer-by-layer compression term, I(Z_l,Z_{l+1}).\nThey present results on MNIST aiming to demonstrate that using PIBs improves generalization and training speed.\n\nThis is a timely and interesting topic. I enjoyed learning about the authors’ proposed approach to a practical learning method based on the information bottleneck. However, the writing made it challenging and the experimental protocol raised some serious questions. In summary, I think the paper needs very careful editing for grammar and language and, more importantly, it needs solid experiments before it’s ready for publication. When that is done it would make an exciting contribution to the community. More details follow.\n\n\nComments:\n1. All architectures and objectives (both classic and PIB-based) are trained using a single, fixed learning rate (LR). In my opinion, this is a red flag. The PIB objective is new and different to the other objectives. Do all objectives happen to yield their best performance under the same LR? Maybe so, but we won’t know unless the experimental protocol prescribes a sufficient range of LRs for each architecture. In light of this, the fact that SFNN is given extra epochs in Figure 4 does not mean much.\n2. The batch size for MNIST classification is unusually low (8). Common batch sizes range from 64 to 1K (typically >= 128). Why did the authors make this choice? Is 8 good for architectures A through E?\n3. On a related note, the authors only seem to report results from a single random seed (ie. deterministic architectures are trained exactly once). I would like to see results from a few different random seeds. As a result of comments 1,2,3, even though I do believe in the merit of the intuition pursued and the techniques proposed, I am not convinced about the main claim of the paper. In particular, the experiments are not rigorous enough to give serious evidence that PIBs improve generalization and training speed. \n4. The paper needs some careful editing both for language (cf. following point) but also notation. The authors use notation p_D() in eqn (12) without defining it. My best guess is that it is the same as p_u(), the underlying data distribution, but makes parsing the paper hard. Finally there are a few steps that are not explained: for example, no justification is given for the inequality in eqn (13).\n5. Language: the paper needs some careful editing to correct numerous language/grammar issues. At times it is detrimental to understanding. For example I had to read the text leading up to eqn (8) a number of times.\n6. There is no discussion of computational complexity and wall-clock time comparisons. To be clear, I think that even if the proposed approach were to be slower than the state of the art it would still be very interesting. 
However, there should be some discussion and reporting of that aspect as well.\n\n\nMinor comments and questions:\n7. Mutual information is typically typeset using a semicolon instead of a comma, e.g. I(X;Z).\n8. Why is the mutual information in Figure 3 so low? Are you perhaps using natural logarithms to estimate and plot I(Z;Y)? If these are base-2 logarithms I would expect a value close to 1. ", "This paper presents a new way of training stochastic neural networks following an information relevance/compression framework similar to the Information Bottleneck. A new training objective is defined as a sum of mutual informations (MI) between the successive stochastic hidden layers plus a sum of mutual informations between each layer and the relevance variable. \n\nThe idea is interesting and to my knowledge novel. Experiments are carefully designed and presented in detail; however, assessing the impact of the proposed new objective is not straightforward. It would have been interesting to compare not only with SFNN but also with a model with the same architecture and same gradient estimator (Raiko et al. 2014) using maximum likelihood. This would make it possible to disentangle the impact of the learning mechanism from the impact of the learning objective. \n\nWhy is it important to maximise I(X_l, Y) for every layer? Does that impact the MI of the final layer and Y? \n\nTo estimate the MI between a hidden layer and the relevance variable, a multilayer generalisation of the variational bound from Alemi et al. 2016 is used. Computation of the bound requires integration over multiple layers (equation 15). How is this achieved in practice? With high-dimensional hidden layers a Monte-Carlo estimate on the minibatch can be very noisy and the resulting estimation of MI could be poor.\n\nMutual information between the successive layers is decomposed as an entropy plus a conditional entropy term (eq 17). How is the conditional entropy term estimated? The entropy term is first bounded by conditioning on the previous layer and then estimated using Monte Carlo sampling with a plug-in estimator. Plug-in estimators are known to be inefficient in high dimensions even when using a full dataset, unless the number of samples is very large. It thus seems challenging to use mini-batch MC; how does the mini-batch estimation compare to an estimation using the full dataset? What is the variance of the mini-batch estimate?\n\nIn the related work section, the IB problem can also be solved efficiently for meta-Gaussian distributions as explained in Rey et al. 2012 (Meta-gaussian information bottleneck). \n\nThere is a small typo in (eq 5).\n", "# Paper overview:\nThis paper views the learning process for stochastic feedforward networks through the lens of an\niterative information bottleneck process; at each layer an attempt is made to minimise the mutual\ninformation (MI) with the feed-in layer while maximising the MI between that layer and the presumed-endogenous variable, 'Y'.\n\nTwo propositions are made (although I would argue that their derivations are trivially the consequence\nof the model structure and inference scheme defined), and experiments are run which compare the approach to maximum likelihood estimation for 'Y' using an equivalent stochastic network architecture.\n\n# Paper discussion:\nIn general I like the idea of looking further into the effect of adding network structure on the original\ninformation bottleneck results (empirical and theoretical). I would be interested to see if layerwise\ninput skip connections (i.e.
between each network layer L_i and the original input variable 'X') hastened the 'compression' stage of learning (i.e. the time during which the intermediate layers minimise MI with 'X'). I'm also interested that clear examples of the information bottleneck principle in practice (e.g. CCA) are rarely mentioned.\n\nOn the other hand, I think this paper is not quite ready: it reads like work written in a hurry, and is at times hard to follow as a result. There are several places where I think the terminology does not quite reflect what the authors perhaps hoped to express, or was otherwise slightly clumsy, e.g.:\n\n* \"...self-consistent equations are highly non-linear and still too abstract to be used for many...\", presumably what was implied was that the original solution to the information bottleneck as expressed by Tishby et al is non-analytic for most practical cases of interest?\n\n* \"Furthermore, we exploit the existing network architecture as variational decoders rather than resort to variational decoders that are not part of the neural network architecture.\" -> The existing network architecture is used to provide a variational inference framework for I(Z,Y).\n\n* \"On average, 2^H(X|Z) elements of X are mapped to the same code in Z.\" In an ideal world I would like the assumptions required for this to hold true to be fleshed out a little here.\n\n* \"The generated bottleneck samples are then used to estimate mutual information\" -> an empirical estimation of I(Z,X) would seem to be a very high-variance estimator; the dimensionality of X is typically large in modern deep-learning problems---do you have any thoughts on how the learning process fares as this varies? Further on you cite that L_PIB is intractable due to the high dimensionality of the bottleneck variables; I imagine that this still yields a high-variance MC estimator in your approximation (in practice)? Was the performance significantly worse without the Raiko estimator?\n\n* \"In this experiment, we compare PIBs with ....\" -> I find this whole section hard to read; the description of how the models relate to each other is a little difficult to follow at first sight.\n\n* Information dynamics of the learning process (Figures 3, 6, 7, 8) -> I am curious as to why you did not run the PIB for the same number of epochs as the SFNN. I would also argue that you did not run either method as long as you should have (both approaches lack the longer term 'compression' stage whereby layers near the input reduce I(X,Z_i) as compared to their starting condition). This property is visible in I(Z_2,X) for PIB in Figure 3, but otherwise absent.\n\n# Conclusion:\nIn conclusion, while interesting, for me the paper is not yet ready for publication. I would recommend this work for a workshop presentation at this stage.\n" ]
[ 4, 6, 4 ]
[ 4, 4, 4 ]
[ "iclr_2018_ByED-X-0W", "iclr_2018_ByED-X-0W", "iclr_2018_ByED-X-0W" ]
iclr_2018_HJZiRkZC-
Byte-Level Recursive Convolutional Auto-Encoder for Text
This article proposes to auto-encode text at byte-level using convolutional networks with a recursive architecture. The motivation is to explore whether it is possible to have scalable and homogeneous text generation at byte-level in a non-sequential fashion through the simple task of auto-encoding. We show that non-sequential text generation from a fixed-length representation is not only possible, but also achieved much better auto-encoding results than recurrent networks. The proposed model is a multi-stage deep convolutional encoder-decoder framework using residual connections, containing up to 160 parameterized layers. Each encoder or decoder contains a shared group of modules that consists of either pooling or upsampling layers, making the network recursive in terms of abstraction levels in representation. Results for 6 large-scale paragraph datasets are reported, in 3 languages including Arabic, Chinese and English. Analyses are conducted to study several properties of the proposed model.
rejected-papers
This paper presents a method for using byte-level convolutional networks for building text-based autoencoders. They show that these models do well compared to RNN-based methods which model text as a sequence. Evaluation is solely based on byte-level prediction error. The committee feels that the paper would have been stronger if the evaluation had been on an actual task (say summarization; Miao and Blunsom, for example) and had shown that the approach works as well as RNNs.
train
[ "BJuY5u9gf", "BkLqBW6lG", "ryQsImE-z" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "\nThis paper presents a convolutional auto-encoder architecture for text encoding and generation. It works on the character level and contains a recursive structure which scales with the length of the input text. Building on the recent state-of-the-art in terms of architectural components, the paper shows the feasibility of this architecture and compares it to LSTM, showing the cnn superiority for auto-encoding.\n\nThe authors have decided to encode the text into a length of 1024 - Why? Would different lengths result in a better performance?\n\nYou write \"Minimal pre-processing is applied to them since our model can be applied to all languages in the same fashion.\" Please be more specific. Which pre-processing do you apply for each dataset?\n\nI wonder if the comparison to a simple LSTM network is fair. It would be better to use a 2- or 3-layer network. Also, BLSTM are used nowadays.\n\nA strong part of this paper is the large amount of investigation and extra experiments.\nMinor issues:\nPlease correct minor linguistic mistakes as well as spelling mistakes. In Fig. 3, for example, the t of Different is missing.\n\nAn issue making it hard to read the paper is that most of the figures appear on another page than where they are mentioned in the text.\n\nthe authors have chosen to cite a work from 1994 for the vanishing gradient problem. Note, that many (also earlier) works have reported this problem in different ways. A good analysis of all researches is performed in Hochreiter, S., Bengio, Y., Frasconi, P., and Schmidhuber, J. (2001) \"Gradient flow in recurrent nets: the difficulty of learning long-term dependencies\".", "The authors propose autoencoding text using a byte-level encoding and a convolutional network with shared filters such that the encoder and decoder should exhibit recursive structure. They show that the model can handle various languages and run various experiments testing the ability of the autoencoder to reconstruct the text with varying lengths, perturbations, depths, etc.\n\nThe writing is fairly clear, though many of the charts and tables are hard to decipher without labels (and in Figure 8, training errors are not visible -- maybe they overlap completely?).\n\nMain concern would be the lack of experiments showing that the network learns meaningful representations in the hidden layer. E.g. through semi-supervised learning experiments or experiments on learning semantic relatedness of sentences. Obvious citations such as https://arxiv.org/pdf/1511.06349.pdf and https://arxiv.org/pdf/1503.00075.pdf are missing, along with associated baselines. Although the experiment with randomly permuting the samples is nice, would hesitate to draw any conclusions without results on downstream tasks and a clearer survey of the literature.", "The paper aims to illustrated the representation learning ability of the convolutional autoencoder with residual connections is proposed by to encode text at the byte level. The authors apply the proposed architecture to 3 languages and run comparisons with an LSTM. Experimental results with different perturbation of samples, pooling layers, and sample lengths are presented.\n\nThe writing is fairly clear, however the presentation of tables and figures could be done better, for example, Fig. 2 is referred to in page 3, Table 2 which contains results is referred to on page 5, Fig 4 is referred to in page 6 and appears in page 5, etc.\n\nWhat kind of minimal preprocessing is done on the text? Are punctuations removed? Is casing retained? 
How is the space character encoded?\n\nWhy was the encoded dimension always fixed at 1024? What is the definition of a sample here?\n\nThe description of the various data sets could be moved to a table/Appendix, particularly since most of the results are presented on the enwiki dataset, which would lead to better readability of the paper. Also, results are presented only on a random 1M sample selected from these data sets, so the need for this whole page goes away.\n\nComparing Table 2 and Table 3, the LSTM is at 67% error on the test set while the proposed convolutional autoencoder is at 3.34%. Are these numbers on the same test set? While the argument that the LSTM does not generalize well due to the inherent memory learnt is reasonable, the differences in performance cannot be explained away with this. Can you please clarify this further?\n\nIt appears that the byte error shoots up for sequences of length 512+ (Fig. 6 and Fig. 7) and seems correlated more with the amount of data than with recursion levels.\n\nHow do you expect these results to change for a different subset selection of training and test samples? Will Fig. 7 and Fig. 6 still hold?\n\nIn Fig. 8, unless the static train and test errors are exactly on top of the recursive errors, they are not visible. What is the x-axis in Fig. 8? Please also label the axes on all figures. \n\nWhile the datasets are large and would take a lot of time to process for each case study, a final result on the complete data set, to illustrate whether the model does learn well with lots of data, would have been useful. A table showing generated sample text would also clarify the power of the model.\n\nWith the results presented, with a single parameter setting, it is hard to determine what exactly the model learns and why." ]
[ 7, 5, 5 ]
[ 4, 3, 5 ]
[ "iclr_2018_HJZiRkZC-", "iclr_2018_HJZiRkZC-", "iclr_2018_HJZiRkZC-" ]