paper_id            string (length 19-21)
paper_title         string (length 8-170)
paper_abstract      string (length 8-5.01k)
paper_acceptance    string (18 distinct values)
meta_review         string (length 29-10k)
label               string (3 distinct values)
review_ids          sequence
review_writers      sequence
review_contents     sequence
review_ratings      sequence
review_confidences  sequence
review_reply_tos    sequence
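Each sample record below lists its field values in the schema order above: the scalar paper-level fields first, then the parallel `sequence` fields, whose i-th entries all describe the same review or comment. A minimal sketch of working with one such record in Python; the dict is transcribed from the first sample record below (long text fields omitted), and in practice the split would typically be loaded with a dataset library rather than written out by hand:

```python
# One record transcribed from the first sample shown below; field names follow the schema above.
# Long text fields (paper_title, paper_abstract, meta_review, review_contents) omitted for brevity.
record = {
    "paper_id": "iclr_2018_SyhcXjy0Z",
    "paper_acceptance": "rejected-papers",
    "label": "train",
    "review_ids": ["Hk2HjIfxG", "r11aaNYez", "BJE2bF3lM"],
    "review_writers": ["official_reviewer", "official_reviewer", "official_reviewer"],
    "review_ratings": [1, 2, 3],
    "review_confidences": [5, 4, 5],
    "review_reply_tos": ["iclr_2018_SyhcXjy0Z"] * 3,
}

# The sequence fields are index-aligned: the i-th id, writer, rating,
# confidence, and reply-to all describe the same comment.
for rid, writer, rating, conf in zip(record["review_ids"],
                                     record["review_writers"],
                                     record["review_ratings"],
                                     record["review_confidences"]):
    print(f"{rid} ({writer}): rating={rating}, confidence={conf}")
```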
iclr_2018_SyhcXjy0Z
APPLICATION OF DEEP CONVOLUTIONAL NEURAL NETWORK TO PREVENT ATM FRAUD BY FACIAL DISGUISE IDENTIFICATION
The paper proposes and demonstrates a Deep Convolutional Neural Network (DCNN) architecture to identify users with disguised face attempting a fraudulent ATM transaction. The recent introduction of Disguised Face Identification (DFI) framework proves the applicability of deep neural networks for this very problem. All the ATMs nowadays incorporate a hidden camera in them and capture the footage of their users. However, it is impossible for the police to track down the impersonators with disguised faces from the ATM footage. The proposed deep convolutional neural network is trained to identify, in real time, whether the user in the captured image is trying to cloak his identity or not. The output of the DCNN is then reported to the ATM to take appropriate steps and prevent the swindler from completing the transaction. The network is trained using a dataset of images captured in similar situations as of an ATM. The comparatively low background clutter in the images enables the network to demonstrate high accuracy in feature extraction and classification for all the different disguises.
rejected-papers
Reviewers are unanimous that this is a reject. A "class project" level presentation. Errors in methodology and presentation. No author rebuttal or revision
train
[ "Hk2HjIfxG", "r11aaNYez", "BJE2bF3lM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper is relatively clear to follow, and implement. \n\nThe main concern is that this looks like a class project rather than a scientific paper. For a class project this could get an A in a ML class!\n\nIn particular, the authors take an already existing dataset, design a trivial convolutional neural network, and report results on it. There is absolutely nothing of interest to ICLR except for the fact that now we know that a trivial network is capable of obtaining 90% accuracy on this dataset.", "\nAs one can see by the title, the originality (application of DCNN) and significance (limited to ATM domain) is very limited. If this is still enough for ICLR, the paper could be okay. However, even so one can clearly see that the architecture, the depth, the regularization techniques, and the evaluation are clearly behind the state of the art. Especially for this problem domain, drop-out and data augmentation should be investigated.\n\nOnly one dataset is used for the evaluation and it seems to be very limited and small. Moreover, it seems that the same subjects (even if it is other pictures) may appear in the training set and test set as they were randomly selected. Looking into the referece (to get the details of the dataset - from a workshop of the IEEE International Conference on Computer Vision Workshops (ICCVW) 2017) reveals, that it has only 25 subjects and 10 disguises. This makes it even likely that the same subject with the same disguise appears in the training and test set.\n\nA very bad manner, which unfortunately is often performed by deep learning researchers with limited pattern recognition background, is that the accuracy on the test set is measured for every timestamp and finally the highest accuracy is reported. As such you perform an optimization of the paramerter #iterations on the test set, making it a validation set and not an independent test set. \n\nMinor issues:\nmake sure that the capitalization in the references is correct (ATM should be capital, e.g., by putting {ATM} - and many more things).", "This paper is an application paper on detecting when a face is disguised, however it is poorly written and do not contribute much in terms of novelty of the approach. The application domain is interesting, however it is simply a classification problem\n\nThe paper is written clearly (with mistakes in an equation), however, it does not contribute much in terms of novelty or new ideas.\n\nTo make the paper better, more empirical results are needed. In addition, it would be useful to investigate how this particular problem is different than a binary classification problem using CNNs.\n\nNotes:\nEquation 2 has a typo, '*'" ]
[ 1, 2, 3 ]
[ 5, 4, 5 ]
[ "iclr_2018_SyhcXjy0Z", "iclr_2018_SyhcXjy0Z", "iclr_2018_SyhcXjy0Z" ]
iclr_2018_HkGcX--0-
Auxiliary Guided Autoregressive Variational Autoencoders
Generative modeling of high-dimensional data is a key problem in machine learning. Successful approaches include latent variable models and autoregressive models. The complementary strengths of these approaches, to model global and local image statistics respectively, suggest hybrid models combining the strengths of both models. Our contribution is to train such hybrid models using an auxiliary loss function that controls which information is captured by the latent variables and what is left to the autoregressive decoder. In contrast, prior work on such hybrid models needed to limit the capacity of the autoregressive decoder to prevent degenerate models that ignore the latent variables and only rely on autoregressive modeling. Our approach results in models with meaningful latent variable representations, and which rely on powerful autoregressive decoders to model image details. Our model generates qualitatively convincing samples, and yields state-of-the-art quantitative results.
rejected-papers
To ensure that a VAE with a powerful autoregressive decoder does not ignore its latent variables, the authors propose adding an extra term to the ELBO, corresponding to a reconstruction with an auxiliary non-autoregressive decoder. This does indeed produce models that use latent variables and (with some tuning of the weight on the KL term) perform as well as the underlying autoregressive model alone. However, as the reviewers pointed out, the paper does not demonstrate the value of the resulting models. If the goal is learning meaningful latent representations, then the quality of the representations should be evaluated empirically. Currently it is not clear whether that the proposed approach would yield better representations than a VAE with a non-autoregressive decoder or a VAE with an autoregressive decoder trained using the "free bits" trick of Kingma et al. (2016). This is certainly an interesting idea, but without a proper evaluation it is impossible to judge its value.
test
[ "rJuuHjLEG", "Bk9xqcOgG", "rJZx5rBNG", "B1LwiG9gz", "BylKxYolM", "HJGc1y3XG", "ByqjxYB-z", "HkwL0GWfM", "BJu573sZM", "r1zYWb9bG", "BkLUDaPZz", "ryc07KSZG", "rJTcZYS-M", "HJZ4bKSZM", "Byjzhwnlf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "public", "author", "author", "author", "author", "public" ]
[ "My main problem is still that it's not clear what this model has to offer. The model is neither able to improve density estimation over PixelCNNs (while adding complexity), nor has it been shown to learn better representations (none of the evaluations seem appropriate to evaluate representations). Nevertheless, I slightly revised my score.\n\n> First, the second component p2 is assumed to have poorer log-likelihood than p1. The mixture is therefore a suboptimal solution, in particular a better solution would be obtained by just using the first mixture component p1.\n\nThis reasoning is incorrect. The log-likelihood of p1 + p2 may be minimally better than p1's, even if p1 has a much better log-likelihood than p2. Note that the log-density of the mixture is approximately the maximum of the individual log-densities (see log-sum-exp as approximation to maximum), so using p2 generally comes at little cost to the model. Log-likelihood does not even create a big incentive to have a small weight on p2.\n\n> Our quantitative experimental results in terms of likelihood on held-out data (the bpd metric) improve over earlier latent variable models in the literature.\n\nDensity estimation using latent variable models is not a well defined goal – a PixelCNN and in fact any autoregressive model can be equivalently formulated/viewed as a latent variable model (and if it weren't, we can always mix in a latent variable model at near-zero cost to the log-likelihood) – nor is it clear why it is a desirable goal.", "Summary:\n\nThis paper attempts to solve the problem of meaningfully combining variational autoencoders (VAEs) and PixelCNNs. It proposes to do this by simultaneously optimizing a VAE with PixelCNN++ decoder, and a VAE with factorial decoder. The model is evaluated in terms of log-likelihood (with no improvement over a PixelCNN++) and the visual appearance of samples and reconstructions.\n\nReview:\n\nCombining density networks (like VAEs) and autoregressive models is an unsolved problem and potentially very useful. To me, the most interesting bit of information in this paper was the realization that you can weight the reconstruction and KL terms of a VAE and interpret it as variational inference in a generative model with multiple copies of pixels (below Equation 7). Unfortunately the authors were unable to make any good use of this insight, and I will explain below why I don’t see any evidence of an improved generative model in this paper.\n\nAs the paper is written now, it is not clear what the goal of the authors is. Is it density estimation? Then the addition of the VAE had no measurable effect on the PixelCNN++’s performance, i.e., it seems like a bad idea due to the added complexity and loss of tractability. Is it representation learning? Then the paper is missing experiments to support the idea that the learned representations are in any way an improvement. Is it image synthesis (not a real application by itself), then the paper should have demonstrated the usefulness of the model on a real task and probably involve human subjects in a quantitative evaluation.\n\nMuch of the authors’ analysis is based on a qualitative evaluation of samples. However, samples can be very misleading. A lookup table storing the training data generates samples containing objects and perfect details, but obviously has not learned anything about either objects or the low-level statistics of natural images. 
\n\nIn contrast to the authors, I fail to see a meaningful difference between the groups of samples in Figure 1.\n\nThe VAE samples in Figure 3b) look quite smooth. Was independent Gaussian noise added to the VAE samples or are those (as is sometimes done) sampled means? If the former, what was sigma and how was it chosen?\n\nOn page 7, the authors conclude that “the pixelCNN clearly takes into account the output of the VAE decoder” based on the samples. Being a mixture model, a PixelCNN++ could easily represent the following mixture:\n\np(x | z) = 0.01 \\prod_i p(x_i | x_{<i}) + 0.99 \\prod_i p(x_i | z)\n\nThe first term is just like a regular PixelCNN++, ignoring the latent variables. The second term is just like a variational autoencoder with factorial decoder. The samples in this case would be dominated by the VAE, which depends on the latent state. The log-likelihood would be dominated by the first term and would be minimally effected (see Theis et al., 2016). Note that I am not saying that this is exactly what the model has learned. I am merely providing a possible counter example to the notion that the PixelCNN++ has learned to use of the latent representation in a meaningful way.\n\nWhat happens if the KL term is simply downweighted but the factorial decoder is not included? This seems like it would be a useful control to include.\n\nThe paper is well written and clear.", "Hello Everyone, \n\nI read the updated version of the paper and went through the discussion here in the comment section. As mentioned before: the paper does not significantly push the state of the art for density modelling and the empirical results do not outperform pure autoregressive approaches. \n\nNevertheless, I think there is a active community of researches interested in combining latent variable models with autoregressive decoders for various reasons (e.g. sampling runtime performance; using latent representations for related tasks; etc. ). I agree with Reviewer2 that this paper does not solve this issue, but I think it contributes to \nthe ongoing discussion in the field. And compared to [Chen et al. '17, Kolesnikov & Lampert '17, Reed et. al. 17, etc.] it does provide a new perspective: the interpretation of a generative model for replicated pixels. \n\nI think this perspective deserves to be heard and I will therefore maintain my rating (good paper, accept)", "The proposed approach is straight forward, experimental results are good, but don’t really push the state of the art. But the empirical analysis (e.g. decomposition of different cost terms) is detailed and very interesting. ", "The authors present Auxiliary Guided Autoregressive Variational autoEncoders (AGAVE), a hybrid approach that combines the strengths of variational autoencoders (global statistics) and autorregressive models (local statistics) for improved image modeling. This is done by controlling the capacity of the autorregressive component within an auxiliary loss function.\n\nThe proposed approach is a straightforward combination of VAE and PixelCNN that although empirically better than PixelCNN, and presumably VAE, does not outperform PixelCNN++. Provided that the authors use PixelCNN++ in their approach, quantitively speaking, it is difficult to defend the value of adding a VAE component to the model. The authors do not describe how \\lambda was selected, which is critical for performance, provided the results in Figure 4. 
That being said, the contribution from the VAE is likely to be negligible given the performance of PixelCNN++ alone.\n\n- The KL divergence in (3) does more than simply preventing the approximation q() from becoming a point mass distribution.", "Dear reviewers,\n\nThe paper has been updated to take into account major remarks made. The appendix now contains a presentation of why it is always optimal for the auto-regressive decoder to use the latent variables no matter its expressivity, an additional visualization where the auxiliary representation is fixed and multiple images are sampled from the auto-regressive component, and a control experiment in which the auxiliary loss is removed after pre-training. \n\nAgain, thank you for your feedback and time spent reviewing this paper.", "Dear reviewers, thank you for the constructive feedback. We discuss the concerns raised in separate answers to each review, aiming for brevity and clarity in this first response. Please don’t hesitate to let us know if you feel certain points should be discussed in more detail. We will revise the pdf based on your feedback over the coming days.\n\nUpdate: \nThe paper has been updated to take into account major remarks made by the reviewers. The appendix now contains a presentation of why it is always optimal for the auto-regressive decoder to use the latent variables no matter its expressivity, an additional visualization where the auxiliary representation is fixed and multiple images are sampled from the auto-regressive component, and a control experiment in which the auxiliary loss is removed after pre-training. \n", "Thank you for your update, we now answer the additional points raised. \n\nLet us briefly recall the argument of [Theis et al., 2016] on possible mismatches between sample quality and log-likelihood scores of models. Theis et al, present a hypothetical mixture model p(x) = a p1(x) + (1-a) p2(x), where p1 attains a good likelihood but samples of poor visual quality, and p2 gives a poor likelihood but samples of good visual quality. With small a, e.g. a=0.001, almost all samples will look good. Yet the model can still attain a good likelihood since log(p(x)) >= log(a) + log(p1(x)), and for high dimensional x the log(a) term will be negligible compared to the log-likelihood term log(p1(x)). \nApplied to our setting, it could be that we obtain good likelihood scores with such a mixture, in which p1 would be a pixelCNN ignoring the latent variable, and p2 would generate samples that correlate with our intermediate latent variable representation f(z) (see Fig 2 in the paper).\n\nLet us first make a general remark about the mixture example of Theis et al. Although such a hypothetical mixture can in principle obtain a “good” likelihood score (relatively close to that of p1), there is an incentive in the training objective not to converge to such solutions. First, the second component p2 is assumed to have poorer log-likelihood than p1. The mixture is therefore a suboptimal solution, in particular a better solution would be obtained by just using the first mixture component p1. Second, assume that we do have a solution that includes the mixture component with good samples, despite being detrimental to the likelihood. 
Given that it is non-trivial to obtain a model that generates samples with good visual quality, finding such a solution would require another training signal, which we do not have in our model.\n\nIn addition, we now explain why in our approach, no matter how expressive the autoregressive decoder is, to minimize the loss it is optimal to use the latent variables. Therefore, in the hypothetical mixture of Theis et al, p1 should generate samples that correlate with the latent variable, unless it is also suboptimal in terms of the loss.\n\nFor the standard VAE loss, it has been shown that given a sufficiently expressive decoder, it is optimal to ignore the latent variables. See for example [Variational Lossy Autoencoder, Chen et al] for a theoretical justification based on bits-back coding. The main argument is that, under the evidence lower bound objective function of eq (2) in our paper, for a model without latent variables the KL penalty can be trivially set to zero since in that case p(z|x) = p(z) and we can choose q(z)=p(z), while this is not the case for a model that does exploit latent variables for which p(z|x) can be arbitrarily complex depending on the relation between x and z. Therefore, models without latent variables are favored, as long as they obtain a log-likelihood that is the same as a model with latent variables, or worse up the the KL cost of the latent variable model.\n\nOur setting is different: the intermediate auxiliary loss ensures a meaningful latent variable since the factored decoder can only model variable dependencies through the latent variables. This is what underlies any non-degenerate variational autoencoder model with a decoder of limited capacity. Given that the factored decoder induces a certain non-zero KL “cost” in eq (2), there is no longer an advantage in ignoring the latent variable for the autoregressive decoder. Indeed, since the factored decoder renders x and z dependent, the uncertainty on x is reduced by conditioning on z. Therefore, it is optimal for the autoregressive decoder to exploit the information on x carried by the posterior on the latent variable q(z|x), \n\nWe will clarify these points in the paper.\n\n“ What happens if the KL term is simply down-weighted but the factorial decoder is not included? This seems like it would be a useful control to include.” \n\nSimply down-weighting the KL term (to less than 1) would indeed encourage the decoder to use the latent variables. The reconstruction quality would clearly improve, however in that case the loss would no longer be a lower-bound on the log likelihood of the model. We have performed a control similar to what you suggest: we trained our model to convergence, then removed the auxiliary loss and tried fine-tuning from there. This strong initialization could point the model towards good use of the latent variables. When doing this, however, the encoder posterior immediately collapses to the prior and the pixel CNN samples become independent of the latent variables. This shows that the auxiliary loss is necessary to enable the use of the latent variables by the expressive prior. We will include visualizations of this control in a revised version of the paper shortly.\n", "Hi,\n\nYou are welcome, don't hesitate to ask. \n\n\" From what I understand, the vae stream is essentially the same as the other 2 streams, with the difference that the convolutions are not masked. Does this stream have as many layers as the other 2 ? 
In other words, does this stream also have 6 layers of n resnet blocks? What kernel size was used for the non masked convolutions ? \"\n\nThe non masked convolutions use a kernel of size 3x3. You are correct, it is exactly the same design as the two other streams: everywhere there is a block for the two others, there is one for the new stream, and down sampling is done at the same time. The main reason for this is simplicity, and that's why we preferred it over other means of conditioning a pixelCNN that are used in other papers.\n\n\"Is this connection similar to how the left stream takes input from the up stream ?\"\nYes, it is done exactly in the same way. Also, the conditioning stream and the one looking up are given to the one looking left by first concatenating them.\n\nGood luck for the challenge", "Hi, \n\nthanks for taking the time to answer my questions, it is much appreciated. I have a few follow up questions if you don't mind : \n\nFrom what I understand, the vae stream is essentially the same as the other 2 streams, with the difference that the convolutions are not masked. Does this stream have as many layers as the other 2 ? In other words, does this stream also have 6 layers of n resnet blocks? What kernel size was used for the non masked convolutions ?\n\nSecond, you connect the vae stream with the rest of the network by letting the stream looking left take input from the vae stream. Is this connection similar to how the left stream takes input from the up stream (a non-linearity followed by network-in-network (1x1 conv) connection) ?\n\nThanks again", "Hello,\n\nYour questions are very relevant, and will be helpful to improve reproducibility, thank you for your interest. We will release the code if the paper is accepted. In the meantime, we'll do our best to answer your questions. Do not hesitate to ask if you need more information.\n\n\"We allow each layer of the pixel-CNN to take additional input using non-masked convolutions from the feature stream based on the VAE output\". How is this implemented exactly ? Looking at the PixelCNN++ implementation, we see that every layer is composed of 2 streams of (n=5) resnet blocks, specifically one downwards and one downward+rightward stream. In AGAVE, do you feed the vae output to every resnet block in both streams ? or do you you feed it to the top resnet block at every layer for both streams ?\"\n\nIn Agave we use separate stream for the conditional information: the coarse image is given as input to that new stream at the first layer, then this new stream is used as input to the stream looking left. This is different from other methods of conditioning used with pixelCNN and pixelCNN++, we find it quite natural.\nThe new stream does not take other inputs, so convolutions do not have to be masked. You could chose to also give other streams as input to the new one, provided that the convolutions across the new inputs are masked, and that if the stream looking to the left is used as input to the conditioning, then the conditioning is not used as input to the stream looking up (that would 'look into the future').\n\n\"Second, you mention that you condition on the \"VAE decoder output f(z), or possibly an upsampled (downsampled?) version if y has a lower resolution than x\". How exactly do you perform downsampling ? do you use strided convolutions ? 
If so, do you use the same downsampled vae output for the kth and the (K-k)th layer of the PixelCNN, as they have the same resolution?.\"\n\nThe choice of downsampling method (max pooling, average pooling, strided convolution) is not critical for log-likelihood performance, the results reported all use average pooling. The second part of your question is answered in our previous answer: $y$ is given as input at the top of the stream. The new stream is downsampled at the same time as the other ones.\n\n\"Third, did you use the default hyperparameters proposed on the repositories of IAF/VAE and PixelCNN++ ? If not, what modifications did you make? Did you reduce the size of the networks so that they both fit on a single GPU ? What kind of initialisation was performed on the weights ?\"\n\nThe hyperparameters are modified as little as possible. Changing the depth of the pixelCNN is an option that we explored. It slightly hurts performance for instance by using 3 resnet blocks instead of 5 we end up with 2.96 bpd, but does bring the size of the model down. One of our goal was to show that our method can be used without restraining the autoregressive component, though, so this was done more as a sanity check to confirm that it is indeed useful to use the full model. If you are working with the VAE at a downsampled scale, the memory cost of this component is greatly reduced. If size on the GPU is an issue (it was for us, as we trained with 1 GPU only) you have two options: pretrain each component separately without compromise, then finetune the two together with batch sizes small enough to fit your GPU, or train both components together with a reduced batch size to begin with. Weight normalisation was used in our experiments, and we used the associated initialisation.\n\n\"Lastly, for how long were the PixelCNN and IAF/VAE models pretrained ? Do you have any other advice/specifications for people aiming to reproduce your results ?\"\n\nApproximately 2 days for the VAE, and 3 days for the pixelCNN when pretraining sequentially. If you pretrain the pixelCNN on the downsampled GT, training time of the pixelCNN will be greatly reduced though we did not do so in our final experiments to not overly complicate the training procedure. If you are working at a reduced scale for the VAE, a day is more than enough. General advice could include: displaying a lot of curves (breakdown of the different cost functions), visualizing intermediate representations and intermediate targets as well as final ones, and having a minimal version of the architecture to debug, as pixelCNN++ is a heavy model.\n\nWe hope this help, don't hesitate if you need more information or if something is unclear.\n\n\n", "Thank you for your detailed remarks and analysis of our work. We now answer your main concerns, if you want more information do not hesitate to ask. \n\nAnnonReviewer2: \"To me, the most interesting bit of information in this paper was the realization that you can weight the reconstruction and KL terms of a VAE and interpret it as variational inference in a generative model with multiple copies of pixels (below Equation 7). Unfortunately the authors were unable to make any good use of this insight [...]\"\n\nThis insight is indeed one of the cornerstones of our contribution. Combining a factorial VAE model over the pixels with an conditional autoregressive model over another copy of the pixels naturally leads to two setting of lambda (see eq. 
7 and paragraph just below), and shows that larger values of lambda also lead to valid lower bounds on the combined loss. Based on this observation, we explore models trained with different values of lambda, which improves performance from 3.2 bpd (lambda=1) to 2.92 bpd (lambda=12). The latter sets a new state-of-the-art bpd level among generative models with a non-degenerate latent variable structure. We also provide quantitative and qualitative analysis of the effect different choices of balance have on the model. In that sense, we believe the insight has been put to good use. \n\n\nAnnonReviewer2: “As the paper is written now, it is not clear what the goal of the authors is. Is it density estimation? Then the addition of the VAE had no measurable effect on the PixelCNN++’s performance, i.e., it seems like a bad idea due to the added complexity and loss of tractability. Is it representation learning? Then the paper is missing experiments to support the idea that the learned representations are in any way an improvement. Is it image synthesis (not a real application by itself), then the paper should have demonstrated the usefulness of the model on a real task and probably involve human subjects in a quantitative evaluation.”\n\nThanks for this valuable input, that will help us to clarify the message of our paper. Our goal is to learn generative models (i.e. density estimation) with latent variable models (i.e. representation learning), for the reason stated above in response to AnnonReviewer3. Our quantitative experimental results in terms of likelihood on held-out data (the bpd metric) improve over earlier latent variable models in the literature. The images sampled from our model are used as a secondary qualitative form of evaluation. The examples in Figure 3 show that our model indeed learns a meaningful latent variable structure that is conditioning the autoregressive decoder (see also below).\n\nAnnonReviewer2: “The VAE samples in Figure 3b) look quite smooth. Was independent Gaussian noise added to the VAE samples or are those (as is sometimes done) sampled means? If the former, what was sigma and how was it chosen?”\n\nThe images produced by the VAE and shown in Figure 3b) are indeed the means of the output distribution, which is why no 'salt and pepper noise' can be seen. We will clarify the text in this respect. The variance is a learned constant per color channel used across all spatial positions, and independent of the latent variable.\n\nAnnonReviewer2: “On page 7, the authors conclude that “the pixelCNN clearly takes into account the output of the VAE decoder” based on the samples. Being a mixture model, a PixelCNN++ could easily represent the following mixture: p(x | z) = 0.01 \\prod_i p(x_i | x_{“\n\nThe end of this argument was unfortunately cut off. Can you please send an update with the complete text? \n\nIn the meantime, let us respond as follows. The samples in figures 1c, 3 and 6 show the intermediate representation f(z) that is computed by the VAE decoder, together with samples from the pixelCNN decoder that is conditioned on f(z), see also Figure 2 for schematic overview. \nLet us suppose, contrary to our claim, that the pixelCNN decoder ignores the output of the VAE decoder, i.e. that the pixelCNN output is independent of the VAE output. In this case the image pairs of VAE decoder output and PixelCNN sample should not correlate at all in figures 1c, 3 and 6. 
Yet, we observe in each single example a clear correspondence between the pixelCNN sample and the conditioning VAE output. The latter looks like a smoothed version of the former. Therefore, we conclude that the latter is not independent of the former, and that pixelCNN does take into account the VAE output, i.e. we succeed in conditioning the pixelCNN on the VAE output. \nWe hope this clarifies our statement, and we will update the text accordingly. \n", "Thank you for your appreciation of our analysis and empirical evaluation. \n\nAnnonReviewer1: “The proposed approach is straight forward, experimental results are good, but don’t really push the state of the art. But the empirical analysis (e.g. decomposition of different cost terms) is detailed and very interesting.”\n\n We provide justifications of why we believe that our work significantly pushes the state of the art in latent variable density modeling in our other answers, and hope these arguments are satisfying. \n", "Thank you for your constructive review of our work. We will address the main concerns raised, please do not hesitate to ask for more detail if needed. \n\nAnnonReviewer3: “ [...] empirically better than PixelCNN, and presumably VAE, does not outperform PixelCNN++.” “ Provided that the authors use PixelCNN++ [...] it is difficult to defend the value of adding a VAE component to the model”\n\nOur main contribution is a method based on an auxiliary loss to learn generative models that combine a non-degenerate latent variable structure with expressive autoregressive decoders. \n\nAmong latent variable models, our model sets a new state-of-the-art result of 2.92 bpd. That is the same score as obtained by pixelCNN++ which does not learn a latent variable representation. Among VAE models with factored observation model of p(x|z), VAE-IAF obtains the best quantitative score of 3.11 bpd on CIFAR10. Our performance of 2.92 bpd represents an important improvement. The improvement over Lossy-VAE (2.95 bpd), former best model with latent variables, is 0.03 bpd. Our auxiliary loss allows us to use a more powerful autoregressive decoder, which allows us to improve over the Lossy-VAE result. The numbers that support these claims can be found in Table 1. We will improve the presentation of Table 1 to highlight which models use latent variables and/or autoregressive decoders, to more easily appreciate our contribution in terms of the quantitative evaluation results.\n\nUnlike autoregressive models such as pixelCNN++, latent variable models learn data representations which are useful for tasks such as semi-supervised learning, see e.g. (Kingma et al., NIPS 2014). Therefore we believe that our work makes an important contribution to generative representation learning. \n\nAnnonReviewer3: “The authors do not describe how lambda was selected, which is critical for performance [...]” \n\nThe results used when comparing with the state of the art are reported using our best configuration with lambda equal to 12. We will clarify this in the text. In figures 3, 4 and 5 qualitative and quantitative results are reported, as indicated, over a range of lambda values. The choice of lambda is important, but not critical, provided it is taken to be bigger than two: Figure 4 shows that beyond the first drop of 0.20 bpd when going from lambda =1 to lambda = 2, the bpd further decreases monotonically to down to an improvement of 0.26 bpd for lambda=12. 
\n", "Hi, I'm a Master's student in Computer Science taking part in the ICLR reproducibility challenge, and I have a few questions regarding implementation. \n\nFirst, you mention that \"We allow each layer of the pixel-CNN to take additional input using non-masked convolutions from the feature stream based on the VAE output\". How is this implemented exactly ? Looking at the PixelCNN++ implementation, we see that every layer is composed of 2 streams of (n=5) resnet blocks, specifically one downwards and one downward+rightward stream. In AGAVE, do you feed the vae output to every resnet block in both streams ? or do you you feed it to the top resnet block at every layer for both streams ?\n\nSecond, you mention that you condition on the \"VAE decoder output f(z), or possibly an upsampled (downsampled?) version if y has a lower resolution than x\". How exactly do you perform downsampling ? do you use strided convolutions ? If so, do you use the same downsampled vae output for the kth and the (K-k)th layer of the PixelCNN, as they have the same resolution?\n\nThird, did you use the default hyperparameters proposed on the repositories of IAF/VAE and PixelCNN++ ? If not, what modifications did you make? Did you reduce the size of the networks so that they both fit on a single GPU ? What kind of initialisation was performed on the weights ?\n\nLastly, for how long were the PixelCNN and IAF/VAE models pretrained ? Do you have any other advice/specifications for people aiming to reproduce your results ? \n\n\nMany thanks" ]
[ -1, 5, -1, 7, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, 4, -1, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "Bk9xqcOgG", "iclr_2018_HkGcX--0-", "B1LwiG9gz", "iclr_2018_HkGcX--0-", "iclr_2018_HkGcX--0-", "iclr_2018_HkGcX--0-", "iclr_2018_HkGcX--0-", "Bk9xqcOgG", "r1zYWb9bG", "BkLUDaPZz", "Byjzhwnlf", "Bk9xqcOgG", "B1LwiG9gz", "BylKxYolM", "iclr_2018_HkGcX--0-" ]
iclr_2018_SkxqZngC-
A Bayesian Nonparametric Topic Model with Variational Auto-Encoders
Topic modeling of text documents is one of the most important tasks in representation learning. In this work, we propose iTM-VAE, which is a Bayesian nonparametric (BNP) topic model with variational auto-encoders. On one hand, as a BNP topic model, iTM-VAE potentially has infinite topics and can adapt the topic number to data automatically. On the other hand, different with the other BNP topic models, the inference of iTM-VAE is modeled by neural networks, which has rich representation capacity and can be computed in a simple feed-forward manner. Two variants of iTM-VAE are also proposed in this paper, where iTM-VAE-Prod models the generative process in products-of-experts fashion for better performance and iTM-VAE-G places a prior over the concentration parameter such that the model can adapt a suitable concentration parameter to data automatically. Experimental results on 20News and Reuters RCV1-V2 datasets show that the proposed models outperform the state-of-the-arts in terms of perplexity, topic coherence and document retrieval tasks. Moreover, the ability of adjusting the concentration parameter to data is also confirmed by experiments.
rejected-papers
The paper proposes a BNP topic model that uses a stick-breaking prior over document topics and performs VAE-style inference over them. Unfortunately, the novelty of this work is limited, as VAE-like inference for LDA-like models, inference with stick-breaking priors for VAEs, and placing a prior on the concentration parameter in a non-parametric topic model have all been done before (see e.g. Srivastava & Sutton (2017), Nalisnick & Smyth (2017), and Teh, Kurihara & Welling (2007) respectively). There are also concerns about the correctness of treating topics as parameters (as opposed to random variables) in the proposed model. The authors' clarification regarding this point was helpful but not sufficient to show the validity of the approach.
train
[ "ByaL9g21M", "SyhzVlKez", "rJfo7HsxG", "SySKU3i7f", "r1ceZuQmf", "rJreJd77M", "ryyiTwS-f", "S1qr6DS-M", "SJCGAPH-f", "HyKCvwrWM", "HkykYDB-M" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author" ]
[ "\"topic modeling of text documents one of most important tasks\"\nDoes this claim have any backing?\n\n\"inference of HDP is more complicated and not easy to be applied to new models\" Really an artifact of the misguided nature of earlier work. The posterior for the $\\vec\\pi$ of a elements of DP or HDP can be made a Dirichlet, made finite by keeping a \"remainder\" term and appropriate augmentation. Hughes, Kim and Sudderth (2015) have avoided stick-breaking and CRPs altogether, as have others in earlier work. Extensive models building on simple HDP doing all sorts of things have been developed.\n\nVariational stick-breaking methods never seemed to have worked well. I suspect you could achieve better results by replacing them as well, but you would have to replace the tree of betas and extend your Kumaraswamy distribution, so it may not work. Anyway, perhaps an avenue for future work.\n\n\"infinite topic models\" I've always taken the view that the use of the word \"infinite\" in machine learning is a kind of NIPSian machismo. In HDP-LDA at least, the major benefit in model performance comes from fitting what you call $\\vec\\pi$, which is uniform in vanilla LDA, and note that the number of topics \"found\" by a HDP-LDA sampler can be made to vary quite widely by varying what you call $\\alpha$, so any statement about the \"right\" number of topics is questionable. So the claim in 3rd paragraph of Section 2, \"superior\" and \"self-determined topic number\" I'd say are misguided. Plenty of experimental work to support this.\n\nIn Related Work, you seem to only mention HDP for non-parametric topic models. More work exists, for instance using Pitman-Yor distributions for modelling words and using Gibbs samplers that are efficient and don't rely on the memory hungry HCRP.\n\nGood to see a prior is placed on the concentration parameter. Very important and not well done in the community, usually. \nADDED: Originally done by Teh et al for HDP-LDA, and subsequently done\nby several, including Kim et al 2016. Others stress the importance of this. You need to\ncite at least Teh et al. in 5.4 to show this isn't new and the importance is well known.\n\nThe Prod version is a very nice idea. Great results. This looks original, but I'm not expert enough in the huge masses of new deep neural network research popping up.\n\nYou've upped the standard a bit by doing good experimental work. Oftentimes this is done poorly and one is left wondering. A lot of effort went into this.\nADDED: usually like to see more data sets experimented with\n\nWhat code is used for HDP-LDA? Teh's original Matlab HCRP sampler does pretty well because at least he samples hyperparameters and can scale to 100k documents (yes, I tried). The comparison with LDA makes me suspicious. For instance, on 20News, a good non-parametric LDA will find well over 400 topics and roundly beat LDA on just 50 or 200. If reporting LDA, or HDP-LDA, it should be standard to do hyperparameter fitting and you need to mention what you did as this makes a big difference.\nADDED: 20News results still poor for HPD, but its probably the implementation used ... 
their\n online variational algorithm only has advantages for large data sets \n\nPros: \n* interesting new prod model with good results\n* alternative \"deep\" approach to a HDL-LDA model\n* good(-ish) experimental work\nCons:\n* could do with a competitive non-parametric LDA implementation\n\nADDED: good review responses generally\n", "The paper constructs infinite Topic Model with Variational Auto-Encoders (iTM-VAE) by combining stick-breaking variational auto-encoder (SB-VAE) of Nalisnick & Smyth (2017) with latent Dirichlet allocation (LDA) and several inference techniques used in Miao et al. (2016 & 2017). A main difference from Autoencoded Variational Inference For Topic Model (AVITM) of Srivastava & Sutton (2017), which had already applied VAE to LDA, is that the Dirichlet-distributed topic distribution vector for each document is now imposed with a stick-breaking prior. To address the challenge of reparameterizing the beta distributions used in stick-breaking, the paper follows SB-VAE to use the Kumaraswamy distributions to approximate the beta distributions.\n\nThe novelty of the paper does not appear to be significant, considering that most of the key techniques used in the paper had already appeared in several related papers, such as Nalisnick & Smyth (2017), Srivastava & Sutton (2017), and Miao et al. (2016 & 2017). \n\nWhile experiments show the proposed models outperform the others quantitatively (perplexity and coherence), the paper does not provide sufficient justifications on why ITM-VAE is better. In particular, it provides little information about how the two baselines, LDA and HDP, are implemented (e.g., via mean-field variational inference, VAE, or MCMC?) and how their perplexities and topic coherences are computed. In addition, achieving the best performance with about 20 topics seem quite surprising for 20news and RCV-v2. It is hard to imagine 20 news, which consists of articles in 20 different newsgroups, can be well characterized by about 20 different topics. Is there a tuning parameter that significantly impacts the number of topics inferred by iTM-VAE?\n\nAnother clear problem of the paper is that the “Bayesian nonparametric” generative procedure specified in Section 4.1 is not correct in theory. More specifically, independently drawing the document-specific pi vectors from the stick-breaking processes will lead to zero sharing between the atoms of different stick-breaking process draws. To make the paper theoretically sound as a Bayesian nonparametric topic model that uses the stick-breaking construction, please refer to Teh et al. (2006, 2008) and Wang et al. (2011) for the correct construction that ties the document-specific pi vectors with a globally shared stick-breaking process. ", "The paper proposes a VAE inference network for a non-parametric topic model.\n\nThe model on page 4 is confusing to me since this is a topic model, so document-specific topic distributions are required, but what is shown is only stick-breaking for a mixture model.\n\nFrom what I can tell, the model itself is not new, only the fact that a VAE is used to approximate the posterior. In this case, if the model is nonparametric, then comparing with Wang, et al (2011) seems the most relevant non-deep approach. Given the factorization used in that paper, the q distributions are provably optimal by the standard method. Therefore, something must be gained by the VAE due to a non-factorized q. 
This would be best shown by comparing with the corresponding non-deep version of the model rather than LDA and other deep models.", "Dear Area Chair and Reviewers,\n\nWe have revised our submission according to your valuable comments. Here we list the changes made to the original submission for your convenience. \n\n -1. [AR1] In Table 1, we have clarified that the results of HDP are indeed based on Wang, et al (2011).\n\n -2. [AR2] In the last paragraph of Section 4.1 in the revision, we explain why our model does not need a global base distribution. And we also explain that the topics (atoms) do be shared across documents in different stick-breaking draws by iTM-VAE naturally. If, as AR2 worried, our model leads to “zero sharing” of topics, the model will learn nothing. In contrast, our model learns quite meaningful topics (c.f. Table 4 in the revision). We are confident that our method is theoretical correct. Please refer to Q1 of our rebuttal to AR2 for more details.\n\n -3. [AR2] In Section 5.4, we add a paragraph (the 2nd paragraph) to emphasize the novelty of iTM-VAE-G model, which imposes a prior over the concentration parameter, and analyze the benefits of iTM-VAE-G model. We also add an experiment to show the advantage of iTM-VAE-G over other commonly used tricks for VAE-based models to increase the adaptive power, such as KL annealing and regularizing the decoder in Table 2 and the 3rd and 4th paragraph of Section 5.4. \n\n -4. [AR2] In Section 7.1 of the revision, we provide more justifications on why iTM-VAE is better. We compare the sparsity and the TSNE of the representations of iTM-VAE and ProdLDA in Figure 4. We also list the topics learned by ProdLDA in Table 5. Compared to Table 4, there are a lot of redundant topics in Table 5.\n\n -5. [AR2 and AR3] In Table 1, we have clarified that the results of LDA and HDP are taken from Miao, et al. (2017). Since we use the exactly same datasets as Miao, et al. (2017), we can take the results of LDA and HDP directly. According to Miao et al. (2017), they use Hoffman et al., (2010) for LDA and Wang et al. (2011) for HDP, which are both based on variational inference.\n\n -6. [AR3] In the 3rd paragraph of Section 1, we have removed the claim “Inference of HDP is more complicated and not easy to be applied to new models …”\n\n -7. [AR3] In the 3rd paragraph of Section 2, we have removed \"superior\" and \"self-determined topic number\". \n\n -8. [AR3] In Related Work, we have added more references, such as Kim & Sudderth (2011); Archambeau et al. (2015) and Lim et al. (2016). \n", "Dear Reviewer,\n\nThanks very much for your review!\n\nWe received the review at Dec 2nd and we have posted the rebuttal and the revision at Dec 6th. In the rebuttal, we explained why iTM-VAE does not need a globally shared sticking-breaking process and the documents DO shared the topics of the model. We also clarified the novelty of the paper and addressed some concerns of the reviewer. \n\nWe hope that our rebuttal has addressed your concerns. And we are still waiting for your feedback. We are looking forward to hearing from you. Many thanks!\n\n", "Dear Reviewer, \n\nThanks very much for your review!\n\nWe received the review at Dec 2nd and we posted the rebuttal and the revision at Dec 6th. We hope that our rebuttal has addressed your concerns. And we are still waiting for your feedback. We are looking forward to hearing from you.\n\n", "Q2. 
“The novelty of the paper does not appear to be significant, considering that most of the key techniques used in the paper had already appeared in several related papers, such as Nalisnick & Smyth (2017), Srivastava & Sutton (2017), and Miao et al. (2016 & 2017). ”\n\n\nWe respectfully beg to differ that the novelty of the paper is not significant. \n1) Although the paper shares the similar motivation with Srivastava & Sutton (2017) that using neural network to do topic modeling tasks, the architecture of iTM-VAE is quite different with AVITM. Moreover, our model is a kind of nonparametric model while AVITM is not. At last, the performance (perplexity, topic coherence) of our model is much better than AVITM.\n\n2) Compared with Nalisnick & Smyth (2017), which proposes to replace the normal prior with a stick breaking prior for traditional VAE, our model is a kind of topic model for discrete text data. Moreover, we propose to place a prior on the concentration parameter such that the model is able to adjust the concentration parameter to data automatically. As commented by AR3, this technique is “very important and not well done in the community, usually”. We have demonstrated the adaptive power of iTM-VAE-G in Section 5.4. \n\nFurthermore, iTM-VAE-G can alleviate another commonly mentioned problem of the optimization under VAE framework: the latent representation will tend to collapse to the prior, which might leads to poor adaptive power in our model if the decoder is strong. iTM-VAE-G can increase the adaptive power of the model in an elegant way. Please refer to the newly added discussion in the second paragraph of Section 5.4 and Table 2 in the revision for details. \n\n3) The common point of our model with Miao et al. (2017) is that both of them can adapt the number of topics to data automatically. Miao et al. (2017) uses a heuristic indicator to instruct the growing of the topic number. While in our model, the adaptation of topic number is done in a natural Bayesian way. Placing a prior and carrying out corpus-level variational inference on the concentration parameter is an elegant way of adapting model power to datasets. \n\nPlease also refer to Q3 of AR1 for the discussion of the novelty. Thanks very much!\n\nQ3. “While experiments show the proposed models outperform the others quantitatively (perplexity and coherence), the paper does not provide sufficient justifications on why ITM-VAE is better.”\n\nGood question and thanks for this comment. According to our observation, the advantage of iTM-VAE lies in that the model adjusts the number of topics according to the data and the stick-breaking prior encourages the the model to learn sparse representations. Hence, the topics learned by iTM-VAE are usually diverse and of high quality, and the latent representations of documents are usually more discriminative. To show this point, we illustrate the topics learned by AVITM in Table 5 in Appendix 7.1 when K is set to 50. We can see that there are a lot of redundant topics. However, the topics learned by our model are diverse, which is shown in Table 4. We also show the TSNE of the latent representations of document learned by AVITM in Figure 4-(c) for comparison.\n", "Q4. “In particular, it provides little information about how the two baselines, LDA and HDP, are implemented (e.g., via mean-field variational inference, VAE, or MCMC?) and how their perplexities and topic coherences are computed.”\n\nThanks very much for this comment. We take the results of LDA and HDP from Miao et al. 
(2017), since we use the exactly same datasets as them (Yes, we are appreciated that Miao kindly provides the exactly same datasets to us privately.). According to Miao et al. (2017), they use Hoffman et al. , (2010) for LDA and Wang et al. (2011) for HDP, which are both based on variational inference. The LDA results in Figure 1 are also based on Hoffman et al. , (2010) (http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.LatentDirichletAllocation.html). \nWe have listed the equation for perplexity in Section 5.1, and we use the code provided by Lau et al (2014) (the code is at https://github.com/jhlau/topic_interpretability/ , which is also used by Srivastava & Sutton (2017)) to compute topic coherence. We make this point clearer in the revision. \n\nQ5. “It is hard to imagine 20 news, which consists of articles in 20 different newsgroups, can be well characterized by about 20 different topics.”\n\nYes, it seems amazing that about 20 different topics can explain 20News dataset quite well. However, according to the description of 20News (http://qwone.com/~jason/20Newsgroups/), some of the newsgroups are very closely related to each other. Since 20News is a quite small dataset and many groups are very closely related (e.g. comp.graphics, comp.sys.mac.hardware, comp.windows.x), it is reasonable to explain the data with a small number of topics. Table 5 in the revision shows that among 50 topics learned by AVITM, there are many redundant topics. Moreover, the curves in Figure 1-(a) also confirms that about 20 topics is a good choice for the other topic models. \n\nQ6. “Is there a tuning parameter that significantly impacts the number of topics inferred by iTM-VAE?”\n\nThe concentration parameter $\\alpha$ is the one that significantly impacts the number of topics inferred by iTM-VAE. Since it is a very important parameter, we place a prior on $\\alpha$ which helps the model to adjust $\\alpha$ to data automatically. Section 5.4 demonstrates the effectiveness of the prior. If we do not place the prior, iTM-VAE-Prod with strong decoder cannot adapt well to different sub-sampled subsets of 20News dataset. Common training techniques, e.g. KL annealing, decoder regularization, do not alleviate this problem significantly. While iTM-VAE-G has better adaptive ability w.r.t dataset size. This is also one of the key contributions of the paper.\n\nWe hope that this rebuttal can address the concerns and some misunderstandings of the reviewer. Please let us know whether the reviewer has other comments. We are looking forward to hearing from you.\n", "We thank the reviewer for the valuable and detailed comments. The main concerns raised by the reviewer are:\nAbout the correctness of the model: Independently drawing the document-specific pi vectors from the stick-breaking processes will lead to zero sharing between the atoms of different stick-breaking process draws.\nAbout the novelty: The novelty of the paper is not significant.\nAbout the experiment details: The paper provides little information about how the two baselines, LDA and HDP, are implemented (e.g., via mean-field variational inference, VAE, or MCMC?) and how their perplexities and topic coherences are computed. \nAbout the learned topic numbers: It seems surprising that 20News dataset can be characterized by about 20 different topics.\n\nIn this rebuttal, we have addressed all these concerns raised by the reviewer, and the paper has been modified accordingly. \n\nQ1. 
“Independently drawing the document-specific pi vectors from the stick-breaking processes will lead to zero sharing between the atoms of different stick-breaking process draws.”\n\nThere might be some misunderstandings. In fact, the atoms of different stick-breaking process draws are shared in our model. Actually, iTM-VAE does not need a base distribution to guarantee the sharing of topics across documents. Let us explain this point in 3 aspects:\n\na. Why do traditional nonparametric Bayesian topic models, such as HDP, require a globally shared base distribution?\n\nIndeed, traditional nonparametric topic models, such as HDP, require a globally shared base distribution. Note that, the main reason is that these models assume the “topics” are random variables and drawn from a distribution. As a result, people have to use a globally base distribution to generate a set of countably infinite topics, such that these candidate topics are shared, otherwise the drawn topics will not be shared. (c.f. Section 6.1 of [Teh et al. 2006], https://people.eecs.berkeley.edu/~jordan/papers/hdp.pdf). Thus, a globally shared based distribution is used to generate a countably infinite set of topics that can be shared by different documents.\n\nb. Why the global base distribution is not required in iTM-VAE to make the atoms of different stick-breaking process draws shared by documents?\n\nDifferent with traditional nonparametric topic models, the topics in iTM-VAE are NOT drawn from a distribution, but are treated as part of the parameters that are optimized by the model directly. Specifically, in Section 4.1, we use $\\Phi$ to denote the corresponding parameters. This key difference indicates that we do not need an additional base distribution to generate the countably infinite candidate topics, since they are parameters. Consequently, they are shared across all documents naturally. Treating the topics as parameters are also adopted by LDA (c.f. the beta matrix in Section 3 of [Blei, et al. 2003] http://www.jmlr.org/papers/volume3/blei03a/blei03a.pdf), and the difference of our model with LDA is that the number of topics is potentially unlimited and is adapted with the data.\n\nc. What if a nonparametric topic model does not share topic between documents?\t\n\nAnother evidence that the model is correct is that, if the topics (atoms) are not shared by different stick-breaking processes, we cannot learn meaningful topics at all. However, the fact is that the topic coherence of our model is higher than other strong baselines. Please check the Section 7.1 in the revision to see the learned topics. We will release the code on github for you to check the correctness of the model. (The Section 7.1 in the revision corresponds to Section 7.3 in the original version. We add more experimental results and move it to the top of the Appendix such that readers can visualize the learned topics easily. )\n\nThanks for this comment. We have added this discussion in the revision.\n", "We thank the reviewer for the valuable comments. The main concerns raised by the reviewer are: \n1) About the model: There are no document-specific topic distributions, but only a stick-breaking process for a mixture model; \n2) About the novelty: The difference with Wang, et al (2011) is that VAE is used to approximate the posterior. Hence the model should compare with Wang, et al (2011) since it is the most relevant non-deep approach. 
\n\nWe address all these concerns raised the reviewer as follows:\nQ1: “The model on page 4 is confusing to me since this is a topic model, so document-specific topic distributions are required, but what is shown is only stick-breaking for a mixture model.”\n\nThere might be some misunderstandings. We do have the document-specific topic distributions on page 4, which is $\\pi$ in the generative process. The difference with LDA is that our model samples the document-specific topic distributions from a GEM distribution, while LDA samples them from a Dirichlet distribution. Actually, the generative procedure of iTM-VAE of Section 4.1 is similar to LDA, i.e. 1) sample a document-specific topic distributions $pi$; Then, for each word $w_i$ in the document: 2) draw a topic $\\hat{\\theta}_i$; 3) draw $w_i$ from $Cat(\\hat{\\theta}_i)$. \n\nMoreover, according to the section 3.2 of ( Blei2003, JMLR, http://www.jmlr.org/papers/volume3/blei03a/blei03a.pdf), LDA itself is also a mixture model. \n\nQ2. “In this case, if the model is nonparametric, then comparing with Wang, et al (2011) seems the most relevant non-deep approach.”, “This would be best shown by comparing with the corresponding non-deep version of the model rather than LDA and other deep models.”\n\nWe agree with the reviewer that it would be best to compare with Wang, et al (2011) to see the gain from VAE. Actually, the HDP in Table 1 is taken from Miao, et al(2017), which is actually based on Wang, et al (2011). We have clarified this point in the revision. Thanks very much for the suggestion!\n\nQ3. About the novelty of the paper.\n\nWe would like to clarify the novelty of the paper:\n1) As is pointed out by the reviewer, we introduce a global inference net to model the variational posterior of BNP topic models, and carry out optimization under the VAE framework. To our best knowledge, this is the first time that BNP topic models are combined with AEVB. This technique brings 2 benefits for BNP topic models. (1) No further variational updates are needed on the test data but only a feed-forward pass on inference net. Hence, the model is very efficient. (2) The optimization of VAE framework is very general. The generative model can be adjusted and enhanced without additional mathematic derivation. Hence, it is very flexible.\n\n2) We propose iTM-VAE-G, in which a prior is added on the concentration parameter. This technique helps the model to adjust the concentration parameter to data automatically, and we have shown the effect of the prior in Section 5.4. To further demonstrate the advantage of iTM-VAE-G, we compare it with iTM-VAE-Prod which does not have the prior over the concentration parameter. This newly added experiments are shown in Table 2 in the revision. We can see that when the decoder is strong, the number of effective topics learned by iTM-VAE-Prod(without the prior) cannot adapt well to different sub-sampled subsets of 20News dataset. Common training techniques, e.g. KL annealing, decoder regularization, do not help much. iTM-VAE-G can increase the adaptive power of the model in an elegant way. After the prior is added, the restriction on the latent representation is relaxed, the model will learn an appropriate and highly-confident corpus-level posterior for the concentration parameter, and can adapt its power according to the dataset size, even if the decoder is strong. 
As commented by AR3, this technique is “very important and not well done in the community, usually”.\n\n3) The experimental results confirm the advantage of the model.\n\nPlease let us know whether the rebuttal solves the concerns of the reviewer. We are looking forward to hearing from you. Thanks very much!\n", "We thank the reviewer for the insightful and valuable comments, and we will consider seriously the avenue of future work you point out to us. We address the concerns of the reviewer as follows.\n\nQ1. “Does this claim have any backing? ’Inference of HDP is more complicated and not easy to be applied to new models‘ …”\n\nThanks very much for this comment. We agree with you that the posterior of DP or HDP can be made to a Dirichlet by keeping a remainder term and appropriate augmentation, which makes the model easier to be optimized. By saying “Inference of HDP is more complicated and not easy to be applied to new models even with small changes in the generative process”, we mean that, unlike the black-box inference based models, people might need to redesign the inference methods when there are some changes in the generative process of HDP, which is not quite flexible. We have made this point clearer in the revision. \n\nQ2. “So the claim in 3rd paragraph of Section 2, \"superior\" and \"self-determined topic number\" I'd say are misguided. ”\n\nThanks for the suggestion that makes our paper more rigorous. We have modified the sentence to “The Bayesian nonparametric topic models, such as HDP, potentially have infinite topic capacity and are able to adapt the topic number to data.”\n\nQ3. “In Related Work, you seem to only mention HDP for non-parametric topic models. More work exists, for instance using Pitman-Yor distributions for modelling words and using Gibbs samplers that are efficient and don't rely on the memory hungry HCRP.”\n\nThanks for the suggestion! We have added more references for nonparametric topic model in the related work of the revision, such as Kim & Sudderth (2011); Archambeau et al. (2015) and Lim et al. (2016). \n\nQ4. “What code is used for HDP-LDA? “\n\nThanks for this comment. We take the results of HDP and LDA from Miao et al. (2017), since we use the exactly same datasets as Miao et al. (2017) (We are appreciated that Miao provides the exactly same datasets to us privately, hence we can take the results directly). According to Miao et al. (2017), the HDP is based on Wang et al. (2011), which is an online variational inference algorithm, and the LDA is based on the online variational inference model of Hoffman et al., (2010). The results of LDA in Figure 1 are also based on Hoffman et al., (2010), where we use the implementation from http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.LatentDirichletAllocation.html. We have made it clearer in the revision." ]
[ 7, 3, 5, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, 2, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_SkxqZngC-", "iclr_2018_SkxqZngC-", "iclr_2018_SkxqZngC-", "iclr_2018_SkxqZngC-", "SyhzVlKez", "rJfo7HsxG", "SyhzVlKez", "SyhzVlKez", "SyhzVlKez", "rJfo7HsxG", "ByaL9g21M" ]
iclr_2018_SJSVuReCZ
SHADE: SHAnnon DEcay Information-Based Regularization for Deep Learning
Regularization is a big issue for training deep neural networks. In this paper, we propose a new information-theory-based regularization scheme named SHADE for SHAnnon DEcay. The originality of the approach is to define a prior based on conditional entropy, which explicitly decouples the learning of invariant representations in the regularizer and the learning of correlations between inputs and labels in the data fitting term. We explain why this quantity makes our model able to achieve invariance with respect to input variations. We empirically validate the efficiency of our approach to improve classification performances compared to standard regularization schemes on several standard architectures.
rejected-papers
The proposed conditional variance regularizer looks interesting and the results show some promise. However, as the reviewers pointed out, the connection between the information-theoretic argument provided and the final form of the regularizer is too tenuous in its current form. Since this argument is central to the paper, the authors are urged to either provide a more rigorous derivation or motivate the regularizer more directly and place more emphasis on its empirical evaluation.
train
[ "HyloFODVM", "By-4qhFeM", "SJ3gWWsxM", "Hkd4Dn2eG", "B1zvHR7mf", "S1KszjW7f", "SJyBX4gmM", "HJwva4tzM", "B17OvXYfz", "ByJAL7tzf", "ByGVIQFfz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "The authors propose a particular variance regularizer on activations and connect it to the conditional entropy of the activation given the class label. They also present some competitive results on CIFAR-10 and ImageNet.\n\nDespite some promising results, I found some issues with the paper. The main one is that the connection between conditional entropy and the proposed variance regularizer seems tenuous. The chain of reasoning is as follows:\n\n- Estimation of H(Y|C) is difficult for two reasons: 1) when the number of classes is large, the number of samples needed calculate the entropy are high, and 2) naive estimators -- even when the number of classes are small -- have high variance. To solve these issues, the authors propose:\n\na) Introduce a latent code Z such that H(Y|C) = H(Y|Z). This solves problem 1).\n\nb) Use a variance upper bound on H(Y|Z). This solves problem 2).\n\nMy issue is with the reasoning behind a). H(Y|C) = H(Y|Z) relies on the assumption that I(Y;C) = I(Y;Z). The authors present a plausibility argument, but the argument was not sufficiently convincing to me to overcome my prior that I(Y;C) =/= I(Y;Z).\n\nApart from this, I found some other issues. \n\n* In the second paragraph of 2.2, the acronym LME in \"LME estimator\" was not defined, so I checked the reference provided. That paper did not mention a LME estimator, but did present a \"maximum likelihood estimator\" with the same convergence properties as those mentioned in the SHADE paper. Since the acronym LME was used twice, I'm assuming this was not a typo. Perhaps this is a bug in the reference?\n\n* In section 4.4, it's hard to know if the curves actually show that \"SHADE produces less class-information filtering\". The curves are close throughout and are nearly identical at epoch 70. It is entirely possible that the difference in curves is due to optimization or some other lurking factor.\n\n* The final form of the regularization makes it look like a more principled alternative to batchnorm. It would have been nice if the authors more directly compared SHADE to BN.\n\n* There are two upper bounds here: that H(Y_l | C) <= \\sum_i H(Y_{l, i} | C), and the variance upper bound. The first one does not seem particularly tight, especially at the early layers where the representation is overcomplete. I understand that the authors argue that the upper bound is tight in footnote 3, but it is only plausible for later laters.\n\nMy Occam's razor explanation of this paper is that it forces pre-nonlinearity activations (and hence post-nonlinearity activations) to be binary, without having to resort to sigmoid or tanh nonlinearities. This is a nice property, but whether the regularizer connects to H(Y|C) still remains unsolved.", "This paper proposes another entropic regularization term for deep neural nets. The key idea can be stated as follows: Let X denote the observed input, C the hidden class label taking values in a finite set, and Y the representation computed by a neural net. Then C -> X -> Y is a Markov chain. Moreover, assuming that the mapping X -> Y is deterministic (as is the case with neural nets or any other deterministic representations), we can write down the mutual information between X and Y as\n\nI(X;Y) = H(Y) - H(Y|X) = H(Y).\n\nA simple manipulation shows that H(Y) = I(C;Y) + H(Y|C). 
The authors interpret the first term, I(C;Y), as a data fit term that quantifies the statistical correlations between the class label C and the representation Y, whereas the second term, H(Y|C), is the amount by which the representation Y can be compressed knowing the class label C. The authors then propose to 'explicitly decouple' the data-fit term I(C;Y) from the regularization penalty and focus on minimizing H(Y|C). In fact, they replace this term by the sum of conditional entropies of the form H(Y_{i,k}|C), where Y_{i,k} is the activation of the ith neuron in the kth layer of the neural net. The final step is to recognize that the conditional entropy may not admit a scalable and differentiable estimator, so they use the relation between a quantity called entropy power and second moments to replace the entropic penalty with the conditional variance penalty Var[Y_{i,k}|C]. Since the class-conditional distributions are unknown, a surrogate model Q_{Y|C} is used. The authors present some experimental results as well.\n\nHowever, this approach has a number of serious flaws. First of all, if the distribution of X is nonatomic and the mapping X -> Y is continuous (in the case of neural nets, it is even Lipschitz), then the mutual information I(X;Y) is infinite. In that case, the representation of I(X;Y) in terms of entropies is not valid -- indeed, one can write the mutual information between two jointly distributed random variables X and Y in terms of differential entropies as I(X;Y) = h(Y) - h(Y|X), but this is possible only if both terms on the right-hand side exist. This is not the case here, so, in particular, one cannot relate I(X;Y) to I(C;Y). Ironically, I(C;Y) is finite, because C takes values in a finite set, so I(C;Y) is at most the log cardinality of the set of labels. One can start, then, simply with I(C;Y) and express it as H(C) - H(C|Y). Both terms are well-defined Shannon entropies, where the first one does not depend on the representation, whereas the second one involves the representation. But then, if the goal is to _minimize_ the mutual information between I(C;Y), it makes sense to _maximize_ the conditional entropy H(C|Y). In short, the line of reasoning that leads to minimizing H(Y|C) is not convincing. Moreover, why is it a good idea to _minimize_ I(C;Y) in the first place? Shouldn't one aim to maximize it subject to structural constraints on the representation, along the lines of InfoMax?\n\nThe next issue is the chain of reasoning that leads to replacing H(Y|C) with Var[Y|C]. One could start with that instead without changing the essence of the approach, but then the magic words \"Shannon decay\" would have to disappear altogether, and the proposed method would lose all of its appeal.", "the paper adapts the information bottleneck method where a problem has invariance in its structure. specifically, the constraint on the mutual information is changes to one on the conditional entropy. the paper involves a technical discription how to develop proper estimators for this conditional entropy etc.\n\nthis is a nice and intuitive idea. how it interacts with classical regularizers or if it completely dominates classical regularizers would be interesting for the readers.", "Summary:\n\nThe paper presents an information theoretic regularizer for deep learning\nalgorithms. The regularizer aims to enforce compression of the learned\nrepresentation while conditioning upon the class label so preventing the\nlearned code from being constant across classes. 
The presentation of the Z\nlatent variable used to simplify the calculation of the entropy H(Y|C) is \nconfusing and needs revision, but otherwise the paper is interesting.\n\nMajor Comments:\n\n- The statement that I(X;Y) = I(C;Y) + H(Y|C) relies upon several properties\n of Y which are not apparent in the text (namely that Y is a function of X,\nso I(X;Y) should be maximal, and Y is a smaller code space than X so it should \nbe H(Y)). If Y is a larger code space than X then it should still be true, but\nthe logic is more complicated.\n\n- The latent code for Z is unclear. Given the use of ReLUs it seems like Y\n will be O or +ve, and Z will be 0 when Y is 0 and 1 otherwise, so I'm\nunclear as to when the value H(Y|Z) will be non-zero. The data is then\npartitioned within a batch based on this Z value, and monte carlo sampling is\nused to estimate the variance of Y conditioned on Z, but it's really unclear\nas to how this behaves as a regularizer, how the z is sampled for each monte\ncarlo run, and how this influences the gradient. The discussion in Appendix C\ndoesn't mention how the Z values are generated.\n\n- The discussion on how this method differs from the information bottleneck is\n odd, as the bottleneck is usually minimising the encoding mutual information\nI(X;Y) minus the decoding mutual information I(Y;C). So directly minimising\nH(Y|C) is similar to the IB, and also minimising H(Y|C) will affect I(C;Y) as\nI(C;Y) = H(Y) - H(Y|C).\n\n- The fine tuning experiments (Section 4.2) contain no details on the\n parameters of that tuning (e.g. gradient optimiser, number of epochs,\nbatch size, learning rates etc).\n\n- Section 4.4 is obvious, and I'd consider it a bug if regularising with label\n information performed worse than regularising without label information.\nEssentially it's still adding supervision after you've removed the\nclassification loss, so it's natural that it would perform better. This\nexperiment could be moved to the appendix without hurting the paper.\n\n- In appendix A an upper bound is given for the reconstruction error in terms\n of the conditional entropy. This bound should be related to one of the many\nupper bounds (e.g. Hellman & Raviv) for the Bayes rate of a predictor, as\nthere is a fairly wide literature in this area.\n\nMinor Comments:\n\n- The authors do not state what kind of input variations they are trying to\n make the model invariant to, and as it applies to CNNs there are multiple\ndifferent kinds, many of which are not amenable to a regularization based\nsystem for inducing invariance.\n\n- The authors should remind the reader once that I(X;Y) = H(Y) - H(Y|X) = H(X) -\n H(X|Y), as this fact is used multiple times throughout the paper, and it may\nnot necessarily be known by readers in the deep learning community.\n\n- Computing H(Y|C) does not necessarily require computing c separate\n entropies, there are multiple different approaches for computing this\nentropy.\n\n- The exposition in section 3 could be improved by saying that H(X|Y) measures\n how much the representation compresses the input, with high values meaning\nlarge amounts of compression, as much of X is thrown away when generating Y.\n\n- The figures are difficult to read when printed in grayscale, the graphs\n should be made more readable when printed this way (e.g. different symbols,\ndashed lines etc).\n\n- There are several typos (e.g. 
pg 5 \"staking\" -> \"stacking\").\n", "- Regarding the variance bound:\nWe do not claim that the inequality holds for any neural network layer output as we point out just below eq 6 \"this bound holds for any continuous distributions\". However a common assumption in computer vision is that the input space variable is a quantization of an underlying continuous distribution, justifying the use of the variance as an objective function in our optimization problem.\n\n- Regarding the word \"decoupled\":\nAs you surely understood, our use of the word \"decoupled\" is informal and means that we keep H(Y|C) because when minimized it won't minimize the term I(Y,C), unlike when minimizing I(X,Y). Sure, they are still coupled, but in the opposite direction of the initial one that was problematic. This is really being picky about a word that is here to simply explain the intuition. But sure, we can change the phrasing to be more formally correct.\n\nIn general, we regret the fact that you refuse to discuss the main ideas, originality, development and experiments of our paper; and choose instead to focus on irrelevant and not constructive details, do not admit any misreading on your part (like asking why we want to minimize I(Y,C)) and keep with the aggressive tone (\"an even bigger lack of understanding of information theory than I had originally imagined\")", "The latent coding doesn't require the use of ReLU despite there is a clear link between SHADE intuition and the popular activation functions. Precisely, the fact that a neuron (pre-activation) encodes a binomial information (active or not), with a (soft) threshold at 0, which is coherent with the idea behind all usual activation functions (ReLU, sigmoid, ...) used with neural networks. Advanced activation functions like LeakyReLU try to adjust how gradient is backpropagated while SHADE only applies a layerwise regularization on the weights according to the latent variable. Both have different goals but it would be interesting, indeed, to study the impact of different activation functions, such as LeakyReLU, on the latent variable distribution. \n\nIn section 2.3, the monte carlo sampling approximating (8) with (9) is done on the variable Y over \\mathcal{Y} only. It simulates the distribution of the activations Y for the input data distribution on \\mathcal{X}. We therefore apply monte-carlo with mini-batches sampling, i.e. the same thing that is done on any loss to optimize a neural net. Remains an integral over \\mathcal{Z} which becomes a sum over the 2 modes of Z (Z=0, Z=1) and the expectancy is estimated with a moving average. We are going to make this fact clearer in the paper. \n\nWe argue that the class information is implicitely contained in the latent variable Z. This point is not obvious. This is why we are trying to demonstrate it in experiment 4.4, using the fact that after training, each layer contains enough class information (with a forward pass) to predict the labels with a good accuracy. However it is possible that this information is not accurate enough to predict the correct class (the label in the case of training). In fact the training brings the latent variable closer to a sufficient statistic of X for C, making it more powerful to predict the class. 
\n", "> we believe that your review is hasty since you only base your criticism on wrong assumptions about our paper (among others, you say “the distribution of X is nonatomic” while it is atomic ...\n\nActually, my assumption that the distribution of X is nonatomic gave you the maximum benefit of the doubt. If the distribution of X is atomic, then so are the distributions of the activations of all the neurons in the net, for any choice of the weights. But, in that case, the variance bound of Eq. (6) is not valid because it holds only for distributions with well-defined differential entropies, i.e., precisely those distributions that are nonatomic. The variance-based upper bound does not apply to the usual Shannon entropy. So, by insisting that I have misread your paper, you are showing an even bigger lack of understanding of information theory than I had originally imagined. \n\n> However in order not to deteriorate I(C,Y) during the minimization of I(X,Y) we focus on the decoupled term H(Y|C).\n\nBy definition, I(C;Y) = H(Y) - H(Y|C). Hence, I(C;Y) and H(Y|C) are anything but decoupled.\n\nI rest my case.\n\n\n\n", "So the latent coding Z relies upon the network using ReLUs? As if you model Z as sigmoid(Y) then using a sigmoid activation function will reduce that operation to the identity. This leads to a further thought, which is that the SHADE regulariser is essentially incorporating information about how negative the value was before the ReLU, and so isn't it a form of leaky ReLU? Given how closely this regulariser appears tied to the activation function, it would benefit the paper to compare against other approaches which try to improve upon the ReLU.\n\nThe text in section 2.3 does not make explicit the switch from a variance estimated using monte carlo, to a deterministic estimator, so could do with a little revision to make this clear.\n\nAlso in the comment about Section 4.4 the paper states it did not use any labels, but the text in section 4.4 talks about how this regulariser incorporates label information by modelling it with Z. Hence my comment in the original review about a regulariser that knows of the existence of labels performing better than one without.", "Thanks for your input, however, we believe that your review is hasty since you only base your criticism on wrong assumptions about our paper (among others, you say “the distribution of X is nonatomic” while it is atomic; and “why is it a good idea to _minimize_ I(C;Y) in the first place” while we say the opposite in the introduction) and end with a very aggressive and unfounded comment about our illegitimate use of Shannon entropy. You will find our detailed answers below.\n\n> “If the distribution of X is nonatomic and the mapping X -> Y is continuous [...] then the mutual information I(X;Y) is infinite. In that case, the representation of I(X;Y) in terms of entropies is not valid. [...]” \n\nIn fact, for images, X is in a finite input space taking values from {0, 1, 2, …, 255}^(H*W*3). H being the height of the image and W its width. X is therefore atomic and this whole remark does not stand.\n\n> “if the goal is to _minimize_ the mutual information between I(C;Y), it makes sense to _maximize_ the conditional entropy H(C|Y). In short, the line of reasoning that leads to minimizing H(Y|C) is not convincing. Moreover, why is it a good idea to _minimize_ I(C;Y) in the first place? 
Shouldn't one aim to maximize it subject to structural constraints on the representation, along the lines of InfoMax?”\n\nWe never claim that we want to minimize I(C;Y). In fact it is important to get this value as high as possible, because we want to be able to determine C using Y to classify the sample correctly. Following the IB framework the value that we intend to minimize is I(X,Y) (= I(C,Y) + H(Y|C)). However in order not to deteriorate I(C,Y) during the minimization of I(X,Y) we focus on the decoupled term H(Y|C). Moreover we show that the term H(Y|C) is directly related to the invariance of the model and is interesting to be minimized. \n\n> “The next issue is the chain of reasoning that leads to replacing H(Y|C) with Var[Y|C]. One could start with that instead without changing the essence of the approach, but then the magic words \"Shannon decay\" would have to disappear altogether, and the proposed method would lose all of its appeal.”\n\nHaving regularizations that can be theoretically interpreted is important in order to understand its effect and to be able to improve and adapt it. We do not think the appeal of our method comes from the magic word “Shannon” and this unfounded comment along with the title of the review “an obvious idea supported by flawed reasoning” shows a clearly aggressive and biased review.", "Thank you for your review. Regarding the relation with usual regularizers would be interesting. Complementary study in this direction could indeed be the subject of future work.\nAs of now, we already have two points in that direction:\n- In Sec. 3 we present a link between a baseline of our regularizer (H(Y) instead of H(Y|C)) and weight decay.\n- In the experiment in Sec 4.1 we empirically study the complementarity of SHADE and Dropout.\n", "Thank you for your feedback. In your review, we noticed one important comment questioning the fundamental difference between our model and IB. However, we noticed a misunderstanding concerning many sections of the development of our SHADE modeling. We try to clarify these points in our answers below. We also made the corresponding changes in the paper to rephrase some paragraphs and improve their clarity (mostly the latent description on page 3 and Sec 4.4). \n\n# Regarding the link with IB\n\n> “[...] how this method differs from the [IB] is odd [...] minimising H(Y|C) is similar to the IB, and minimising H(Y|C) will affect I(C;Y) as I(C;Y) = H(Y) - H(Y|C)”\n\nThe IB framework propose to minimize I(X,Y) at constant I(Y,C). Shamir et al. (2010) propose to use I(X;Y) as regularization criterion. However, in the development of I(X,Y) = H(Y) = I(Y,C) + H(Y|C) we identify the term I(Y,C) that we do not want to minimize (but indirectly maximize it since it corresponds to a sort of classification loss), so minimizing I(X,Y) would impact I(Y,C) in an undesired and uncontrolled way. This is why we only minimize the second term H(Y|C) (that will affect I(C,Y) in the desired way by maximizing it). This is mentioned in the introduction and in Sec. 3.\n\n# Other comments about the details of the method\n\n> “I(X;Y) = I(C;Y) + H(Y|C) relies upon several properties of Y which are not apparent [...].\nThe only required properties for the development of this equality are described in paragraphs 2 and 4 of the introduction: “Considering an input variable X, label C and its deep representation *Y = h(X)*, IB regularizes the training by minimizing the mutual information I(X, Y) at constant mutual information I(C, Y). [...] 
For a *deterministic* model, we have I(X, Y) = I(C, Y) + H(Y | C).” The detailed development is presented thoroughly in Sec. 3. The fact that H(Y|X) = 0 in that case has been added in the introduction.\n\n> “The latent code for Z is unclear. Given the use of ReLUs it seems like Y will be O or +ve, and Z will be 0 when Y is 0 and 1 otherwise, so I'm unclear as to when the value H(Y|Z) will be non-zero.\n> “[...] monte carlo sampling is used to estimate the variance of Y conditioned on Z, but it's really unclear as to how this behaves as a regularizer, how the z is sampled [...], how this influences the gradient.”\n\nThe intuition behind the variable Z, is that a neuron is responsible for detecting the presence of an attribute on the picture (e.g. the presence of a wheel, Z corresponding to the variable “there is a wheel” which is binomial). The value Y of the neuron *before* the ReLU represents the confidence in the detection. If its activation is high it is very likely that the attribute is present on the picture (Y >> 0 ↔ Z = 1), if it is low it is likely that the attribute is absent (Y << 0 ↔ Z = 0).\n\nZ is a random variable of chosen distribution (below equation 9) that depends on Y but not a deterministic mapping of Y, that is why H(Z|Y) is non-zero. Since we know the distribution p(Z|Y) we can compute everything without sampling on Z, cf Eq. 10 and Algorithm 1. Appendix C describe how the regularization affects the gradients without the need for sampling.\n\n> “Section 4.4 is obvious, and I'd consider it a bug if regularising with label information performed worse than regularising without label information”\n\nWe do not use any labels for this as indicated in the paper: “Without the classification loss, the network performance obviously declines as we do not provide information about the labels”. We apply only regularizers, which are Var(Y) and Var(Y|Z). This is here to illustrate the fact that our regularizer is less aggressive toward class information and the importance of conditional entropy described in Sec. 3.\n\n# Comments about minor issues\n\nRegarding the fine tuning experiments, we did not include the details of the training because they are very standard (SGD, lr=1e-5, batches of 16, 8 epochs). They are now in appendix E.\n\nRegarding appendix A and its comparison to the literature, it is simply a theoretical illustration of the discussion in Sec. 3 and not a real contribution, therefore its comparison to the literature is far from the scope of our paper.\n\nRegarding the kind of variations we want to be invariant to, indeed no input variations are specified in this paper as we do not target invariance to particular transformation. In fact the information theoretic framework enables to be agnostic to the type of transformation that is managed by the mapping functions which is a advantage our case. Indeed, modeling rigorously the transformations that the models should be invariant to is very difficult. \n\nRegarding the computation of H(Y|C), there is in fact many ways to estimates a conditional entropy, but few ones fit the gradient descent methodology without making strong assumptions about the distributions." ]
[ 4, 5, 7, 4, -1, -1, -1, -1, -1, -1, -1 ]
[ 3, 4, 3, 3, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_SJSVuReCZ", "iclr_2018_SJSVuReCZ", "iclr_2018_SJSVuReCZ", "iclr_2018_SJSVuReCZ", "SJyBX4gmM", "HJwva4tzM", "B17OvXYfz", "ByGVIQFfz", "By-4qhFeM", "SJ3gWWsxM", "Hkd4Dn2eG" ]
iclr_2018_SJ3dBGZ0Z
LSH Softmax: Sub-Linear Learning and Inference of the Softmax Layer in Deep Architectures
Log-linear models are widely used in machine learning, and in particular are ubiquitous in deep learning architectures in the form of the softmax. While exact inference and learning of these require linear time, they can be done approximately in sub-linear time with strong concentration guarantees. In this work, we present LSH Softmax, a method to perform sub-linear learning and inference of the softmax layer in the deep learning setting. Our method relies on the popular Locality-Sensitive Hashing to build a well-concentrated gradient estimator, using nearest neighbors and uniform samples. We also present an inference scheme in sub-linear time for LSH Softmax using the Gumbel distribution. On language modeling, we show that Recurrent Neural Networks trained with LSH Softmax perform on par with computing the exact softmax while requiring sub-linear computations.
rejected-papers
The authors propose an efficient LSH-based method for computing unbiased gradients for softmax layers, building on (Mussmann et al. 2017). Given the somewhat incremental nature of the method, a thorough experimental evaluation is essential to demonstrating its value. The reviewers however found the experimental section weak and expressed concerns about the choice of baselines and their surprisingly poor performance.
train
[ "rkQC_Rwlz", "Hy2-5bqeG", "S1FN4XcgM", "SJsvyvAzz", "rJ44ywRMz", "H1jyJw0Mf", "SyV2Rwtyf", "ByGORwtyz", "rkHB-57Jf", "BJQpqNNAZ", "BkN1Uh2R-", "Hy6mJ4qCZ", "r1CPEtQAZ" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "public", "author", "public", "public", "public" ]
[ "The paper proposes to use LSH to approximate softmax, which greatly speeds up classification with large output space. The paper is overall well-written. However, similar ideas have been proposed before, such as \"Deep networks with large output spaces\" by Vijayanarasimhan et. al. (ICLR 2015). And this manuscript does not provide any comparison to any of those similar methods.\n\nA few questions about the implementation,\n(1) As stated in the manuscript, the proposed method contains three steps, hashing, lookup and distance. GPU is not good at lookup, so the manuscript proposes to do lookup on CPU. Does that mean the data should go back and forth between CPU and GPU? Would this significantly increase the overhead?\n(2) At page 6, the LSH structure returns m list of C candidates. Is it a typo? C is the total number of classes. And how do you guarantee that each LSH query returns the same amount of candidates?\n\nExperiment-wise, the manuscript leaves something to be desired.\n(1) More baselines be evaluated and compared. In this manuscript, only IS and NS are compared. And pure negative sampling is actually rarely used in language modeling. In addition to Vijayanarasimhan's LSH method, there are also a few other methods out there, such as hierarchical softmax, NCE, D-sothat ftmax (\"Strategies for Training Large Vocabulary Neural Language Models\" by Chen et. al. ACL 2016), adaptive softmax (\"Efficient softmax approximation for GPUs\" by Grave et. al).\n(2) The results of the proposed method is not impressive. D-softmax and adaptive softmax can achieve 147 ppl on text 8 with 512 hidden units as described in other paper, while the proposed method can only achieve 224 ppl with 650 hidden units. Even the exact softmax have large difference in ppl. It looks like the authors do not tune the hyper-parameters well. With this suboptimal setting, it is hard to judge the significance of this manuscript.\n(3) Why one billion word dataset is used in eval but not used for training? It is one of the best datasets to test the scalability of language models.\n(4) We can see, as reported in the manuscript, that NS has bigger speedup than the proposed method. So it would be nice to show ppl vs time curve for all methods. Eventually, what we want is the best model given a fixed amount of training time. 
With the same number of epochs, NS loses the advantage of being faster.", "In this paper, the authors propose a new approximation of the softmax, based on approximate nearest neighbor search and sampling.\nMore precisely, they propose to approximate the partition function (which is the bottleneck to compute the softmax and its gradient) by using:\n- the top-k classes (retrieved using LSH);\n- uniform samples (to account for the tail of the distribution).\nThey describe how this technique can be used for learning, by performing sparse updates for the gradient (corresponding to the elements used to compute the partition function), and re-hashing the updated elements of the softmax layers.\nIn section 5, they show how this method can be implemented on GPU, using standard operations available in neural network frameworks such as TensorFlow or PyTorch.\nFinally, they compare their approach to importance sampling and negative sampling, using language modeling as a benchmark.\nThey use 3 standard datasets to perform the evaluations: Penn Treebank, text8 and wikitext-2.\n\nPros:\n - well written and easy to read paper\n - interesting theoretical guarantees of the approximation\nCons:\n - a bit incremental\n - weak empirical evaluations\n - no support for the claim of efficient GPU implementation\n\n== Incremental ==\n\nWhile the theoretical justifications of the method are interesting, these are not a contribution of the paper (but of previous work by Mussmann et al.).\nIn fact, the main contribution of this paper is to show how to apply the technique of Mussmann et al. in the setup of neural networks.\nThe main difference with Mussmann et al. is the necessity of re-hashing the updated elements of the softmax at each step.\nOther previous works have also proposed to use LSH to speed up computations in neural networks, but are not discussed in the paper (see list of references).\n\n== Weak evaluations ==\n\nI believe that the empirical evaluation of section 6 is a bit weak.\nFirst, there is a large gap between the perplexity obtained using the proposed method and the exact softmax (e.g. 97 vs. 83 on ptb, 115 vs. 95 on wikitext-2).\nThus, I do not believe that the experiments support the claim that the proposed method \"perform on-par with computing the exact softmax\".\nMoreover, these numbers are pretty far from what other papers have reported on these datasets with similar models (I am wondering if the gap would be even larger with SOTA models).\nSecond, the authors do not report any runtime numbers for their method and the baselines on GPUs.\nI believe that it would be fairer to plot the learning curves (Fig. 
1) using the runtime instead of the number of epochs.\n\n== Efficient implementation ==\n\nIn section 5, the authors claim that their approach can be efficiently implemented on GPUs.\nHowever, several of the operations used by their approach are inefficient, especially when using mini-batches.\nThe authors state that only step 2 is inefficient, but I also believe that step 3 is (compared to sampling approaches).\nIndeed, for their method, each example of a mini-batch uses a different set of elements to approximate the partition function (while for other sampling methods, the same set is used for the whole batch).\nThus a matrix-matrix multiplication is replaced by n matrix-vector multiplications (n is the batch size).\nWhile these can be performed in parallel, it is much less efficient than a matrix-matrix multiplication.\nFinally, the only runtime numbers provided by the authors comparing their approach to sampling are for a CPU implementation with a batch of size 1.\nThis setting is super favorable to their approach, but a bit unrealistic for most practical settings.\n\n== Missing references ==\n\nScalable and Sustainable Deep Learning via Randomized Hashing\nRyan Spring, Anshumali Shrivastava\n\nA New Unbiased and Efficient Class of LSH-Based Samplers and Estimators for Partition Function Computation in Log-Linear Models\nRyan Spring, Anshumali Shrivastava\n\nDeep networks with large output spaces\nSudheendra Vijayanarasimhan, Jonathon Shlens, Rajat Monga & Jay Yagnik", "The authors present LSH Softmax - a fast, approximate-nearest-neighbor-search-based approach for computing the softmax that utilizes the Gumbel distribution and relies on an LSH implementation of maximum inner product search.\n\nIn general, the work presented in this paper is very interesting and the proposed method is very appealing, especially on large datasets. For the most part it draws from a previous work, which is my main concern. It is very much in line with the previous work by Mussmann et al., and the authors don’t really do a good job of emphasizing the relationship with this work, which uses two datasets for its empirical analysis. This in turn gives the overall impression that their work is a simple addition to it. \n\nWith this in mind, my other concern is that their empirical analysis is only focused on a single task from the NLP domain (language modeling). \nIt would be good to see how well the model generalizes across tasks in other domains outside of NLP. \nHow do the different softmax approaches perform across different model configurations? It appears that the analysis was performed using a single architecture. \nWhat about a performance comparison on an extrinsic task?\nThe authors should discuss the performance of LSH Softmax on the PTB train set. It appears that it outperforms the exact (i.e. “full”) Softmax, or perhaps it’s an oversight on my end. \n\nOverall it feels that the paper was written really close to the conference deadline. Given the fact that the work is mostly based on the previous work by Mussmann et al., what would make the paper stronger and definitely ready to be presented at this conference is a more in-depth performance analysis that would answer some of the above questions. \n\nLSH is typically an abbreviation for “Locality Sensitive” rather than “Locally-Sensitive” Hashing. At least this is the case with the original LSH approach.\n\nFor better clarity, try rephrasing or splitting the first sentence in the second paragraph of the introduction. 
\n\nI think the authors spent too much time in background section of the paper where they give an overview of concepts that should be well known to the reader (NNs and Softmax). \n\nTheorem 3: Second sentence should be rephrased - “...and $\\mathcal{T}$, and $l$ uniform samples from…”\nTheorem 3: $\\epsilon$ and $\\delta$ should be formally introduced. \nSection 5: pretty much covers well known concepts related to GPU implementations. Authors should spent more time focusing on the empirical analysis of their approach. \n\nSection 6.1: “...to benchmark language modeling models...” should be rephrased.\nHow were the number of epochs chosen across the 3 collections? \n\nSection 6.1.1: “...note that the final perplexities are evaluated with using the full softmax…” - This sentence is very confusing and it should be rephrased.\n", "We first and foremost want to thank you for your time and valuable comments.\n\nComparison with Vijayanarasimhan et. al.:\nIt is true that the method from this work is similar in spirit to ours. However, we wish to emphasize two key points. First of all, their method is encompassed in ours by simply setting l=0. Secondly, as shown in Mussmann et al. (UAI 2017), using the top-k largest values leads to highly biased gradients and significantly worse performance (Figure 4 and 5 of Mussman et al.).\n\nImplementation\n(1) It is important to note that the weight vectors are never copied over to CPU. \n\nThe data (i.e. the weight vectors for the classes) *never* needs to be copied over to CPU. Our method only requires copying to CPU the *hashed* batch of hidden states. This consists of a bit matrix of shape (batch_size x (k * L)). This is a small matrix and thus the copying overhead is minimal.\nWhen copying back to GPU, one must simply copy the *indices* of the weight vectors for the gather operation, which is a small matrix (batch size x number of candidates).\n\n(2) It is a typo. We fixed that in the text, thank you for pointing it out. We guarantee a fixed number of candidates by padding with uniform samples.\n\nExperiment-wise\n(1) We added several baselines in the text, namely another (unbiased) version of Importance Sampling and NCE. We decided against comparing against Hierarchical Softmax methods (such as D-Softmax and adaptive softmax) as these requires domain-knowledge and hand-engineering. Furthermore, in contrast, they additionally enjoy no theoretical guarantees.\n(2) On Text8, our models were trained for 3 epochs, whereas the cited methods were trained for 5 or 10 epochs. Our hyperparameters were chosen from the literature for good performance with exact softmax and not tuned additionally for the approximate softmaxes.\n(3) We did not evaluate the One Billion Word dataset due to computational constraints but provided a computational comparison to show how our method could perform on even larger datasets.\n\nThank you once again for your time and comments, we hope this addresses your concerns and that you will reconsider your rating in light of this.", "We first and foremost want to thank you for your time and valuable comments.\n\nThank you for the additional references, we added those in the text along with a discussion.\n\n== ``Incremental\" and related work==\n\nIn addition to updating the MIPS structure with the updated weight vectors, we go above and beyond the experimental setup of Mussmann et al. 2017. 
Indeed, while their experiments support their theoretical results, they are far from being close to a real-world setting and usable on a large-scale task. Building on this, we extend their theoretical results and introduce LSH Softmax, which is usable in a real-world setting and on a widespread task: language modeling.\n\nRegarding the additional references: we added those in the text but we wanted to emphasize the following points:\n- Regarding (Vijayanarasimhan et al.), their method is encompassed in ours by simply setting l=0. Furthermore, as shown in Mussmann et al. (UAI 2017), using the top-k largest values leads to highly biased gradients and significantly worse performance (Figure 4 and 5 of Mussman et al.). \n- Regarding Spring et al. (KDD), there are several significant differences. First of all, their paper provides no theoretical guarantees which is a major difference with our work. Secondly, their paper focuses on reducing memory footprint which is not the aim of our work.\n- Regarding Spring et al. (arXiv), their estimator is indeed unbiased and efficient but, in contrast, provides no concentration guarantees. As with most importance sampling technique, their variance can get arbitrarily bad. Finally, the results reported on PTB are worse than those of LSH Softmax and the ones for Text8 are comparable whilst being trained for 10 epochs (theirs) compared to 3 epoch (ours).\n\n== Weak evaluations ==\n\nThe gap between exact and LSH is always within 20% whilst enjoying speed-ups up to 4.1x. Regarding the exact softmax implementation on PTB, we used the hyperparameters provided by the standard PyTorch implementation. While more complex models (HyperNetworks, PointerSentinel, Variational Dropout etc...) can provide better perplexity, our baseline (79.8 on the test set) is not weak by any mean (See [1] for a thorough evaluation of various models on PTB).\n\n== Efficient Implementation ==\n\nIt is true that we do not provide a GPU comparisons as our implementation is not yet competitive with TensorFlow IS and NCE implementations. However, since all of our operations are parallelizable, we posit that given professional engineering attention (which is the case for the TensorFlow IS and NCE) it should be competitive, especially given the theoretical runtime.\n\nThe CPU evaluation is meant to provide us with a reasonable FLOPS estimate; on that basis, we significantly outperform competing methods.\n\nWe hope that this addresses your comments and that you will reconsider your rating in light of these.\n\n[1] Regularizing and Optimizing LSTM Language Models. Merity S. et al. 2017", "We first and foremost want to thank you for your time and valuable comments.\n\nWe updated the draft to address your comments and provide more specific answers below.\n\n== Relationship with Mussmann et al. 2017 ==\n\nWhile Mussmann et al. 2017 provides the theoretical grounding for our work, it is important to note that their experimental setup is very constrained. While their experiments support their theoretical results, they are far from being close to a real-world setting and usable on a large-scale task. Building on this, we extend their theoretical results and introduce LSH Softmax, which is usable in a real-world setting and on a widespread task: language modeling.\n\n== Tasks from different domain than NLP ==\n\nWe want to emphasize that our method is not a all domain-specific and conserves theoretical guarantees across domains. 
We evaluate our method on NLP task for two reasons: 1) they are particularly well-suited for evaluating our method (naturally large output spaces) 2) we did not dispose of the computational resources to tackle tasks from other domains such as vision (e.g. Flickr100M) which requires hundreds of GPUs for weeks. We briefly touched on that point in the introduction of Section 6.\n\n== Architecture and hyperparameters cross-validation ==\n\nFirst of all, it is important to note that our theoretical guarantees hold regardless of architecture, hyperparameters etc... Secondly, we wanted to show that our technique performed well without further parameter tuning; to that end, we tuned all of our models for the EXACT softmax. We then evaluated the approximate softmaxes by simply swapping them in without further tuning. In our opinion, ease of tuning makes these methods used in practice.\n\n== PTB Train set ==\n\nThe hyperparameters (and thus regularization strength) were heavily cross-validated for performance on PTB with the EXACT softmax. It thus makes sense that the generalization gap be as small as possible in that case; it is not clear how LSH Softmax interacts with those multiple regularization schemes and thus we did not pay particular attention to that lower training perplexity.\n\nThank you once again for your valuable comments and feedback, we hope to have addressed your concerns, and we hope you will reconsider your rating in light of this.", "Thanks for those suggestions, will include in the next version.\n\nRegarding (Vijayanarasimhan et al.), It is also important to note that their method is encompassed in ours by simply setting l=0. Furthermore, as shown in Mussmann et al. (UAI 2017), using the top-k largest values leads to highly biased gradients and significantly worse performance (Figure 4 and 5 of Mussman et al.). \n\nRegarding Spring et al., there are several significant differences. First of all, their paper provides no theoretical guarantees which is a major difference with our work. Secondly, their paper focuses on reducing memory footprint which is not the aim of our work.\n\nWhile it is true that we could hand-engineer more effective distributions for estimating the tail, our method is meant to stay as general as possible. Indeed, using uniform sampling allows our method to enjoy guarantees with no assumptions on the output distribution. Even though we only evaluated it on NLP tasks, it is effectively applicable to any domain (vision, genomics…) and thus we did not to craft an NLP-specific variant.\n\nIt is true that we do not provide a GPU comparisons as our implementation is not yet competitive with TensorFlow IS and NCE implementations. However, since all of our operations are parallelizable, we posit that given professional engineering attention (which is the case for the TensorFlow IS and NCE) it should be competitive, especially given the theoretical runtime and FLOPS estimates.", "We agree that NCE is a standard softmax approximation, however we decided (at first) not to include it because of its similarity with importance sampling (as exhibited in Jozefowicz et al. 2016) and thus it seemed redundant. We have now done the experiments and we still outperform NCE in all the cases. (see reported numbers at the end)\n\nThank you for the link to the TensorFlow implementation. Our models were implemented in PyTorch, but our implementation follows the one presented in Jean et al. 
(2014) which proposes a biased partition function estimator based on a sub-sampling of the vocabulary to facilitate the matrix-matrix multiplication; this does not require reweighting of the probabilities. However, we have re-evaluated those baselines using the TensorFlow implementation, you can see the reported numbers at the end of this message. We outperform the IS TensorFlow baseline in 2 out of 3 cases and are within 10% in the third case.\n\nRegarding the remark on tuning learning rate for each method: we tuned all of our models for the EXACT softmax. Indeed, our reasoning was that we want to evaluate the approximate softmaxes by simply swapping them in without further tuning. In our opinion, ease of tuning makes these methods used in practice. This thus makes the comparison completely fair.\n\nTo address the aggressive halving of the learning rate, following the standard PyTorch LM example, we halve the learning rate every time the validation loss increases (starting with lr=20); when looking at the curve, we can observe that, in no case, the learning rate get prohibitively low, which is not a reason why in some cases, plateauing could kill some methods.\n\nAfter your suggestions, we ran IS and NCE baselines using the TensorFlow RNNLM and their learning rate schedule (which halve at fixed epochs instead of based on validation ppl). We hereby report the results (i.e. test ppl):\nPTB:\n- IS: 114.33\n- NCE: 115.30\n- Ours (reported in paper): 92.91\nWikiText-2:\n- IS: 128.384\n- NCE: 122.041\n- Ours (reported in paper): 115.11\nText8:\n- IS: 205.94\n- NCE: 386.87\n- Ours (reported in paper): 224.42\n\nWe see that LSH Softmax still outperforms those in almost all cases, thus hopefully addressing your questions about the baselines.", "This approach draws heavily from previous LSH + Deep Learning approaches.\nYet, these papers are not cited. There should be a comparison of how this approach is novel.\n\n * (Vijayanarasimhan et al.) Deep Networks With Large Output Spaces (ICLR 2015)\n * (Spring et al.) Scalable and Sustainable Deep Learning via Randomized Hashing (KDD 2017)\n\nFrom my understanding, this approach uses LSH to find the largest values like the previous approaches, but uses uniform random sampling to account for the tail. However, the distribution of words naturally follows the power law distribution. Shouldn't sampling from a log-uniform distribution be more computationally efficient, while providing roughly the same performance?\n\nGrave et al. (2017) shows that \"Sampled Softmax\" achieves 166 perplexity on the Text8 dataset, using a single-layer LSTM with 512 units.\n * (Grave et al.) Efficient softmax approximation for GPUs. (ICML 2017)\n\nThe paper offers details on an efficient GPU implementation of their approach. However, there is not a wall-clock running time comparison against other approaches (i.e. Tensorflow Sampled Softmax) on a GPU platform. The paper compares the approaches on the basis of CPU FLOPs, which is not an accurate indicator of real-world GPU performance.", "Thank you for the reference, we were not aware of this very recent work. We will certainly add the citation in the next version.\n\nWhile related, their work only considers the case of decoding, i.e. MAP inference for a trained model and is not applicable to either learning or sampling. As we discussed in the introduction of Section 4, MAP inference is considerably easier to handle with LSH.", "I also have some questions and comments about the baselines/experiments.\n\n1. 
On text8, you report a perplexity of 190 with an exact softmax and 224 with an LSH softmax, after 3 epochs of training. The adaptive softmax paper, which you cite, reports a perplexity of 144 on text8 with an exact softmax, and 147 with an adaptive softmax, after 5 epochs of training. The adaptive softmax also reports a larger speed up ratio on text8. I was wondering what your justification was for not comparing against the adaptive softmax (or at least mentioning this result somewhere)? \n\nGrave et al. (2016). Efficient softmax approximation for GPUs. arXiv:1609.04309.\n\n2. It is probably worth considering some datasets with larger vocabularies. The 44k vocab on text8 is comparatively quite small. If you do not have the computational resources to do the One Billion Word benchmark then maybe WikiText-103 or EuroParl would be reasonable choices.\n\n", "LSH Softmax seems like an interesting idea but I'm not sure what to make of the experimental results because of the weak baselines and the brittle adaptive learning rate annealing schedule.\n\nNegative Sampling was never meant for training language models and is not an appropriate baseline here. NS is a simplification of Noise Contrastive Estimation designed for learning word representations, a task that doesn't require accurate word probabilities. Please either replace the NS results with the NCE ones or remove them altogether.\n\nYour description of Importance Sampled softmax is incorrect, or at least incomplete, as there's more to it than just sub-sampling the vocabulary using the unigram distribution, and the reported IS results are surprisingly poor. How do you take the effect of the sampling distribution into account? What exactly is the input to the softmax? Note that both IS softmax and NCE are implemented in Tensorflow and are easy to use: https://www.tensorflow.org/versions/r0.12/api_docs/python/nn/candidate_sampling. To me, NS outperforming IS suggests that something might be seriously amiss with the experiments.\n\nThe results also seem to be highly sensitive to the learning rate annealing schedule, which is adjusted by monitoring validation error. Figure 1 suggests that one method can outperform another simply by plateauing earlier and thus inducing an earlier learning rate reduction. How was the initial learning rate chosen? For a fair comparison, you really need to do a search over the initial learning rates and annealing schedules for each method.", "It's great that you were able to get this to work and give you a speed-up. Obviously, the amount of the speed-up you get will depend on the vocabulary size. I would be interested in seeing more analysis of this relationship.\n\nThere was a paper from Xing Shi and Kevin Knight at ACL 2017 called \"Speeding Up Neural Machine Translation Decoding by Shrinking Run-time Vocabulary\" where they claim to be the first to do LSH softmax. Perhaps your implementation is better than theirs though because they didn't get a speedup. Regardless, they probably deserve some credit for trying it first." ]
[ 5, 5, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 3, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_SJ3dBGZ0Z", "iclr_2018_SJ3dBGZ0Z", "iclr_2018_SJ3dBGZ0Z", "rkQC_Rwlz", "Hy2-5bqeG", "S1FN4XcgM", "rkHB-57Jf", "Hy6mJ4qCZ", "iclr_2018_SJ3dBGZ0Z", "r1CPEtQAZ", "iclr_2018_SJ3dBGZ0Z", "iclr_2018_SJ3dBGZ0Z", "iclr_2018_SJ3dBGZ0Z" ]
iclr_2018_BJRxfZbAW
The Context-Aware Learner
One important aspect of generalization in machine learning involves reasoning about previously seen data in new settings. Such reasoning requires learning disentangled representations of data which are interpretable in isolation, but can also be combined in a new, unseen scenario. To this end, we introduce the context-aware learner, a model based on the variational autoencoding framework, which can learn such representations across data sets exhibiting a number of distinct contexts. Moreover, it is successfully able to combine these representations to generate data not seen at training time. The model enjoys an exponential increase in representational ability for a linear increase in context count. We demonstrate that the theory readily extends to a meta-learning setting such as this, and describe a fully unsupervised model in complete generality. Finally, we validate our approach using an adaptation with weak supervision.
rejected-papers
The paper proposes augmenting the Neural Statistician with a meta-context variable that specifies the partitioning of the latent context into the per-dataset and per-datapoint dimensions. This idea makes a lot of sense, but the reviewers found the experimental section clearly insufficient to demonstrate its effectiveness convincingly. Also, introducing only the unsupervised version of the model, which looks challenging to train, while performing all the experiments with the less interesting semi-supervised version, makes the paper both less compelling and harder to follow.
val
[ "BkzesZcxG", "BkJ3NH2lM", "B1CLys4bM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The authors propose an extension to the Neural Statistician which can model contexts with multiple partially overlapping features. This model can explain datasets by taking into account covariate structure needed to explain away factors of variation and it can also share this structure partially between datasets.\n\nA particularly interesting aspect of this model is the fact that it can learn these context c as features conditioned on meta-context a, which leads to a disentangled representation.\nThis is also not dissimilar to ideas used in 'Bayesian Representation Learning With Oracle Constraints' Karaletsos et al 2016 where similar contextual features c are learned to disentangle representations over observations and implicit supervision.\n\nThe authors provide a clean variational inference algorithm to learn their model. However, a key problem is the following: the nature of the discrete variables being used makes them hard to be inferred with variational inference. The authors mention categorical reparametrization as their trick of choice, but do not go into empirical details int heir experiments regarding the success of this approach. In fact, it would be interesting to study which level of these variables could be analytically collapsed (such as done in the Semi-Supervised learning work by Kingma et al 2014) and which ones can be sampled effectively using a form of reparametrization.\n\nThis also touches on the main criticism of the paper: While the model technically makes sense and is cleanly described and derived, the empirical evaluation is on the weak side and the rich properties of the model are not really shown off. It would be interesting if the authors could consider adding a more illustrative experiment and some more empirical results regarding inference in this model and the marginal structures that can be learned with this model in controlled toy settings.\nCan the model recover richer structure that was imposed during data generation? How limiting is the learning of a?\nHow does the likelihood of the model behave under the circumstances?\nThe experiments do not really convey how well this all will work in practice.\n\n", "This paper introduces a conditional variant of the model defined in the Neural Statistician (https://arxiv.org/abs/1606.02185). The generative model defines the process that produces the dataset. This model is first a mixture over contexts followed by i.i.d. generation of the dataset with possibly some unobserved random variable. This corresponds to a mixture of Neural Statisicians. The authors suggest that such a model could help with disentangling factors of variation in data. In the experiments they only consider training the model with the context selection variable and the data variables observed.\n\nUnfortunately there is minimal quantitative evaluation (visualizing 264 MNIST samples is not enough). The only quantitative evaluation is in Table 1, and it seems the model is not able to generalize reliably to all rotations and all digits. Clearly, we can't expect perfect performance, but there are some troubling results: 5.2 accuracy on non-rotated 0s, 0.0 accuracy on non-rotated 6s. Every digit has at least one rotation that is not well classified, so this section could use more discussion and analysis. For example, how would this metric classify VAE samples with contexts corresponding only to digit type (no rotations)? How would this metric classify vanilla VAE samples that are hand labeled? 
Moreover, the context selection variable \"a\" should be considered part of the dataset, and as such the paper should report how \"a\" was selected.\n\nThis model is a relatively simple extension of the Neural Statistician, so the novelty of the idea is not enough to counterbalance the lack of quantitative evaluation. I do think the idea is well-motivated, and represents a promising way to incorporate prior knowledge of concepts into our training of VAEs. Still, the paper as it stands is not complete, and I encourage the authors to followup with more thorough quantitative empirical evaluations.\n", "This paper proposes a model for learning to generate data conditional on attributes. Demonstrations show that the model is capable of learning to generate data with attribute combinations that were not present in conjunction at training time.\n\nThe model is interesting, and the results, while preliminary, suggest that the model is capable of making quite interesting generalizations (in particular, it can synthesize images that consist of settings of features that have not been seen before).\n\nHowever, this paper is mercilessly difficult to read. The most serious problems are the extensive discussion of the fully unsupervised variant (rather than the semisupervised variant that is evaluated), poor use of examples when describing the model, nonstandard terminology (“concepts” and “context” are extremely vague terms that are not defined precisely) and discussions to vaguely related work that does not clarify but rather obscures what is going on in the paper.\n\nFor the evaluation, since this paper proposes a technique for learning a posterior recognition model, it would be extremely interesting to see if the model is capable of recognizing images appropriately that combine “contexts” that were not observed during training. The experiments show that the generation component is quite effective, but this is an obvious missing step.\n\nAnyway, some other related work:\nLample et al. (2017 NIPS). Fader Networks. I realize this work is more ambitious since it seeks to be a fully generative model including of the contexts/attributes. But I mostly bring it up because it is an impressively clear presentation of a model and experimental set up." ]
[ 6, 4, 4 ]
[ 5, 3, 4 ]
[ "iclr_2018_BJRxfZbAW", "iclr_2018_BJRxfZbAW", "iclr_2018_BJRxfZbAW" ]
iclr_2018_HkPCrEZ0Z
Combining Model-based and Model-free RL via Multi-step Control Variates
Model-free deep reinforcement learning algorithms are able to successfully solve a wide range of continuous control tasks, but typically require many on-policy samples to achieve good performance. Model-based RL algorithms, on the other hand, are sample-efficient, but learning accurate global models of complex dynamic environments has turned out to be tricky in practice, which leads to the unsatisfactory performance of the learned policies. In this work, we combine the sample-efficiency of model-based algorithms and the accuracy of model-free algorithms. We leverage multi-step neural network based predictive models by embedding real trajectories into imaginary rollouts of the model, and use the imaginary cumulative rewards as control variates for model-free algorithms. In this way, we achieve the strengths of both sides and derive an estimator which is not only sample-efficient, but also unbiased and of very low variance. We present our evaluation on the MuJoCo and OpenAI Gym benchmarks.
rejected-papers
The paper has some potentially interesting ideas but it feels very preliminary. The experimental section in particular needs a lot more work.
train
[ "BkOz8MSxG", "rkiek__xz", "r1ftQIqgz", "Hk1nV1A7G", "Syhp8-b-M" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "public", "public" ]
[ "The paper studies a combination of model-based and model-free RL. The idea is to train a forward predictive model which provides multi-step estimates to facilitate model-free policy learning. Some parts of the paper lack clarity and the empirical results need improvement to support the claims (see details below). \n\nClarity \n- The main idea of the proposed method is clear. \n- Some notations and equations are broken. For example: \n(1) The definition of \\bar{A} in Section 4 is broken. \n(2) The overall objective in Section 5 is broken. \n(3) The computation of w in Algorithm 2 is problematic. \n- Some details of the experiments/methods are confusing. For example: \n(1) The step number k is dynamically determined by a short line search as in Section 4 ``Dynamic Rollout’’, but later in the experiments (Section 6) the value of k is set to be 2 uniformly. \n(2) Only the policy and value networks specified. The forward models are not specified. \n(3) In algorithm 1, what exact method is used in determining if \\mu is converged or not? \n\nOriginality\nThe proposed method can be viewed as a multi-step version of the stochastic value gradient algorithm. An empirical comparison could be helpful but not provided. \n\nThe idea of the proposed method is related to the classic Dyna methods from Sutton. A discussion on the difference would be helpful. \n\nSignificance\n- The paper could compare against other relevant baselines that combine model-based and model-free RL methods, such as SVG (stochastic value gradient). \n- To make a fair comparison, the results in Table 1 should consider the amount of data used in pre-training the forward models. Current results in Table 1 only compare the amount of data in policy learning. \n- Figure 3 is plotted for just one random starting state. The Figure could have been more informative if it was averaged over different starting states. The same issue is found in Figure 2. It would be helpful if the plots of other domains are provided. \n- In Figure 2, even though the diff norm fluctuates, the cosine similarity remains almost constant. Does it suggest the cosine similarity is not effective in measuring the state similarity? \n- Figure 1, 4 and 5 need confidence intervals or standard errors. \n\nPros:\n- The research direction in combining model-based and model-free RL is interesting.\n- The main idea of the proposed method is clear. \n\nCons:\n- Parts of the paper are unclear and some details are missing. \n- The paper needs more discussion and comparison to relevant baseline methods. \n- The empirical results need improvement to support the paper’s claims. \n", "The main idea of the paper is to improve off-policy policy gradient estimates using control variates based on multi-step rollouts, and reduce the variance of those control variates using the reparameterization trick. This is laid out primarily in Equations 1-5, and seems like a nice idea, although I must admit I had some trouble following the maths in Equation 5. They include results showing that their method has better sample efficiency than TRPO (which their method also uses under the hood to update value function parameters).\n\nMy main issue with this paper is that the empirical section is a bit weak, for instance only one run seems to be shown for both methods, there is no mention of hyper-parameter selection, and the measure used for generating Table 1 seems pretty arbitrary to me (how were those thresholds chosen?). 
In addition, one thing I would have liked to get out of this paper is a better understanding of how much each component helps. This could have been done via empirical work, for instance:\n- Explore the effect of the planning horizon, and implicitly compare to SVG(1), which as the authors point out is the same as their method with a horizon of 1.\n- Show the effect of the reparameterization trick on estimator variance.\n- Compare the bias and variance of TRPO estimates vs the proposed method.", "This paper presents a model-based approach to variance reduction in policy gradient methods. The basic idea is to use a multi-step dynamics model as a \"baseline\" (more properly a control variate, as the terminology in the paper uses, but I think baselines are more familiar to the RL community) to reduce the variance of a policy gradient estimator, while remaining unbiased. The authors also discuss how to best learn the type of multi-step dynamics that are well-suited to this problem (essentially, using off-policy data via importance weighting), and they demonstrate the effectiveness of the approach on four continuous control tasks.\n\nThis paper presents a nice idea, and I'm sure that with some polish it will become a very nice conference submission. But right now (at least as of the version I'm reviewing), the paper reads as being half-finished. Several terms are introduced without being properly defined, and one of the key formalisms presented in the paper (the idea of \"embedding\" an \"imaginary trajectory\" remains completely opaque to me. Further, the paper seems to simply leave out some portions: the introduction claims that one of the contributions is \"we show that techniques such as latent space trajectory embedding and dynamic unfolding can significantly boost the performance of the model based control variates,\" but I see literally no section that hints at anything like this (no mention of \"dynamic unfolding\" or \"latent space trajectory embedding\" ever occurs later in the paper).\n\nIn a bit more detail, the key idea of the paper, at least to the extent that I understood it, was that the authors are able to introduce a model-based variance-reduction baseline into the policy gradient term. But because (unlike traditional baselines) introducing it alone would affect the actual estimate, they actually just add and subtract this term, and separate out the two terms in the policy gradient: the new policy gradient like term will be much smaller, and the other term can be computed with less variance using model-based methods and the reparameterization trick. But beyond this, and despite fairly reasonable familiarity with the subject, I simply don't understand other elements that the paper is talking about.\n\nThe paper frequently refers to \"embedding\" \"imaginary trajectories\" into the dynamics model, and I still have no idea what this is actually referring to (the definition at the start of section 4 is completely opaque to me). I also don't really understand why something like this would be needed given the understanding above, but it's likely I'm just missing something here. But I also feel that in this case, it borders on being an issue with the paper itself, as I think this idea needs to be described much more clearly if it is central to the underlying paper.\n\nFinally, although I do think the extent of the algorithm that I could follow is interesting, the second issue with the paper is that the results are fairly weak as they stand currently. 
The improvement over TRPO is quite minor in most of the evaluated domains (other than possibly in the swimmer task), even with substantial added complexity to the approach. And the experiments are described with very little detail or discussion about the experimental setup.\n\nNor are either of these issues simply due to space constraints: the paper is 2 pages under the soft ICLR limit, with no appendix. Not that there is anything wrong with short papers, but in this case both the clarity of presentation and details are lacking. My honest impression is simply that this is still work in progress and that the write up was done rather hastily. I think it will eventually become a good paper, but it is not ready yet.", "Thank you very much for your reviews...\nWe acknowledge that the experiments section in the current version of this paper is not strong enough. \nAs all the authors agreed, that we should submit a revised version of this paper to a later venue, adding more experimental numbers. \n\nTo reply your questions:\n\nReview 3: Let me briefly explain the terminology here. Latent space trajectory embedding means that, given a real-world trajectory which is generated by the environment and the policy, we can \"embed\" the trajectory into an imaginary trajectory that is generated by the model and the policy. We keep the latent variables of the policy fixed so that if the model perfectly matches the environment, the imaginary trajectory perfectly matches the real trajectory, so this is what we called \"trajectory embedding\" in the paper. The term \"dynamic unfolding\" means that we unfold the forward dynamics model for multiple times, and the actual steps of the unfolding are done dynamically. Namely, we do the unfolding of the dynamics model \"on the fly\". Roughly speaking, we try to find the best number of timesteps for unfolding by runtime evaluation.\n\nReviewer1: Thank you. We will address your problem in the next version. \n\nReviewer2: Thank you. We will address your problem in the next version. \n\n", "1) It would be very interesting to see if any improvements in sample efficiency can been seen in not so toy and more high dimensional domain. Starting at least from humanoid walking task. It's not clear at the moment if there are any benefits for really high-dimensional challenging tasks from the proposed algorithm. \n\n2) Also why a baseline for comparison is only TRPO? Not more sample efficient PPO, the same Q-prop mentioned in paper, DDPG is also a good candidate for baseline. Looks like there won't be any advantage in sample efficiency compare to these baselines. TRPO is the most convenient choice for comparison." ]
[ 5, 5, 4, -1, -1 ]
[ 4, 4, 3, -1, -1 ]
[ "iclr_2018_HkPCrEZ0Z", "iclr_2018_HkPCrEZ0Z", "iclr_2018_HkPCrEZ0Z", "iclr_2018_HkPCrEZ0Z", "iclr_2018_HkPCrEZ0Z" ]
iclr_2018_SkZ-BnyCW
Learning Deep Generative Models With Discrete Latent Variables
There have been numerous recent advancements in learning deep generative models with latent variables thanks to the reparameterization trick, which allows deep directed models to be trained effectively. However, since the reparameterization trick only works on continuous variables, deep generative models with discrete latent variables still remain hard to train and perform considerably worse than their continuous counterparts. In this paper, we attempt to shrink this gap by introducing a new architecture and its learning procedure. We develop a hybrid generative model with binary latent variables that consists of an undirected graphical model and a deep neural network. We propose an efficient two-stage pretraining and training procedure that is crucial for learning these models. Experiments on binarized digits and images of natural scenes demonstrate that our model achieves close to state-of-the-art performance in terms of density estimation and is capable of generating coherent images of natural scenes.
rejected-papers
The reviewers agreed that while this is a well-written paper, it is low on novelty and does not make a substantial enough contribution. They also pointed out that although the reported MNIST results are highly competitive, possibly due to the use of a powerful ResNet decoder, the CIFAR10/ImageNet results are underwhelming.
train
[ "r1yDSeCJM", "SkHg5PQxf", "rkp8JGcef", "rJ7n07jGz", "BJ0mHjnlf", "SkaVZ-9xf", "H1gx50Xyf", "BJdtEE8AW" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "public" ]
[ "Summary of the paper:\nThe paper proposes to augment a variational auto encoder (VAE) with an binary restricted Boltzmann machine (RBM) in the role of the prior of the generative model. To yield a good initialisation of the parameters of the RBM and the inference network a special pertaining procedure is introduced. The model produces competitive Likelihood results on MNIST and was further tested on CIFAR 10. \n\nClarity and quality: \n\n1. From the description of the pertaining procedure and the appendix B I got the impression that the inference network maps into [0,1] and not into {0,1}. Does it mean, you are not really considering binary latent variables (making the RBM model the values in [0,1] by its probability p(z|h))? \n\n2. on page 2:\nRWS....\"derive a tighter lower bound\": Where does the \"tighter\" refer to? \n\n3. \"multivariate Bernoulli modeled by an RBM\": Note, while in a multivariate Bernoulli the binary variables would be independent from each others, this is usually not the case for the visible variables of RBMs (only in the conditional distribution given the state of the hidden variables).\n\n4. The notation could be improved, e.g.:\n-x_data and x_sample are not explained\n- M is not defined in equation 5. \n\n5. \"this training method has been previously used to produce the best results on MNIST\" Note, that parallel tempering often leads to better results when training RBMs (see http://proceedings.mlr.press/v9/desjardins10a/desjardins10a.pdf) . Furthermore, centred RBMs are also get better results than vanilla RBMs (see: http://jmlr.org/papers/v17/14-237.html).\n\nOriginality and significance:\nAs already mentioned in a comment on open-review the current version of the paper misses to mention one very related work: \"discrete variational auto encoders\". Also \"bidirectional Helmholtz machines\" could be mentioned as generative model with discrete latent variables. The results for both should also be reported in Table 1 (discrete VAEs: 81,01, BiHMs: 84,3). \n\nFrom the motivation the advantages of the model did not become very clear to me. Main advantage seems to be the good likelihood result on MNIST (but likelihood does not improve compared to IWAE on CIFAR 10 for example). However, using an RBM as prior has the disadvantage that sampling from the generative model requires running a Markov chain now while having a solely directed generative model allows for fast sampling. \n\nExperiments show good likelihood results on MNIST. Best results are obtained when using a ResNet decoder. I wondered how much a standard VAE is improved by using such a powerful decoder. Reporting this, would allow to understand, how much is gained from using a RBM for learning the prior. \n\nMinor comments:\npage 1:\n\"debut of variational auto encoder (VAE) and reparametrization trick\" -> debut of variational auto encoders (VAE) and the reparametrization trick\",\npage 2:\n\"with respect to the parameter of p(x,z)\" -> \"with respect to the parameters of p(x,z)\"\n\"parameters in p\" -> \"parameters of p\" \n\"is multivariate Bernoulli\" -> \"is a multivariate Bernoulli\"\n\"we compute them\" -> \"we compute it\" \npage 3:\n\"help find a good\" -> \"help to find a good\"\npage 7:\n\"possible apply\" -> \"possible to apply\"", "While I acknowledge that training generative models with binary latent variables is hard, I'm not sure this paper really makes valuable progress in this direction. 
The only results that seem promising are those on binarized MNIST, for the non-convolutional architecture, and this setting isn't particularly exciting. All other experiments seem to suggest that the proposed model/algorithm is behind the state of the art. Moreover, the proposed approach is fairly incremental, compared to existing work on RWS, VIMCO, etc.\n\nSo while this work seem to have been seriously and thoughtfully executed, I think it falls short of the ICLR acceptance bar.", "Interesting work, but I’m not convinced by the arguments nor by the experiments. Similar models have been trained before; it’s not clear that the proposed pretraining procedure is a practical step forwards. And quite some decisions seem ad-hoc and not principled. \n\nNevertheless, interesting work for everyone interested in RBMs as priors for “binary VAEs”. \n\n", "We thank reviewers for their valuable feedback. There are several points we would like to clarify.\n\nMotivation: The motivation of the work is to study the effect of a learnable prior for generative models with latent variables. The learnable prior can potentially take forms of any graphical model, while here we focus on using RBM due to its simplicity. This strength over vanilla VAEs has been shown on CIFAR10 and ImageNet64 experiments. In addition, these models can still be quantitatively evaluated in terms of density estimation (as we demonstrate in our work), which is a benefit compared top GAN-like models since they are relatively hard to evaluate quantitatively.\n\nQuantitative performance: Our model performs well on MNIST. The ResNet model uses a similar network as IAF-VAE, which was previously shown to achieve state-of-the-art results using deep convnets (cited in Table 2 and mentioned in the main text). IAF-VAE was about 2 nats better than vanilla VAE using the same ResNet architecture. Thus, the fact that our model performs slightly better than IAF-VAE shows that the learned RBM prior has some merits. The current state-of-the-art models use PixelCNN/PixelRNN-based decoders, which by themselves are very strong density estimation models. PixelCNN/PixelRNN-based decoders can potentially be integrated into our framework as well. \n\nRegarding CIFAR10: Compressing real valued images into binary is a hard problem. In fact, if we increase the dimension of the latent space from 1024 to 2048 and use a 2048-4096 RBM, the performance of our model can be substantially improved, achieving Test NLL of 4.54 bits/dim, the same as that of IWAE. \n\nWe focus on comparing our results with IWAE as it represents a strong baseline that uses discrete latent variables. We also compared with models that use unconditional Bernoulli prior, but these models performed much worse, compared to both, IWAE and our model, in terms of both density estimation and generated samples. \n\n\nClarifications for Reviewer 2\n\nThank you for your detailed feedback. \n\n1. Regarding inference network mapping into [0,1] and not into {0,1}: \n\nAs mentioned at the end of the pretraining section, the “soft-binarization” is removed after pertaining and a sigmoid layer is added to produce valid probabilities. During joint training, we only use samples {0, 1}.\n\n2. Regarding tightness of the lower-bound:\n\nLower bound in Eq.2 is tighter than a single sample bound in Eq.1 (as was originally derived in IWAE paper). We will change the text to make this point clear.\n\n3. Regarding \"multivariate Bernoulli modeled by an RBM\": \n\nThanks for pointing this out. 
Yes the use of multivariate Bernoulli is not rigorous here. We will change this in revision.\n\n4. Improving notation:\n\nThanks for catching typos. The x_data refers to data points from training set while x_sample is samples from model distribution. M refers to the model distribution. We will check other notations as well.\n\n5. Regarding parallel tempering/centred RBMs:\n\nThanks for pointing this out. Here we mean that this method has produced good RBM results in practice on MNIST in terms of density estimation, and has been widely used in practice (in addition to Persistent CD) We will correct this. \n", "Thank you for your comments \n\n>>> Just to clarify: You pretrain the encoder/decoder pair and pretrain a RBM on their latent representation? And then during joint training (section 3.2) you block direct gradient flow (disable the soft-binarization) and use VIMCO?\n\nYes. The direct gradient flow is blocked during joint training with VIMCO and Contrastive RWS.\n\n\n>>> I’m not convinced I follow the second part of the argument in 3.0: Training SBNs or DBNs with e.g. Gumbel-softmax based methods would allow gradients to flow in a very similar fashion. Doesn’t the “gradient flow” depend on the training method / gradient estimator (in your case soft-binarization) rather than this models structure?\n\nThe difference between our model and SBNs or DBNs is better illustrated in Figure 1. In DBN and SBN, every layer is stochastic and defines its own generative distribution p(z_i+1|z_i). There are N stochastic layers and thus N intermediate generative distributions. Correspondingly, there are N approximate inference distributions (as denoted by multiple upward arrows). The i-th inference layer is trained to approximate the posterior distribution of i-th generative layer. The learning signal for each layer depends locally on the input output states, but not on the gradient information that is propagated from deeper layers. In our model, the decoder and encoder are deterministic and continuous. They define one pair of p(x|z) and q(z|x) and there are no intermediate stochastic layers. Thus multiple layers in encoder/decoder are trained with gradients that flow freely within encoder/decoder.\n\nIf we train DBN or SBN with Gumbel-softmax based methods, the stochastic nodes in each layer are replaced with continuous approximations. In that case the gradient flow between layers is possible but not as freely as in our model because still in DBN or SBN, both the encoder and decoder would still be represented as stochastic layers. Since soft-binarization is only used during pretraining, our model does not contain any continuous relaxation during joint training (our encoder and decoder are continuous and deterministic), which is a major difference from continuous relaxation based methods.\n\n\n>>> From this perspective your model is directly comparable to Discrete VAEs, isn’t it? Where Discrete VAEs introduced a different reparam. based training method.\n\nBoth Discrete VAEs and our model utilize deep continuous encoder/decoder for stronger representation power. Discrete VAEs have a layer of continuous latent variables between bipartite Boltzmann machine prior and encoder/decoder. They project posterior and prior into a continuous space to make the \"autoencoder\" part of the loss be fully differentiable. \n\nIn our method, the \"autoencoder\" is fully differentiable during pretraining. In joint training, encoder and decoder only transmit stochastic states and thus block the gradient flow between them. 
Another difference in architecture is that Discrete VAEs use bipartite BM with both parts connected to the rest of the model while our models use 2 layer BM (RBM) with first layer connected to the latent space z and the second being fully hidden. In terms of density estimation, our model outperforms Discrete VAE on MNIST(79.58 vs. 81.01, as reported by [1]). We will clarify this point and add additional experimental results comparing our model to Discrete VAEs.\n\n\n>>> Have you tried alternative pretraining methods? E.g. using Gumbel-softmax based instead eqn. (3), or VIMCO trained factorial top layer SBN? Do we have any idea why joint training might be so hard?\n\nWe tried using soft-binarization with Gaussian noise but that introduces strong artifacts in generated images. Pretraining with Gumbel-softmax based methods is an interesting idea. In their original paper [2][3], the methods are tested on models with one or two layers. Thus it is hard to conclude immediately whether that will work as effectively on deep networks. But that will be worth exploring. Pretraining with SBN is more tricky. Note that it still does not solve the problem that deep generative models are very hard to train from scratch with REINFORCE style algorithms and discrete latent variables. Pretraining with factorial prior also adds stronger constrain on the approximate posterior compared with the RBM prior.\n\n\n>>> The title seems very broad - large parts of the paper propose and evaluate a pretraining procedure for a specific two layer DBM architecture.\n\nThank you for pointing this out, we will make the title be more focused. \n\n\n>>> I’m curious: What are typical log Z estimate for your models in table 1?\n\nBetween 265 and 270.\n\n\n[1] Discrete Variational Autoencoders\n[2] Categorical Reparameterization with Gumbel-Softmax\n[3] The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables\n", "Interesting work!\n\nJust to clarify: You pretrain the encoder/decoder pair and pretrain a RBM on their latent representation? And then during joint training (section 3.2) you block direct gradient flow (disable the soft-binarization) and use VIMCO? \n\nI’m not convinced I follow the second part of the argument in 3.0: Training SBNs or DBNs with e.g. Gumbel-softmax based methods would allow gradients to flow in a very similar fashion. Doesn’t the “gradient flow” depend on the training method / gradient estimator (in your case soft-binarization) rather than this models structure? \n\nFrom this perspective your model is directly comparable to Discrete VAEs, isn’t it? Where Discrete VAEs introduced a different reparam. based training method. \n\nHave you tried alternative pretraining methods? E.g. using Gumbel-softmax based instead eqn. (3), or VIMCO trained factorial top later SBN? Do we have any idea why joint training might be so hard?\n\nThe title seems very broad - large parts of the paper propose and evaluate a pretraining procedure for a specific two layer DBM architecture. \n\nI’m curious: What are typical log Z estimate for your models in table 1? ", "Thank you for pointing out this paper! We will include the discussion on Discrete VAE in future revisions.\n\nAlthough both Discrete VAE and our model have discrete latent variables, the detailed architectures are different. In Discrete VAE, the RBM (or bipartite Boltzmann machine) is fully hidden(z) and connects to the encoder-decoder through a set of continuous smoothing variables. Our model uses a latent(z)-hidden(h) RBM. 
The visible layer of RBM directly connects to encoder-decoder and they only exchange discrete states.\n\nThese two papers also have different focuses. In Discrete VAE, they introduce a method to project posterior and prior into a continuous space so that the discrete variables can be integrated out. This can be seen as “reparameterizing” discrete into continuous to make the autoencoder term fully differentiable. In our paper, we try to answer whether it is possible to train a DBN-inspired model without any reparameterization but with proper learning procedure. This results in a conceptually straightforward model that actually outperforms the Discrete VAE on MNIST (79.58 VS 81.01). We also study how well our method scales to real images, which is not mentioned in many previous works using discrete latent variables.", "What were your reasons for choosing such a general title? It would be understandable if this paper were the first work in this area or if it provided some sort of unifying view of prior of work on such models (DBNs, DBMs, SBNs etc.), but it is not the case.\n\nIt would also be good to discuss how the proposed model is related to Discrete VAEs, which also combine an RBM with a directed mapping." ]
[ 4, 5, 4, -1, -1, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, -1, -1 ]
[ "iclr_2018_SkZ-BnyCW", "iclr_2018_SkZ-BnyCW", "iclr_2018_SkZ-BnyCW", "iclr_2018_SkZ-BnyCW", "SkaVZ-9xf", "iclr_2018_SkZ-BnyCW", "BJdtEE8AW", "iclr_2018_SkZ-BnyCW" ]
iclr_2018_HkbmWqxCZ
The Mutual Autoencoder: Controlling Information in Latent Code Representations
Variational autoencoders (VAE) learn probabilistic latent variable models by optimizing a bound on the marginal likelihood of the observed data. Beyond providing a good density model, a VAE assigns to each data instance a latent code. In many applications, this latent code provides a useful high-level summary of the observation. However, the VAE may fail to learn a useful representation when the decoder family is very expressive. This is because maximum likelihood does not explicitly encourage useful representations and the latent variable is used only if it helps model the marginal distribution. This makes representation learning with VAEs unreliable. To address this issue, we propose a method for explicitly controlling the amount of information stored in the latent code. Our method can learn codes ranging from independent to nearly deterministic while benefiting from decoder capacity. Thus, we decouple the choice of decoder capacity and the latent code dimensionality from the amount of information stored in the code.
rejected-papers
This is a well-written paper that aims to address an important problem. However, all the reviewers agreed that the experimental section is currently too weak for publication. They also made several good suggestions about improving the paper and the authors are encouraged to incorporate them before resubmitting.
train
[ "SkOy0Pokz", "rki0XSHlf", "Sy-QZtjgz", "SJBM0BYXz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author" ]
[ "\nSummary\n\nThis paper proposes a penalized VAE training objection for the purpose of increasing the information between the data x and latent code z. Ideally, optimization would consist of maximizing log p(x) - | I(x,z) - M |, where M is the user-specified target mutual information (MI) and I(x,z) is the model’s current MI value, but I(x,z) is intractable, necessitating the use of an auxiliary model r(z|x). Optimization, then, consists of alternating gradient ascent on the VAE parameters and r’s parameters. Experiments on simulations and text data are reported, showing that increasing M has the desired effect of allowing more deviation from the prior. Specifically, this is shown through text generation where the sampled sentences become more varied as M is decreased and better reconstructed as M is increased. \n\n\nEvaluation\n\nPros: I like how this paper formalizes failure in representation learning as information loss in z---although the formulation is not particularly novel, i.e. [Zhao et al., ArXiv 2017]), and constructs an explicit, penalized objective to allow the user to specify the amount of information retained in z. In my opinion, the proposed objective is more transparent than the objectives proposed by related work. For instance, Chen et al.’s (2017) Lossy VAE, while aiming to solve essentially the same problem, does so by parameterizing the prior and using a windowed decoder, but there is no explicit control mechanism as far as I’m aware (except for how many parameters / window size). Perhaps the Beta-VAE’s [Higgins et al., ICLR 2017] KLD weight is similarly interpretable (as beta increases, less information is retained), but I like that M has the clear meaning of mutual information---whereas the beta in the Beta-VAE is just a Lagrangian. In terms of experiments, I like the first simulation; it’s a convincing sanity check. As for the second, I like the spirit of it, but I have some criticisms, as I’ll explain below.\n\nCons: The method requires training an auxiliary model r(z|x) to estimate I(x,z). While I don’t find the introduction of r(z|x) problematic, I do wish there was more discussion and analysis of how well the mutual information is being approximated during training, especially given some of the simplifying assumptions, such as r(z|x)=p(z|x). If the MI estimate is way off, that detracts from the method and makes an alternative like the Beta-VAE---which doesn’t require an auxiliary model---more appealing, since what makes the MAE superior---its principled targeting of MI---does not hold in practice.\n\nAs for the movie review experiment, I find the sentence samples a bit anecdotal. Was the seed sentence (“there are many great scenes of course”) randomly chosen or hand picked? Was this interpolation behavior typical? I ask these questions because I find the plot in Figure 3 all but meaningless. It’s good that we see reconstruction quality go up as M increases, as expected, but the baseline VAE model is a strawman. How does reconstruction percentage look for the Bowman et al. (2015) VAE? What about the Beta-VAE? Or Lossy VAE? Figure 3 would be okay if there were more experiments, but as it is the only quantitative result, more work should have gone in to it. For instance, a compelling result would be if we see one or more of the models above plateau in reconstruction percentage and the MAE surpass that plateau.\n\n\nConclusions\n\nWhile I found aspects of this paper interesting, I recommend rejection primarily for two reasons. 
The first is that I would like to see how well the mutual information is being estimated during training. If the estimate is way off, this makes the method less appealing as what I like about it---the interpretable MI target---is not really a ‘target’, in practice, and rather, is a rough hyperparameter similar to the Beta-VAE’s beta term (which has the added benefit of no auxiliary model). The second reason is the paper’s weak experimental section. The only quantitative result is Figure 3, and while it shows reconstruction percentage increases with M, there is no way to contextualize the number as the only comparison model is a weak VAE, which gives ~ 0%. Questions I would like to see answered: How good is the MI estimate? How close is the converged VAE to the target? How does the model compare to the Bowman et al. VAE or the Beta-VAE? (It would be quite compelling to show similar or improved performance without the training tricks used by Bowman et al.) Can we somehow estimate the appropriate M directly from data (such as based on the entropy of training or validation set) in order to set the target rigorously? \n\n\n1. S. Zhao, J. Song, and S. Ermon. “InfoVAE: Information Maximizing Variational Autoencoders.” ArXiv 2017.\n2. X. Chen, D. Kingma, T. Salimans, Y. Duan, P. Dhariwal, J. Shulman, I. Sutskever, and P. Abbeel. “Variational Lossy Autoencoder.” ICLR 2017.\n3. I. Higgins, L. Matthey, A. Pal, C. Burgess, X. Glorot, M. Botvinick, S. Mohamed, and A. Lerchner. “Beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework.” ICLR 2017\n4. S. Bowman, L. Vilnis, O. Vinyas, A. Dai, R. Jozefowicz, and S. Bengio. “Generating Sentences from a Continuous Space.” CoNLL 2016.", "This paper presents mutual autoencoders (MAE). MAE aims to address the limitation of regular variational autoencoders (VAE) for latent representation learning — VAE sometimes simply ignores the latent code z, especially with a powerful decoding distribution. The idea of MAE is to optimize the VAE objective subject to a constraint on the mutual information between the data x and latent code z: setting the mutual information constraints larger will force the latent code z to learn a meaningful representation of the data. An approximation strategy is employed to approximate the intractable mutual information. Experimental results on both synthetic data and movie review data demonstrate the effectiveness of the MAEs. \n\nOverall, the paper is well-written. The problem that VAEs fail to learn a meaningful representation is a well-known issue. This paper presents a simple, yet principled modification to the VAE objective to address this problem. I do, however, have two major concerns about the paper:\n\n1. The proposed idea to add a mutual information constraint between the data x and latent code z is a very natural fix to the failure of regular VAEs. However, mutual information itself is not a quantity that is easy to comprehend and specify. This is not like, e.g., l2 regularization parameter, for which there exists a relatively clear way to specify and tune. For mutual information, at least it is not clear to me, how much mutual information is “enough” and I am pretty sure it is model/data-dependent. To make it worse, there exist no metrics in representation learning for us to easily tune this mutual information constraint. It seems the only way to select the mutual information constraint is to qualitative inspect the model fits. This makes the method less practical. \n\n2. 
The approximation to the mutual information seems rather loose. If I understand correctly, the optimization of MAE is similar to that of a regular VAE, with an additional parametric model r_w(z|x) which is used to approximate the infomax bound. (And this also adds an additional term to the gradient wrt \\theta). r_w(z|x) is updated at the same time as \\theta, which means r_w(z|x) is quite far from being an optimal r* as it is intended, especially early during the optimization. Further more, all the derivation following Eq (12-13) are based on r* being optimal, while in reality, it is probably not even close. This makes the whole approximation quite hand-waving. \n\nRelated to 2, the discussion in Section 6 deserves more elaboration. It seems that having a flexible encoder is quite important, yet the authors only mention lightly that they use the approximate posterior from Cremer et al. (2017). Will MAE not work without this? How will VAE (without the mutual information constraint) work with this? A lot of the details seem to be glossed over. \n\nFurthermore, this work is also related to the deep variational information bottleneck of Alemi et al. 2017 (especially in the appendix they derived the VAE objective using information bottleneck principle). My intuition is that using a larger mutual information constraint in MAE is somewhat similar to setting the regularization \\beta to be smaller than 1 — both are making the approximating posterior more concentrated. I wonder if the authors have explored this idea. \n \n\nMinor comments:\n\n1. It would be more informative to include the running time in the presented results. \n\n2. Since the goal of r_w(z | x) is to approximate the posterior p(z | x), what about directly using q(z | x) to approximate it? \n\n3. In Algorithm 1, should line 14 and 15 be swapped? It seems samples are required in line 14 as well. \n\n4. Nitpicking: technically the model in Eq (1) is not a hierarchical model. \n\n", "The authors propose a variational autoencoder constrained in such a way that the mutual information between the observed variables and their latent representation is constant and user specified. To do so, they leverage the penalty function method as a relaxation of the original problem, and a variational bound (infomax) to approximate the mutual information term in their objective.\n\nI really enjoyed reading the paper, the proposed approach is well motivated and clearly described. However, the experiments section is very weak. Although I like the illustrative toy problem, in that it clearly highlights how the method works, the experiment on real data is not very convincing. Further, the authors do not consider a more rigorous benchmark including additional datasets and state-of-the-art modelling approaches for text. \n\n- {\\cal Z} in (1) not defined, same for \\Theta.", "We would like to thank the reviewers for reading the paper so carefully and for their detailed reviews. The main weaknesses of the paper seem to be the experimental section and a missing analysis of how well MI is estimated by our method. We will definitely work on this and submit to a later conference." ]
[ 4, 5, 4, -1 ]
[ 5, 4, 4, -1 ]
[ "iclr_2018_HkbmWqxCZ", "iclr_2018_HkbmWqxCZ", "iclr_2018_HkbmWqxCZ", "iclr_2018_HkbmWqxCZ" ]
iclr_2018_ryb83alCZ
Towards Unsupervised Classification with Deep Generative Models
Deep generative models have advanced the state-of-the-art in semi-supervised classification; however, their capacity for deriving useful discriminative features in a completely unsupervised fashion for classification in difficult real-world data sets, where adequate manifold separation is required, has not been adequately explored. Most methods rely on defining a pipeline of deriving features via generative modeling and then applying clustering algorithms, separating the modeling and discriminative processes. We propose a deep hierarchical generative model which uses a mixture of discrete and continuous distributions to learn to effectively separate the different data manifolds and is trainable end-to-end. We show that by specifying the form of the discrete variable distribution we are imposing a specific structure on the model's latent representations. We test our model's discriminative performance on the task of CLL diagnosis against baselines from the field of computational FC, as well as the Variational Autoencoder literature.
rejected-papers
The authors propose a hierarchical VAE model with a discrete latent variable in the top-most layer for unsupervised learning of discriminative representations. While the reported results on the two flow cytometry datasets are encouraging, they are insufficient to draw strong conclusions about the general effectiveness of the proposed architecture. Also, as two of the reviewers stated, the proposed model is very similar to several VAE models in the literature. This paper seems better suited for a more applied venue than ICLR.
train
[ "SJk7H29xM", "SyangtilG", "BkmqxxDbz", "rJFs4Qh7M" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author" ]
[ "This paper addresses the question of unsupervised clustering with high classification performance. They propose a deep variational autoencoder architecture with categorical latent variables at the deepest layer and propose to train it with modifications of the standard variational approach with reparameterization gradients. The model is tested on a medical imagining dataset where the task is to distinguish healthy from pathological lymphocytes from blood samples. \n\nI am not an expert on this particular dataset, but to my eye the results look impressive. They show high sensitivity and high specificity. This paper may be an important contribution to the medical imaging community.\n\nMy primary concern with the paper is the lack of novelty and relatively little in the way of contributions to the ICLR community. The proposed model is a simple variant on the standard VAE models (see for example the Ladder VAE https://arxiv.org/abs/1602.02282 for deep models with multiple stochastic layers). This would be OK if a thorough evaluation on at least two other datasets showed similar improvements as the lymphocytes dataset. As it stands, it is difficulty for me to assess the value of this model.\n\nMinor questions / concerns:\n\n- The authors claim in the first paragraph of 3.2 that deterministic mappings lack expressiveness. Would be great to see the paper take this claim seriously and investigate it.\n- In equation (13) it isn't clear whether you use q_phi to be the discrete mass or the concrete density. The distinction is discussed in https://arxiv.org/abs/1611.00712\n- Would be nice to report the MCC in experimental results.", "The authors propose a deep hierarchical model for unsupervised classification by using a combination of latent continuous and discrete distributions.\n\nAlthough, the detailed description of flow cytometry and chronic lymphocytic leukemia are appreciated, they are probably out of the scope of the paper or not relevant for the presented approach.\n\nThe authors claim that existing approaches for clustering cell populations in flow cytometry data are sensitive to noise and rely on cumbersome hyperparameter specifications, which in some sense is true, however, that does not mean that the proposed approach is less sensitive to noise or that that the proposed model has less free-parameters to tune (layers, hidden units, regularization, step size, link function, etc.). In fact, it is not clear how the authors would be able to define a model architecture without label information, what would be the model selection metric to optimize, ELBO?. At least this very issue is not addressed in the manuscript.\n\nIn Figure 1, please use different colors for different cell types. It is not described, but it would be good to stress out that each of the 4 components in Figure 1 right, corresponds to a mixture component.\n\nThe results in Tables 1 and 2 are not very convincing without clarity on the selection of the thresholds for each of the models. It would be better to report threshold-free metrics such as area under the ROC or PR curve. 
From Figures 3 and 4 for example, it is difficult to grasp the performance gap between the proposed approach and \\beta-VAE.\n\n- FC and CLL are not spelled out in the introduction.\n- Equation (5) is confusing, what is h, y = h or is h a mixture of Gaussians with \\alpha mixing proportions?\n- Equation (6) should be q(z_L|z)\n- Equation (8) is again confusing.\n- Equation (10) is not correct, x can't be conditioned on h, as it is clearly conditioned on z_1.\n- Equation (11) it should be q_\\phi().\n- It is not clear why the probabilities are thresholded at 0.5\n- Figures 3 and 4 could use larger markers and font sizes.", "Summary\n\nThe authors propose a hierarchical generative model with both continuous and discrete latent variables. The authors empirically demonstrate that the latent space of their model separates well healthy vs pathological cells in a dataset for Chronic lymphocytic leukemia (CLL) diagnostics. \n\n\nMain\n\nOverall the paper is reasonably well written. There are a few clarity issues detailed below.\nThe results seem very promising as the model clearly separates the two types of cells. But more baseline experiments are needed to assess the robustness of the results. \n\nNovelty\n\nThe model introduced is a variant of a deep latent Gaussian model, where the top-most layer is a discrete random variable. Furthermore, the authors employ the Gumbel-trick to avoid having to explicitly marginalize the discrete latent variables.\n\nGiven the extensive literature on combining discrete and continuous latent variables in VAEs, the novelty factor of the proposed model is quite weak. \n\nThe authors use the Gumbel-trick in order to avoid explicit marginalization over the discrete variables. However, the number of categories in their problem is small (n=2), so the computational overhead of an explicit marginalization would be negligible. The result would be equivalent to replacing the top of the model p(y) p(z_L|y) by a GMM p_{GMM}(z_L) with two Gaussian components only.\nGive these observations, it seems that this is an unnecessary complication added to the model as an effort to increase novelty. \nIt would be very informative to compare both approaches.\n\nI would perhaps recommend this paper for an applied workshop, but not for publication in a main conference.\n\nDetails:\n\n1) Variable h was not defined before it appeared in Eq. (5). From the text/equations we can deduce h = (y, z_1, …, z_L), but this should be more clearly stated.\n2) It is counter-intuitive to define the inference model before having defined the generative model structure, perhaps the authors should consider changing the presentation order.\n3) Was the VAE in VAE+SVM also trained with lambda-annealing?\n4) How does a simple MLP classifier compares to the models on Table 1 and 2?\n5) It seems that, what is called beta-VAE here is the same model HCDVAE but trained with a lambda that anneals to a value different than one (the value of beta). In this case what is the value it terminates? How was that value chosen?\n6) The authors used 3 stochastic layers, how was that decided? Is there a substantial difference in performance compared to 1 and 2 stochastic layers?\n7) How do the different models behave in terms train vs test set likelihoods. Was there overfitting detected for some settings? How does the choice of the MCC threshold affects train/test likelihoods? 
\n8) Have the authors compared explicit marginalizing y with using the Gumbel-trick?\n\nOther related work:\n\nA few other papers that have explored discrete latent variables as a way to build more structured VAEs are worth mentioning/referring to:\n\n[1] Dilokthanakul N, Mediano PA, Garnelo M, Lee MC, Salimbeni H, Arulkumaran K, Shanahan M. Deep unsupervised clustering with gaussian mixture variational autoencoders. arXiv preprint arXiv:1611.02648. 2016 Nov 8.\n\n[2] Goyal P, Hu Z, Liang X, Wang C, Xing E. Nonparametric Variational Auto-encoders for Hierarchical Representation Learning. arXiv preprint arXiv:1703.07027. 2017 Mar 21.\n", "We thank the reviewers for their feedback. We chose to respond with a top-level comment as some concerns were shared by the reviewers.\n\nRegarding the novelty of the paper we felt that we achieved good results in two difficult real world data sets which relate to an important real world problem. We tried to present deep generative modeling as a viable solution to a scenario that is all too frequent in most real world settings (i.e. significantly imbalanced data sets). \n\nDuring our experiments we noticed that explicit marginalization over the discrete variable was not in fact able to separate the two manifolds of interest. In fact the model \"overfitted\" the predominant class in the data set, completely ignoring the discrete latent variable. This phenomenon is consistent with previous analysis (please see https://openreview.net/forum?id=rydQ6CEKl and http://ruishu.io/2016/12/25/gmvae/). Thus, introducing the Gumbel-Softmax trick was not at all an effort to introduce artificial novelty but a choice made out of necessity. We agree that a comparison between the two approaches would be illuminating it is not entirely clear why the relaxed continuous density yields better predictive performance than the discrete mass, which seems to remain completely uninformative throughout training. We are still running experiments and considering information theoretic tools to analyze this phenomenon.\n\nThe number of stochastic layers was chosen with the nature of the task at hand in mind, i.e. we tried to maximize predictive performance, and so while 1 stochastic layer of 128 units was enough to achieve around 0.88 for TPR and 0.91 for TNR for the first data set and 0.9 for TPR and 0.93 for TNR, we increased the number of layers to optimal results. A greater number of layers, while still trainable did not yield better performance. We would also like to note that because of the above our model is much less sensitive to noise, as similar configurations of hyperparameters yielded similar results, which is not the case for the clustering baselines we compared it against.\n\nWe included an MLP classifier in the revision of our paper to better highlight the merit of generative modeling and stochastic features against deterministic mappings induced by training with labeled data in scenarios of significant class imbalance. As expected the classifier \"overfitted\" the predominant class in the data set.\n\nThe MCC early stopping criterion puts a limit on generative performance. I.e. if the model is trained for more iterations it can reach better log likelihood scores, however we note that according to our experiments, higher log-likelihood scores do not imply good predictive performance, which is also a reason we omitted them from the paper, since we focus on classification, rather than generative performance. 
Having said that, the model was not found to overfit the data either in terms of predictive or generative performance.\n\nbeta-VAE proposes fixing the \\beta term to a constant value that is greater than 1, in an effort to encourage the learning of more efficient latent codes. I.e. it is not annealed. In our experiments we tried values in {5, 25, 50, 100, 250, 500}; however, values greater than 5 yielded no improvement in discriminative performance.\n\nThe threshold was chosen to be at 0.5 since we make the assumption that the Bernoulli probability represents the positive case (i.e. 1). This can be easily understood if one thinks about it in terms of a categorical variable with 2 components, each representing a diagnosis outcome (i.e. p(K=k1) = \\theta and p(K=k2) = 1 - \\theta).\n" ]
[ 4, 4, 4, -1 ]
[ 4, 4, 5, -1 ]
[ "iclr_2018_ryb83alCZ", "iclr_2018_ryb83alCZ", "iclr_2018_ryb83alCZ", "iclr_2018_ryb83alCZ" ]
iclr_2018_SkERSm-0-
Preliminary theoretical troubleshooting in Variational Autoencoder
What would be learned by a variational autoencoder (VAE), and what influences the disentanglement of a VAE? This paper tries to preliminarily address VAE's intrinsic dimension, real factor, disentanglement and indicator issues theoretically in the idealistic situation, and the implementation issue practically, from a noise modeling perspective, in the realistic case. On the intrinsic dimension issue, due to information conservation, the idealistic VAE learns, and only learns, the intrinsic factor dimension. Besides, as suggested by the mutual information separation property, the constraint induced by the Gaussian prior on the VAE objective encourages information sparsity in dimension. On the disentanglement issue, subsequently, a clarification of disentanglement is made in this paper, inspired by the information conservation theorem. On the real factor issue, due to factor equivalence, the idealistic VAE possibly learns any factor set in the equivalence class. On the indicator issue, the behavior of the current disentanglement metric is discussed, and several performance indicators regarding disentanglement and generating influence are subsequently proposed to evaluate the performance of the VAE model and to supervise the used factors. On the implementation issue, experiments under noise modeling and constraints empirically verify the theoretical analysis and also show their own characteristics in pursuing disentanglement.
rejected-papers
The reviewers agreed that the paper was too long (more than twice the recommended page limit not counting the appendix) and difficult to follow. They also pointed out that its central idea of learning the noise distribution in a VAE was not novel. While the shortened version uploaded by the authors looks like a step in the right direction, it was not sufficient to convince the reviewers.
train
[ "Hk-DIMdez", "r1-zOIFgM", "SJcdJ0tez", "SkljvChQz", "S1XZ-gBGG", "HJ3HXnNZM", "rJJokxrMf", "S1wuQeBMf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author" ]
[ "This paper studies the importance of the noise modelling in Gaussian VAE. The original Gaussian VAE proposes to use the inference network for the noise that takes latent variables as inputs and outputs the variances, but most of the existing works on Gaussian VAE just use fixed noise probably because the inference network is hard to train. In this paper, instead of using the fixed noise or inference network for the noise, the authors proposed to train the noise using Empirical-Bayes like fashion. The algorithm to train noise level for the single Gaussian decoder and mixture of Gaussian decoder is presented, and the experiments show that fitting the noise actually improves the ELBO and enhances the ability to disentangle latent factors.\n \nI appreciate the importance of noise modeling, but not sure if the presented algorithm is a right way to do it. The proposed algorithm assumes the Gaussian likelihood with homoscedastic noise, but this is not the case for many real-world data (MNIST and Color images are usually modelled with Bernoulli likelihood). The update equations for noises rely on the simple model structure, and this may not hold for the arbitrary complex likelihood (or implicit likelihood case). In my personal opinion, making the inference network for the noise to be trainable would be more principled way of solving the problem.\n \nThe paper is too long (30 pages) and dense, so it is very hard to read and understand the whole stuff. Remember that the ‘recommended’ page limit is 8 pages. The proposed algorithm was not compared to the generative models other than the basic VAE or beta-VAE.", "This paper proposes to modify how noise factors are treated when developing VAE models. For example, the original VAE work from (Kingma and Welling, 2013) applies a deep network to learn a diagonal approximation to the covariance on the decoder side. Subsequent follow-up papers have often simplified this covariance to sigma^2*I, where sigma^2 is assumed to be known or manually tuned. In contrast, this submission suggests either treating sigma^2 as a trainable parameter, or else introducing a more flexible zero-mean mixture-of-Gaussians (MoG) model for the decoder noise. These modeling adaptations are then analyzed using various performance indicators and empirical studies.\n\nThe primary issues I have with this work are threefold: (i) The paper is not suitably organized/condensed for an ICLR submission, (ii) the presentation quality is quite low, to the extent that clarity and proper understanding are jeopardized, and (iii) the novelty is limited. Consequently my overall impression is that this work is not yet ready for acceptance to ICLR.\n\nFirst, regarding the organization, this submission is 19 pages long (*excluding* references and appendices), despite the clear suggestion in the call for papers to limit the length to 8 pages: \"There is no strict limit on paper length. However, we strongly recommend keeping the paper at 8 pages, plus 1 page for the references and as many pages as needed in an appendix section (all in a single pdf). The appropriateness of using additional pages over the recommended length will be judged by reviewers.\" In the present submission, the first 8+ pages contain minimal new material, just various background topics and modified VAE update rules to account for learning noise parameters via basic EM algorithm techniques. There is almost no novelty here. 
In my mind, this type of well-known content is in no way appropriate justification for such a long paper submission, and it is unreasonable to expect reviewers to wade through it all during a short review cycle.\n\nSecondly, the presentation quality is simply too low for acceptance at a top-tier international conference (e.g., it is full of strange sentences like \"Such amelioration facilitates the VAE capable of always reducing the artificial intervention due to more proper guiding of noise learning.\" While I am sympathetic to the difficulties of technical writing, and realize that at times sufficiently good ideas can transcend local grammatical hiccups, my feeling is that, at least for now, another serious pass of editing is seriously needed. This is especially true given that it can be challenging to digest so many pages of text if the presentation is not relatively smooth.\n\nThird and finally, I do not feel that there is sufficient novelty to overcome the issues already raised above. Simply adapting the VAE decoder noise factors via either a trainable noise parameter or an MoG model represents an incremental contribution as similar techniques are exceedingly common. Of course, the paper also invents some new evaluation metrics and then applies them on benchmark datasets, but this content only appears much later in the paper (well after the soft 8 page limit) and I admittedly did not read it all carefully. But on a superficial level, I do not believe these contributions are sufficient to salvage the paper (although I remain open to hearing arguments to the contrary).", "This paper attempts to improve the beta-VAE (Higgins et al, 2017) by removing the trade-off between the quality of disentanglement in the latent representation and the quality of the reconstruction. The authors suggest doing so by explicitly modelling the noise of the reconstructed image Gaussian p(x|z). The authors assume that VAEs typically model the data using a Guassian distribution with a fixed noise. This, however, is not the case. Since the authors are trying to address a problem that does not actually exist, I am not sure what the contributions of the paper are. \n\nApart from the major issue outlined above, the paper also makes other errors. For example, it suggests using D_KL(q(z)||p(z)) as a measure of disentanglement, with lower values being indicative of better disentanglement. This, however, is incorrect, since one can have tiny D_KL by encoding all the information into a single latent z_i. Such a representation would be highly entangled while still satisfying all of the conditions the authors propose for a disentangled representation. \n\nGiven the points outlined above and the fact that the paper is hard to read and is excessively long, I do not believe it should be accepted.", "(1) we condense our paper from 19 pages to 10 pages. The remain contents are the subset of the original one. \n\n(2) Due to the length limitation, we shift emphasis from noise modeling to the discussion on the intrinsic properties of VAE and troubleshooting. In particular, the noise modeling is viewed as a crucial part of implementation issue. The following issues, also illustrated in the original version, become the focus of this revision paper: \n\n A. Intrinsic dimension Issue: \"Could the VAE learn the intrinsic number of factors underlying the data? \n B. Disentanglement Issue: \"What are need and range induced by word disentanglement?\"\n C. 
Real Factor Issue: \"Could the VAE learn the real generating factor underlying the data or just some fantasies?\n D. Indicator Issue: \"Could the effectiveness of current disentanglement metric be guaranteed?\"\n E. Implementation Issue: \"Could the aforementioned analysis be instructive in real implementation?'\n\n(3) Some original discussions on the noise modeling algorithms and related work were moved into the appendices to guarantee the reader can still get access to the MoG-noise VAE algorithm.\n\n(4) Some original experiment results, discussing the behaviors of MoG-noise VAE and Network Parameterized Noise VAE, are moved to appendices. The programming on the calculation of indicators is found wrong and we have corrected it and redone the relevant experiments. The details can be found in the appendices. ", "We’d like to thank the reviewer for their making effort to reviewing and providing helpful suggestions although they didn't provide fair assessments of our contribution, especially the important content which appears later that used to reveal some basic facts and behaviors of idealistic VAE as well as our indicators. We have made a number of changes to address them.\n\nA.\tWe condense the original paper into 10 pages. We also try to reduce the number of strange sentences.\n\nB.\tWe weaken our discussion on noise modeling due to the limitation of the paper length and strengthen the theoretical troubleshooting of VAE's properties and they are listed below\n\n 1.\tIntrinsic dimension Issue: \"Could the VAE learn the intrinsic number of factors underlying the data?\nOur paper: Yes, idealistic VAE learns and only learns the intrinsic factor dimension and the VAE objective induced by the Gaussian prior also encourages the information sparsity in dimension which is contributing to the learn the intrinsic dimension.\nBesides, in real implementations, the conclusion is also instructive if the noise is proper modeling and the disentanglement(clarified in our paper) is achieved to some extent.\n\n 2.\tDisentanglement Issue: \"What are need and range induced by word disentanglement?\"\nWe provide the clarification according to information conservation theorem:\nthe learned the factors are close to being independent.\nthe factors incline to generate the oracle signal and to be inferred perfectly from the oracle signal through a continuous procedure/mapping.\n\n 3.\tReal Factor Issue: \"Could the VAE learn the real generating factor underlying the data or just some fantasies?\"\nWe show that idealistic VAE possibly learn any factors set in the equivalence class. Besides, the experiment results also suggest that the VAE's factor equivalence generally exist.\n\n 4.\tIndicator Issue: \"Could the effectiveness of current disentanglement metric be guaranteed?\"\nWe show that the current disentanglement introduced by (beta-VAE) is based on \"simulated factors\" while idealistic VAE possibly learns any factor set in equivalence class induced by the \"simulated factors\". Hence, that metric may work sometimes and suffer instability among different trials.\nWe further introduce some indicator regarding the mutual information I(x;z) and Dkl(q(z)||p(z)) which provide the assessment to the determination of ``used factors\" and to the disentanglement.\n\n 5.\tImplementation Issue: \"Could the aforementioned analysis be instructive in real implementation?'\nWe introduce noise modeling to relax the consideration of the real situation. 
The experiment results empirically testify the knowledge derived from the idealistic case could be instructive in the real situation. They also demonstrate own characteristic of noise modeling in pursuing the disentanglement.\n\nC.\tDespite the theoretical discussion on the intrinsic properties of VAE, if we just discuss the novelty of noise modeling of VAE alone, we don't think it is limited. If you find different noise assumptions/specifications just significantly influence the disentanglement you will believe it.", "This comment is an illustration of our perspective and a quick response to the reviewers’ counterexample in their second paragraph comment. We are trying to show that counterexample doesn’t exist in the idealistic situation.\n\nExample:\nSuppose the input X follows a 2 dimension independent unit Gaussian. Let us say we want to encode X into only one dimension unit Gaussian latent Z and decode it back to X. \n\nDiscussion:\nTo simplify the situation, we could first consider the idealistic encoding and decoding procedure that the q(z|x) =\\delta(z=f(x)) and p(x|z)=\\delta(x=g(z)) are the two deterministic procedures. [correpsonds to no-information-loss channel case]\n\nThen we have z=f(g(z)) for all z in R^1 ,and x=g(f(x)) for all x in R^2.\n\nIf we further assume, the encoder f and decoder g are both continuous mappings (It's innocuous since the continuity of a mapping is a weak condition and mappings induced by the neural network are typically continuous.), then f and g are Homeomorphism mappings. However, R^1 and R^2 spaces have different topological structure and there thus is no Homeomorphism mapping between those two spaces. This leads to the contradiction. \n\nConclusion:\nIn short, roughly, in the idealistic situation, there is no way to encode 2 dim Gaussian into 1 dim Gaussian and then perfectly decode them back through continuous procedures.\n\nRelation to our paper:\nThe aforementioned argument can also be found in the information conservation theorem in Intrinsic Dimension Issue(section 3) in the latest paper. \n", "We thanks the reviewer for their work. However, we're afraid they may have misunderstood the point of our paper and didn't provide fair assessments of our contribution. We hope our responses below and the comments of the other reviewers may help clarify the scope of our work and its significance.\n\nAccording to your suggestion that paper is too long, we amend the length of our paper from 19 pages to 10. In order to achieve this and keeping it still comprehensive and informative, we have to weaken the discussion on noise modeling and put more emphasis on the intrinsic properties of VAE. We hope our amendment can increase the information channel capacity between the proposed ideal and our readers and provide better and more friendly reading experience.\n\nA.\tWe understand your first consideration that there exist some implementations under other noise assumptions including Bernoulli distribution for two-point valued data and some other more sophisticated ones with specific oriented domain knowledge. Some of them might already enable the parameter of noise to be learned. However, also in many implementations on real-valued data, many papers just simplified Gaussian assumption to be sigma^2 I where the sigma^2 is either assumed to be known or to be manually tuned. In particular, the tutorial on VAE(https://arxiv.org/abs/1606.05908), which is a really respectable work and is also my first contact with VAE model, is also under this noise assumptions. 
And we personally believe that we are the first one publicly emphasizes and demonstrates on \"noise modeling influences the disentanglement\" though we have also shown some other benefit could be induced by noise modeling in our original paper version.\n\nB.\tWe thank for your intuitive counterexample to our clarification on disentanglement. We have proved several theorems especially the information conservation theorem[e.g. two independent Gaussian and one Gaussian cannot be the generating factor set of each other.] in Intrinsic Dimension issue(section 3 in the latest version) to exclude this counterexample in the idealistic case and our experiment results also turn to support the instruction suggested by the theorem in the real implementation. In order to theoretically illustrate our perspective regarding the counterexample that review proposed, more compactly and informatively, we also add an auxiliary deduction in the latter comment. We will be grateful if the reviewer or someone else can further provide some facts and evidence from the theoretical perspective or experimental perspective to show the existence or the probability of the existence of that counterexample in the real situation.\n\nC.\tYou also mention there might be some other theoretical errors. We are grateful if you can list them. We are open to the opinion and argument from the other side and believe those arguments can improve the direction of scientific research and accelerate our mission to the AI. This will be helpful to improve our work but also good for the whole community.", "We’d like to thank the reviewer for affirmation of noise modeling and their reviews.\n\n1.\tWe do agree that using the implicit generative model such as GANs might be a promising way to learn the noise factors. However, our work focus on the basic framework of VAE and its representation learning properties and capabilities theoretically and practically through noise modeling. Personally, other generative models currently might not be as scalable as Gaussian Prior VAE under proper noise modeling in learning disentangled representation although we do not exclude the possibility of GAN and other implicit generative models could succeed in this subfield in the future. For example, though we did not implement the adversarial loss in the pixel reconstruction domain, we did implement the adversarial loss in the latent space to encourage the disentanglement( they are useful), however, we find even under the same hyperparameter setting their performance can be found really unstable. In contrast, several benefits of VAE are its stability of training model under maximum likelihood principle and the natural inference/encoder capability. By the way, our theoretical analysis on VAE could also be transferable and instructive to the analysis of some other generative models but those theoretical studies and relevant comparison might be left for the future work.\n\n2.\tWe personally believe our noise modeling can be extended to many real-world data and could be better than the Bernoulli likelihood modeling. Admittedly, there could be plenty ways people perceive and imagine the noise and data. We observe that you can view the [0,1] range as the pixel-wise probability but still inherits the \"cross-entropy\" similarity metric to take place of \"E_{q(z|x)}p(x|z)\" for \"E_{q(z|x)}cross-entropy(x||x_z)\". 
However, this implementation more or less ruins the maximum likelihood principle and the variational procedure, and therefore this implementation disables the usage and intuition of probability knowledge to some extent. In particular, if you want to further discuss the noise on the probability mass, everything could be awkward. As the result, it may need a new theory to enable the knowledge to be accumulated if those assumptions or optimizations are undertaken.\n\n3.\tWe condense the original paper into 10 pages (*excluding* references and appendices). The appendices are necessary to enable the reader to assess our proofs and details of experiments and algorithms. The discussion on the noise modeling is weakened due to the length limitation and emphasis is put on the theoretical discussion of the VAE properties. However, readers can still access the algorithm of MoG-noise VAE in appendices." ]
[ 5, 3, 2, -1, -1, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, -1, -1 ]
[ "iclr_2018_SkERSm-0-", "iclr_2018_SkERSm-0-", "iclr_2018_SkERSm-0-", "iclr_2018_SkERSm-0-", "r1-zOIFgM", "SJcdJ0tez", "SJcdJ0tez", "Hk-DIMdez" ]
iclr_2018_r1kj4ACp-
Understanding Deep Learning Generalization by Maximum Entropy
Deep learning achieves remarkable generalization capability with an overwhelming number of model parameters. Theoretical understanding of deep learning generalization has received recent attention yet remains not fully explored. This paper attempts to provide an alternative understanding from the perspective of maximum entropy. We first derive two feature conditions under which softmax regression strictly applies the maximum entropy principle. A DNN is then regarded as approximating the feature conditions with multilayer feature learning, and is proved to be a recursive solution towards the maximum entropy principle. The connection between DNNs and maximum entropy well explains why typical designs such as shortcuts and regularization improve model generalization, and provides instructions for future model development.
rejected-papers
The reviewers are in agreement that the paper is a bit hard to follow and incorrect in places, including some claims not supported by experiments.
train
[ "HkBIjt2xz", "SyDSqb6gz", "Sy7fJuCxM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Summary:\n\nThis paper presents a derivation which links a DNN to recursive application of\nmaximum entropy model fitting. The mathematical notation is unclear, and in\none cases the lemmas are circular (i.e. two lemmas each assume the other is\ncorrect for their proof). Additionally the main theorem requires complete\nindependence, but the second theorem provides pairwise independence, and the\ntwo are not the same.\n\nMajor comments:\n\n- The second condition of the maximum entropy equivalence theorem requires\n that all T are conditionally independent of Y. This statement is unclear, as\nit could mean pairwise independence, or it could mean jointly independent\n(i.e. for all pairs of non-overlapping subsets A & B of T I(T_A;T_B|Y) = 0).\nThis is the same as saying the mapping X->T is making each dimension of T\northogonal, as otherwise it would introduce correlations. The proof of the\ntheorem assumes that pairwise independence induces joint independence and this\nis not correct.\n\n- Section 4.1 makes an analogy to EM, but gradient descent is not like this\n process as all the parameters are updated at once, and only optimised by a\nsingle (noisy) step. The optimisation with respect to a single layer is\nconditional on all the other layers remaining fixed, but the gradient\ninformation is stale (as it knows about the previous step of the parameters in\nthe layer above). This means that gradient descent does all 1..L steps in\nparallel, and this is different to the definition given.\n\n- The proofs in Appendix C which are used for the statement I(T_i;T_j) >=\n I(T_i;T_j|Y) are incomplete, and in generate this statement is not true, so\nrequires proof.\n\n- Lemma 1 appears to assume Lemma 2, and Lemma 2 appears to assume Lemma 1.\n Either these lemmas are circular or the derivations of both of them are\nunclear.\n\n- In Lemma 3 what is the minimum taken over for the left hand side? Elsewhere\n the minimum is taken over T, but T does not appear on the left hand side.\nExplicit minimums help the reader to follow the logic, and implicit ones\nshould only be used when it is obvious what the minimum is over.\n\n- In Lemma 5, what does \"T is only related to X\" mean? The proof states that\n Y -> T -> X forms a Markov chain, but this implies that T is a function of\nY, not X.\n\nMinor comments:\n\n- I assume that the E_{P(X,Y)} notation is the expectation of that probability\n distribution, but this notation is uncommon, and should be replaced with a\nmore explicit one.\n\n- Markov is usually romanized with a \"k\" not a \"c\".\n\n- The paper is missing numerous prepositions and articles, and contains\n multiple spelling mistakes & typos.", "The paper aims to provide a view of deep learning from the perspective of maximum entropy principle. I found the paper extremely hard to follow and seemingly incorrect in places. Specifically:\na) In Section 2, the example given to illustrate underfitting and overfitting states that the 5-order polynomial obviously overfits the data. However, without looking at the test data and ensuring the fact that it indeed was not generated by a 5-order polynomial, I don’t see how such a claim can be made.\nb) In Section 2 the authors state “Imposing extra data hypothesis actually violates the ME principle and degrades the model to non-ME model.” … Statements like this need to be made much clearer, since imposing feature expectation constraints (such as Eq. (3) in Berger et al. 
1996) is a perfectly legitimate construct in ME principle.\nc) The opening paragraph of Section 3 is quite unclear; phrases like “how to identify the equivalent feature constraints and simple models” need to be made precise; it is not clear to me what the authors mean by this.\nd) I’m not able to really follow Definition 1, perhaps due to unclear notation. It seems to state that we need to have P(X,Y) = P(X,\\hat{Y}), and if that’s the case it is not clear what more can be accomplished by maximizing conditional entropy H(\\hat{Y}|X). Also, there is a spurious w_i in Definition 1.\ne) Definition 2. Not clear what is meant by notation E_{P(T,Y)}.\nf) Definition 3 uses t_i(x) without defining those, and I think those are different from t_i(x) defined in Definition 2.\n\nI think the paper needs to be substantially revised and clarified before it can be published at ICLR.", "The presentation of the paper is crisp and clear. The problem formulation is explained clearly and it is well motivated by theorems. It is a theoretical paper and there is no experimental section. This is the only drawback for the paper, as the claims are not supported by any experimental section. The author could add some experiments to support the idea presented in the paper." ]
[ 2, 3, 6 ]
[ 3, 3, 2 ]
[ "iclr_2018_r1kj4ACp-", "iclr_2018_r1kj4ACp-", "iclr_2018_r1kj4ACp-" ]
iclr_2018_B1X4DWWRb
Learning Weighted Representations for Generalization Across Designs
Predictive models that generalize well under distributional shift are often desirable and sometimes crucial to machine learning applications. One example is the estimation of treatment effects from observational data, where a subtask is to predict the effect of a treatment on subjects that are systematically different from those who received the treatment in the data. A related kind of distributional shift appears in unsupervised domain adaptation, where we are tasked with generalizing to a distribution of inputs that is different from the one in which we observe labels. We pose both of these problems as prediction under a shift in design. Popular methods for overcoming distributional shift are often heuristic or rely on assumptions that are rarely true in practice, such as having a well-specified model or knowing the policy that gave rise to the observed data. Other methods are hindered by their need for a pre-specified metric for comparing observations, or by poor asymptotic properties. In this work, we devise a bound on the generalization error under design shift, based on integral probability metrics and sample re-weighting. We combine this idea with representation learning, generalizing and tightening existing results in this space. Finally, we propose an algorithmic framework inspired by our bound and verify its effectiveness in causal effect estimation.
rejected-papers
The submission provides an interesting way to tackle the so-called distributional shift problem in machine learning. One familiar example is unsupervised domain adaptation. The main contribution of this work is deriving a bound on the generalization error/risk for a target domain as a combo of re-weighted empirical risk on the source domain and some discrepancy between the re-weighted source domain and the target domain. The authors then use this to formulate an objective function. The reviewers generally liked the paper for its theoretical results, but found the empirical evaluation somewhat lacking, as do I. Especially the unsupervised domain adaptation results are very toy-ish in nature (synthetic data), whereas the literature in this field, cited by the authors, does significantly larger scale experiments. I am unsure as to how much value I can place in the IHDP results since I am not familiar with the benchmark (and hence my lower confidence in the recommendation). Finally, I am not very convinced that this is the appropriate venue for this work, despite containing some interesting results.
train
[ "H1HywYblM", "ByozI_rlG", "ryOA0TKgG", "BkGC4OpQM", "r1pGQ_aQM", "r1HF7_pQM", "Skwvf_pmM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "The paper proposes a novel way of causal inference in situations where in causal SEM notation the outcome Y = f(T,X) is a function of a treatment T and covariates X. The goal is to infer the treatment effect E(Y|T=1,X=x) - E(Y|T=0,X=x) for binary treatments at every location x. If the treatment effect can be learned, then forecasts of Y under new policies that assign treatment conditional on X will still \"work\" and the distribution of X can also change without affecting the accuracy of the predictions. \n\nWhat is proposed seems to be twofold:\n- instead of using a standard inverse probability weighting, the authors construct a bound for the prediction performance under new distributions of X and new policies and learn the weights by optimizing this bound. The goal is to avoid issues that arise if the ratio between source and target densities become very large or small and the weights in a standard approach would become very sparse, thus leading to a small effective sample size.\n- as an additional ingredient the authors also propose \"representation learning\" by mapping x to some representation Phi(x). \nThe goal is to learn the mapping Phi (and its inverse) and the weighting function simultaneously by optimizing the derived bound on the prediction performance. \n\nPros: \n- The problem is relevant and also appears in similar form in domain adaptation and transfer learning. \n- The derived bounds and procedures are interesting and nontrivial, even if there is some overlap with earlier work of Shalit et al. \n\nCons:\n- I am not sure if ICLR is the optimal venue for this manuscript but will leave this decision to others. \n- The manuscript is written in a very compact style and I wish some passages would have been explained in more depth and detail. Especially the second half of page 5 is at times very hard to understand as it is so dense. \n- The implications of the assumptions in Theorem 1 are not easy to understand, especially relating to the quantities B_\\Phi, C^\\mathcal{F}_{n,\\delta} and D^{\\Phi,\\mathcal{H}}_\\delta. Why would we expect these quantities to be small or bounded? How does that compare to the assumptions needed for standard inverse probability weighting? \n- I appreciate that it is difficult to find good test datasets for evaluating causal estimator. The experiment on the semi-synthetic IHDP dataset is ok, even though there is very little information about its structure in the manuscript (even basic information like number of instances or dimensions seems missing?). The example does not provide much insight into the main ideas and when we would expect the procedure to work more generally.\n\n\n\n\n\n\n\n\n\n", "This paper proposes a deep learning architecture for joint learning of feature representation, a target-task mapping function, and a sample re-weighting function. Specifically, the method tries to discover feature representations, which are invariance in different domains, by minimizing the re-weighted empirical risk and distributional shift between designs.\nOverall, the paper is well written and organized with good description on the related work, research background, and theoretic proofs. \n\nThe main contribution can be the idea of learning a sample re-weighting function, which is highly important in domain shift. 
However, as stated in the paper, since the causal effect of an intervention T on Y conditioned on X is one of main interests, it is expected to add the related analysis in the experiment section.", "Summary:\nThis paper proposes a new approach to tackle the problem of prediction under\nthe shift in design, which consists of the shift in policy (conditional\ndistribution of treatment given features) and the shift in domain (marginal \ndistribution of features).\n\nGiven labeled samples from a source domain and unlabeled samples from a target\ndomain, this paper proposes to minimize the risk on the target domain by \njointly learning the shift-invariant representation and the re-weighting \nfunction for the induced representations. According to Lemma 1 and its finite\nsample version in Theorem 1, the risk on the target domain can be upper bounded\nby the combination of 1) the re-weighted empirical risk on the source domain; \nand 2) the distributional discrepancy between the re-weighted source domain and\nthe target domain. These theoretical results justify the objective function\nshown in Equation 8. \n\nExperiments on the IHDP dataset demonstrates the advantage of the proposed\napproach compared to its competing alternatives.\n\nComments:\n1) This paper is well motivated. For the task of prediction under the shift in\ndesign, shift-invariant representation learning (Shalit 2017) is biased even in\nthe inifite data limit. On the other hand, although re-weighting methods are\nunbiased, they suffer from the drawbacks of high variance and unknown optimal\nweights. The proposed approach aims to overcome these drawbacks.\n\n2) The theoretical results justify the optimization procedures presented in\nsection 5. Experimental results on the IHDP dataset confirm the advantage of\nthe proposed approach.\n\n3) I have some questions on the details. In order to make sure the second \nequality in Equation 2 holds, p_mu (y|x,t) = p_pi (y|x,t) should hold as well.\nIs this a standard assumption in the literature?\n\n4) Two drawbacks of previous methods motivate this work, including the bias of\nrepresentation learning and the high variance of re-weighting. According to\nLemma 1, the proposed method is unbiased for the optimal weights in the large\ndata limit. However, is there any theoretical guarantee or empirical evidence\nto show the proposed method does not suffer from the drawback of high variance?\n\n5) Experiments on synthetic datasets, where both the shift in policy and the\nshift in domain are simulated and therefore can be controlled, would better \ndemonstrate how the performance of the proposed approach (and thsoe baseline \nmethods) changes as the degree of design shift varies. \n\n6) Besides IHDP, did the authors run experiments on other real-world datasets, \nsuch as Jobs, Twins, etc?", "We thank Reviewer 1 for their comments.", "Q: In order to make sure the second equality in Equation 2 holds, p_mu (y|x,t) = p_pi (y|x,t) should hold as well. Is this a standard assumption in the literature?\n\nA: This is a version of the standard, so-called covariate shift assumption (see e.g. Shimodaira, 2000) often made in e.g. domain adaptation. This was referred to only as outcomes being \"stationary\" in Section 2, but this has been clarified.\n\nQ: Two drawbacks of previous methods motivate this work, including the bias of representation learning and the high variance of re-weighting. According to Lemma 1, the proposed method is unbiased for the optimal weights in the large data limit. 
However, is there any theoretical guarantee or empirical evidence to show the proposed method does not suffer from the drawback of high variance?\n\nA: The variance of our estimator due to the weighting is accounted for theoretically in our bound by the factor V_\\mu and controlled in practice by a penalty on the norm of the weights, see Section 5. A more uniform set of weights yield lower variance but increased bias due to design shift (measured by the IPM term). We have also added a synthetic experiment investigating this, see Section 6.1. \n\nQ: Experiments on synthetic datasets, where both the shift in policy and the shift in domain are simulated and therefore can be controlled, would better demonstrate how the performance of the proposed approach (and those baseline \n methods) changes as the degree of design shift varies. \n\nA: We have added a small synthetic experiment to highlight the behavior of our model under varying sample sizes, comparing to methods using importance sampling weights. This is complementary to varying design shift.\n\nQ: Besides IHDP, did the authors run experiments on other real-world datasets, such as Jobs, Twins, etc?\n\nA: The Twins experiment, as used by Louizos et al. 2017, was primarily created to evaluate methods for dealing with hidden confounding. This is not the focus of our method as we assume ignorability. We found that in the setting of weak hidden confounding (small proxy noise), the imbalance between “treatment groups” was relatively small, and additional balancing neither hurt nor helped. We did not run experiments on Jobs.\n", "Q: The manuscript is written in a very compact style and I wish some passages would have been explained in more depth and detail. Especially the second half of page 5 is at times very hard to understand as it is so dense. \n\nA: We have improved clarity throughout the paper. For page 5 (Theory) specifically, we have adding headings and explanatory comments to provide additional context. \n\nQ: The implications of the assumptions in Theorem 1 are not easy to understand, especially relating to the quantities B_\\Phi, C^\\mathcal{F}_{n,\\delta} and D^{\\Phi,\\mathcal{H}}_\\delta. Why would we expect these quantities to be small or bounded? How does that compare to the assumptions needed for standard inverse probability weighting? \n\nA: We have added comments about the implications of Theorem 1. B_\\Phi is determined by the determinant of the Jacobian of the inverse representation \\Psi. For smooth invertible representations and an appropriate IPM, we can expect B to be bounded, but it could well be large. As long as we have sufficient overlap, there exists a weighting function to make the IPM term zero, (regardless of the scale of B). C and D are defined explicitly in the appendix and are standard sample complexity terms. Depending on application, Rademacher complexity, VC dimension, or covering numbers could be used for C. This has been clarified. For inverse probability weighting, our bound reduces to that of Cortes et al (2010) as the IPM term vanishes. For other weightings, the added assumption is that the loss lies in the family of functions determining the IPM. A larger family increases the changes of this being true, but loosens the bound in general.\n\nQ: I appreciate that it is difficult to find good test datasets for evaluating causal estimator. 
The experiment on the semi-synthetic IHDP dataset is ok, even though there is very little information about its structure in the manuscript (even basic information like number of instances or dimensions seems missing?). The example does not provide much insight into the main ideas and when we would expect the procedure to work more generally.\n\nA: The description of IHDP has been improved. We have also added a more targeted synthetic experiment (see above), that confirms our expectation that the usefulness of our method is largest when sample sizes are small. When sample sizes are large, more complex models can be fit and model misspecification can be reduced, thus reducing the usefulness of weighting methods in general. We have added a synthetic experiment in Section 6.1 to demonstrate this further.\n", "We thank all of the reviewers for their helpful comments and suggestions. Addressing these issues has increased the length of the manuscript, but we are confident that this is justified by the improved quality of the paper. We have responded to the concerns of the reviewers individually below. " ]
[ 5, 8, 7, -1, -1, -1, -1 ]
[ 3, 3, 4, -1, -1, -1, -1 ]
[ "iclr_2018_B1X4DWWRb", "iclr_2018_B1X4DWWRb", "iclr_2018_B1X4DWWRb", "ByozI_rlG", "ryOA0TKgG", "H1HywYblM", "iclr_2018_B1X4DWWRb" ]
iclr_2018_HJ4IhxZAb
Meta-Learning Transferable Active Learning Policies by Deep Reinforcement Learning
Active learning (AL) aims to enable training high performance classifiers with low annotation cost by predicting which subset of unlabelled instances would be most beneficial to label. The importance of AL has motivated extensive research, proposing a wide variety of manually designed AL algorithms with diverse theoretical and intuitive motivations. In contrast to this body of research, we propose to treat active learning algorithm design as a meta-learning problem and learn the best criterion from data. We model an active learning algorithm as a deep neural network that inputs the base learner state and the unlabelled point set and predicts the best point to annotate next. Training this active query policy network with reinforcement learning produces the best non-myopic policy for a given dataset. The key challenge in achieving a general solution to AL then becomes that of learner generalisation, particularly across heterogeneous datasets. We propose a multi-task dataset-embedding approach that allows dataset-agnostic active learners to be trained. Our evaluation shows that AL algorithms trained in this way can directly generalize across diverse problems.
rejected-papers
In general, this seems like a sensible idea, but in my opinion the empirical results do not show a very compelling margin between using *entropy* as an active learning selection criterion vs the proposed methods. The difference is small enough that in practice it is very hard for me to believe that many researchers would choose to use the meta-learning via deep RL method (given that they'd need to train on multiple datasets and tune REINFORCE which is not going to be obviously easy). For that reason I am inclined to reject the paper. In a follow-up version, I would heed the advice of Reviewer 1 and do more ablation analyses to understand the value of myopic vs non-myopic, cross-dataset vs. not, bandits vs RL, on the fly vs not (these are all intermingled issues). The relative lack of such analyses in the paper does not help in terms of it passing the bar.
train
[ "By1MecNBG", "HkaYd1yHM", "H1g6bb9gG", "SJEDvEvez", "rki3FqilM", "rklZUu6Xf", "SydBHdTXM", "rkXXBdTmz", "ByEkBd67G" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "Sorry about the confusion, this was our oversight. We will correct the inaccurate sentence. \n\nWe also agree T-LSA is relevant for comparison, and we are running the experiment now and will add it to the final version. To contrast them explicitly, we expect MAP-GAL to perform better: (i) Due to non-myopic RL learning, (ii) Explicit dataset-adaptation mechanism multi-task trained on multiple sources (auxiliary network), rather than simply training a linear model using previous dataset parameters as regulariser. This also means queries are not wasted doing learning on the target problem. (iii) A deep policy rather than linear ensemble weighting.", "Thanks for addressing some of the concerns. The following sentence, however, is factually inaccurate and deserves clarification.\n\n\"learning of cross-dataset generalisation which is not attempted at all in prior work such as Chu and Lin's ALBL\"\n\nI assume that the authors are talking about Hsu and Lin's ALBL, which did not attempt for cross-dataset generalisation. Nevertheless, the other work that is being confused here, Chu and Lin (2016)'s Transfer LSA, improves over ALBL with cross-dataset generalisation, albeit in a sequential setting with a bandit learner rather than an RL learner. In this sense, it is suggested that the authors compare with T-LSA to justify the difference between RL and bandit (and perhaps non-myopic versus myopic), and with ALBL to justify the need of cross-dataset (which is readily done by the authors). Also, T-LSA needs some on-the-fly adaptive learning with bandit on the new data set while the proposed RL approach does not. So it would be interesting to know their differences.\n", "The approach solves an important problem as getting labelled data is hard. The focus is on the key aspect, which is generalisation across heteregeneous data. The novel idea is the dataset embedding so that their RL policy can be trained to work across diverse datasets.\n\nPros: \n1. The approach performs well against all the baselines, and also achieves good cross-task generalisation in the tasks they evaluated on. \n2. In particular, they alsoevaluated on test datasets with fairly different statistics from the training datasets, which isnt very common in most meta-learning papers today, so it’s encouraging that the method works in that regime.\n\nCons: \n1. The embedding strategy, especially the representative and discriminative histograms, is complicated. It is unclear if the strategy is general enough to work on harder problems / larger datasets, or with higher dimensional data like images. More evidence in the paper for why it would work on harder problems would be great. \n2. The policy network would have to output a probability for each datapoint in the dataset U, which could be fairly large, thus the method is computationally much more expensive than random sampling. A section devoted to showing what practical problems could be potentially solved by this method would be useful.\n3. It is unclear to me if the results in table 3 and 4 are achieved by retraining from scratch with an RBF SVM, or by freezing the policy network trained on a linear SVM and directly evaluating it with a RBF SVM base learner.\n\nSignificance/Conclusion: The idea of meta-learning or learning to learn is fairly common now. While they do show good performance, it’s unclear if the specific embedding strategy suggested in this paper will generalise to harder tasks. 
\n\nComments: There’s lots of typos, please proof read to improve the paper.\n\nRevision: I thank the authors for the updates and addressing some of my concerns. I agree the computational budget makes sense for cross data transfer, however the embedding strategy and lack of larger experiments makes it unclear if it'll generalise to harder tasks. I update my review to 6. ", "This reviewer has found the proposed approach quite compelling, but the empirical validation requires significant improvements:\n1) you should include in your comparison Query-by- Bagging & Boosting, which are two of the best out-of-the-box active learning strategies\n2) in your empirical validation you have (arbitrarily) split the 14 datasets in 7 training and testing ones, but many questions are still unanswered:\n - would any 7-7 split work just as well (ie, cross-validate over the 14 domains)\n - do you what happens if you train on 1, 2, 3, 8, 10, or 13 domains? are the results significantly different? \n\nOTHER COMMENTS:\n- p3: both images in Figure 1 are labeled Figure 1.a\n- p3: typo \"theis\" --> \"this\" \n\nAbe & Mamitsuksa (ICML-1998). Query Learning Strategies Using Boosting and Bagging.", "Overview\n\nThe authors propose a reinforcement learning approach to learn a general active query policy from multiple heterogeneous datasets. The reinforcement learning part is based on a policy network, which selects the data instance to be labeled next. They use meta-learning on feature histograms to embed heterogeneous datasets into a fixed dimensional representation. The authors argue that policy-based reinforcement learning allows learning the criteria of active learning non-myopically. The experiments show the proposed approach is effective on 14 UCI datasets.\n\nstrength\n\n* The paper is mostly clear and easy to follow.\n* The overall idea is interesting and has many potentials.\n* The experimental results are promising on multiple datasets.\n* There are thorough discussion with related works.\n\nweakness\n\n* The graph in p.3 don't show the architecture of the network clearly.\n* The motivation of using feature histograms as embedding is not clear.\n* The description of the 2-D histogram on p.4 is not clear. The term \"posterior value\" sounds ambiguous.\n* The experiment sets a fixed budget of only 20 instances, which seems to be rather few in some active learning scenarios, especially for non-linear learners. Also, the experiments takes a fixed 20K iterations for training, and the convergence status (e.g. whether the accumulated gradient has stabilized the policy) is not clear.\n* Are there particular reasons in using policy learning instead of other reinforcement learning approaches?\n* The term A(Z) in the objective function can be more clearly described.\n* While many loosely-related works were surveyed, it is not clear why literally none of them were compared. There is thus no evidence on whether a myopic bandit learner (say, Chu and Lin's work) is really worse than the RL policy. There is also no evidence on whether adaptive learning on the fly is needed or not.\n* In Equation 2, should there be a balancing parameter for the reconstruction loss?\n* Some typos\n - page 4: some duplicate words in discriminative embedding session\n - page 4: auxliary -> auxiliary\n - page 7: tescting -> testing\n\n", "Thanks for the comments.\n\nHistogram Embedding Motivation: The general idea of a histogram-based embedding was inspired by Romero ICLR'17. We customised it for application to AL here. 
For example, the representative embedding encodes information about the spread of the dataset itself (unlabelled component) and the spread of the queries so far (labelled component). While the discriminative embedding contains information about how the current leaner certainty varies with the position of an instance within the spread of the dataset (through encoding posterior certainty jointly with position). \n\nConvergence: Yes the policy is stabilising. This is now illustrated in Appendix Sec 7.1, Fig 3.\n\nFixed budget: AL is mainly of interest in settings when a small budget must be carefully spent. We agree extending to budgets of 100 or more is within the interesting range. But we leave this for future work as we didn't have time to run these experiments yet. \n\n2D histogram: To clarify: Each feature dimension of input is encoded by a joint histogram counting: (1) The frequency of instances with a value of that feature within each bin and (2) The frequency of instances with a given posterior probability according to the base classifier so far (IE: binning on the base classifier's probability value between [0,1] that the given instance is class +1 vs class -1). We clarified this in Sec 3.1.\n\nExplain A(Z): This is the MSE of reconstruction of the instances by the autoencoder. Now clarified in Sec 3.2.\n\nQuantitative Comparison. Myopic/ALBL: The reviewer mentioned they would like evidence of RL policy benefit vs myopic bandit learner. We would like to reiterate that the mentioned point here is only one out of two contributions here. We perform non-myopic learning with RL, but the second contribution is the learning of cross-dataset generalisation which is not attempted at all in prior work such as Chu and Lin's ALBL. We have now added ALBL to the experimental comparison, (as well as QBB suggested by R3). We can see that MLP-GAL is clearly better than ALBL in the updated Tables 1 and 2. To understand why: Recall that ALBL performs bandit-based learning within-dataset without knowledge transfer. Its underlying bandit learner is designed for asymptotic performance. In the few-shot scenario of active learning, it suffers critically from the fundamental explore/exploit dilemma. ALBL must use the limited active queries to do exploration for learning. By the time it has explored enough to define a good learner, it has a small active query budget remaining to exploit this knowledge. This is particularly detrimental in our tough setting of only 20 queries per dataset. In contrast by doing cross-dataset transfer, our MLP-GAL completely avoids this limitation. It gets all its learning done on different source datasets and can go straight to \"exploiting\" when tested on a novel dataset. \n\nQuantitative comparison. Adaptive learning on the Fly: We assume the reviewer is referring to the meta-network in this question. The relevant comparison is then between MLP-GAL and Single-RL which is trained in a similar multi-task way, but excludes the meta-network that helps the policy adapt to new target datasets. We can see that MLP-GAL outperform Single-RL in the results (Tab 1, and more clearly in Tab 2). This is despite that the latter adds significantly more expert features in order to make it approximate a non-myopic RL-upgraded version of the expert feature-based supervised learner in Konyushkova et al, NIPS, 2017. \n\nRL Approach chosen: For simplicity we chose PG, as the simplest classic direct policy search method. 
More advanced methods (actor-critic, TRPO, etc) could be potentially used to further improve performance. We tried DQN-based Q-learning and it did not work as well due to being hard to stabilise the training. We believe Q-learning should also work in principle but may be more sensitive to tuning hyperparameters for good stability.\n\nEq 2: Yes there a potential parameter here that could be tuned to improve performance, but we did not use one. So the weight value is 1. Now clarified.\n\nTypos: Thanks. Corrected.", "Thanks for the feedback.\n\nComputational Expense: We agree our model is more expensive than random sampling at run-time, as are all other non-trivial active learners (Entropy/US, DFF, QUIRE, QBB, ALBL, etc). The standard cost bracket of most active learning methods is O(ND) each iteration for N instances and D dimensions. Ours is in this bracket along with the pervasive Entropy/US, and thus we can address problems of the same size as any standard active learner. DFF, for example, is a qualitatively more costly O(N^2 D), which makes it non-scalable to large problems. \n\nTraining our model is fairly costly, but by training a model capable of cross-dataset transfer, the idea is that retraining is rarely required.\n\nLinear and RBF SVM: In the RBF experiment table (Tab 3, 4 in submission. Tab 2 in revised paper), the results are based on completely retraining the policy network using a RBF-base learner. Since we did not meta-train for base-learner invariance, we would not expect it to generalise across changes of base learner. Meta-training for base learner invariance is an interesting avenue for future work.\n\nBigger Problems: We agree this is an exciting major test for going forward in this line of research. Given the other requested experiments, which required all our GPUs, we did not have time to address this in the available period. We think that the existing experiments, particularly with the revised updates already validates the basic concept. So we leave evaluation of bigger computer vision problems for future work. \n\nTypos: Sorry. We have proofread the paper and corrected them.", "Thanks for the comments.\n\nQBB: We chose the Query-by-Bagging variant to compare as it had slightly better performance in the original QBB paper. We now show QBB results in the empirical evaluation. We can see that QBB is competitive but does not outperform our learned method.\n\nCross-validation and dependence on number of train datasets: We agree these are interesting and important points to investigate. To improve our experiments accordingly, we have now: (1) cross-validated, treating all datasets as train/test in turn. (2) Repeated this for multiple train/test splits including 13/1 (leave-one-out), 7/7, 4/10 and 1/13 (single source training).\n\nCross-validation: The main results in Tab 1 (Lin SVM) and Tab 2 (RBF SVM) are now based on 13/1 leave-one-dataset cross-validation out. We see that the conclusions still hold: MLP-GAL outperforms alternatives on average, when testing across all 14 of the datasets. \n\nDependence on # of source datasets: This is now reported in Fig 2(b). We can see that increasing from 1 up to 13 train sets reduces the training performance (harder to simultaneously overfit to a larger suite of training datasets). Simultaneously it increases the testing performance (being forced to fit a wider suite of training data forces the policy to represent more dataset-agnostic knowledge, and thus improves generalisation). 
See also the new Appendix Sec 7.2 for further discussion.\n\nTypos: Thanks. Now fixed.", "Dear Reviewers and Chairs,\n\nThanks to the reviewers for their comments.\n\nThe reviewers generally found our idea interesting, but had some questions and suggestions for improvements of the experiments.\n\nWe have revised our paper based on some of the reviewers' suggestions. Besides small clarifications and minor corrections, the main changes are as follows: \n1. Based on R1 and R3's suggestion we have added two new baselines for comparison: ALBL & QBB. \n2. R3 suggested cross-validation to avoid the bias of a fixed train/test split and also asked about the dependence on the number of training datasets. To address this point thoroughly, we: (i) Re-organised the main experiment around leave-one-dataset out cross-validation, rather than a fixed 7/7 split. So now every dataset occurs as both a training and testing set. See updated Tab 1 & 2 in revised version. (ii) Explored the dependence on the number of training datasets. See Fig 2b in the revised version and the new Sec 7.2 for further discussion.\n3. R1 wondered about the convergence process. This is now illustrated in Sec 7.1, Fig 3.\n" ]
[ -1, -1, 6, 7, 6, -1, -1, -1, -1 ]
[ -1, -1, 3, 4, 4, -1, -1, -1, -1 ]
[ "HkaYd1yHM", "rklZUu6Xf", "iclr_2018_HJ4IhxZAb", "iclr_2018_HJ4IhxZAb", "iclr_2018_HJ4IhxZAb", "rki3FqilM", "H1g6bb9gG", "SJEDvEvez", "iclr_2018_HJ4IhxZAb" ]
iclr_2018_SktLlGbRZ
CyCADA: Cycle-Consistent Adversarial Domain Adaptation
Domain adaptation is critical for success in new, unseen environments. Adversarial adaptation models applied in feature spaces discover domain invariant representations, but are difficult to visualize and sometimes fail to capture pixel-level and low-level domain shifts. Recent work has shown that generative adversarial networks combined with cycle-consistency constraints are surprisingly effective at mapping images between domains, even without the use of aligned image pairs. We propose a novel discriminatively-trained Cycle-Consistent Adversarial Domain Adaptation model. CyCADA adapts representations at both the pixel-level and feature-level, enforces cycle-consistency while leveraging a task loss, and does not require aligned pairs. Our model can be applied in a variety of visual recognition and prediction settings. We show new state-of-the-art results across multiple adaptation tasks, including digit classification and semantic segmentation of road scenes demonstrating transfer from synthetic to real world domains.
rejected-papers
I concur with two of the reviewers: the work is somewhat incremental in terms of technical novelty (it's effectively CycleGANs for domain adaptation with a couple of effective tricks) and the need/advantage of the cycle consistency loss is not demonstrated sufficiently. The only solid ablation evidence seems to be the SVHN-->MNIST experiment from post-submission; I would personally like to see this kind of empirical proof extended much further (the fact that Shrivastava et al.'s method doesn't work well on GTA-->Cityscapes is not itself proof that cycle consistency is needed). With more empirical evidence I can see this paper being a good candidate for a computer vision conference like CVPR or ICCV.
train
[ "S1Elwq_xf", "SyFscqngM", "S14j0RTxM", "B1upY6WmM", "SJGM5T-Xz", "S1bP5aZQG", "HyQBYTbXz", "SyqyFTWmM", "Sy-YBUn1G", "BJxW87myM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "public", "public" ]
[ "This paper proposed a domain adaptation approach by extending the CycleGAN with 1) task specific loss functions and 2) loss imposed over both pixels and features. Experiments on digit recognition and semantic segmentation verify the effectiveness of the proposed method.\n\nStrengths:\n+ It is a natural and intuitive application of CycleGAN to domain adaptation. \n+ Some of the implementation techniques may be useful for the future use of CycleGAN or GAN in other applications, e.g., the regularization over both pixels and features, etc.\n+ The experimental results are superior over the past.\n+ The translated images in Figure 6 are amazing. Could the authors show more examples and include some failure cases (if any)?\n\nWeaknesses:\n- The presentation of the paper could be improved. I do not think I can reproduce the experimental results after reading the paper more than twice. Many details are missing and some parts are confusing or even misleading. As below, I highlight a few points and the authors are referred to the comments by Cedric Nugteren for more suggestions.\n\n-- Equation (4) is incorrect.\n-- In the introduction and approach sections, it reads like a big deal to adapt on both the pixel and feature levels. However, the experiments fail to show that these two levels of adaptation are complementary to each other. Either the introduction is a little misleading or the experiments are insufficient. \n-- What does the “image-space adaptation” mean?\n-- There are three fairly sophisticated training stages in Section 4.2. However, the description of the three stages are extremely short and ambiguous. \n-- What are exactly the network architectures used in the experiments?\n\n- The technical contribution seems like only marginal innovative. \n- The experiments adapting from MNIST to SVHN would be really interesting, given that the MNIST source domain is not as visually rich as the SVHN target. Have the authors conducted the corresponding experiments? How are the results? \n\nSummary:\nThe proposed method is a natural application of CycleGAN to domain adaptation. The technical contribution is only marginal. The results on semantic segmentation are encouraging and may motivate more research along this direction. It is unfortunate that the paper writing leaves many parts of the paper unclear. \n\n=========================================\nPost rebuttal:\n\nThe rebuttal addresses my first set of questions. The revised paper describes more experiment details, corrects equation (4), and clarifies some points about the results. \n\nThis paper applies the cycle consistent GAN to domain adaptation. I still think the technical contribution is only marginally innovative. Nonetheless, I do not weigh this point too much given that the experiments are very extensive. \n\nThe rebuttal does not answer my last question. It would be interesting to see what happens to adapt from MNIST to SVHN, the latter of which contains more complicated background than the former. \n", "This paper essentially uses CycleGANs for Domain Adaptation. My biggest concern is that it doesn't adequately compare to similar papers that perform adaptation at the pixel level (eg. Shrivastava et al-'Learning from Simulated and Unsupervised Images through Adversarial Training' and Bousmalis et al - 'Unsupervised Pixel-level Domain Adaptation with GANs', two similar papers published in CVPR 2017 -the first one was even a best paper- and available on arXiv since December 2016-before CycleGANs). 
I believe the authors should have at least done an ablation study to see if the cycle-consistency loss truly makes a difference on top of these works-that would be the biggest selling point of this paper. The experimental section had many experiments, which is great. However I think for semantic segmentation it would be very interesting to see whether using the adapted synthetic GTA5 samples would improve the SOTA on Cityscapes. It wouldn't be unsupervised domain adaptation, but it would be very impactful. Finally I'm not sure the oracle (train on target) mIoU on Table 2 is SOTA, and I believe the proposed model's performance is really far from SOTA.\n\nPros:\n* CycleGANs for domain adaptation! Great idea!\n* I really like the work on semantic segmentation, I think this is a very important direction\n\nCons:\n* I don't think Domain separation networks is a pixel-level transformation-that's a feature-level transformation, you probably mean to use Bousmalis et al. 2017. Also Shrivastava et al is missing from the image-level papers.\n* the authors claim that Bousmalis et al, Liu & Tuzel and Shrivastava et al ahve only been shown to work for small image sizes. There's a recent work by Bousmalis et al. (Using Simulation and Domain Adaptation to Improve Efficiency of Deep Robotic Grasping) that shows these methods working well (w/o cycle-consistency) for settings similar to semantic segmentation at a relatively high resolution. Also it was mentioned that these methods do not necessarily preserve content, when pixel-da explicitly accounts for that with a task loss (identical to the semantic loss used in this submission)\n* The authors talk about the content similarity loss on the foreground in Bousmalis et al. 2017, but they could compare to this method w/o using the content similarity or using a different content similarity tailored to the semantic segmentation tasks, which would be trivial.\n* Math seems wrong in (4) and (6). (4) should be probably have a minus instead of a plus. (6) has an argmin of a min, not sure what is being optimized here. In fact, I'm not sure if eg you use the gradients of f_T for training the generators?\n* The authors mention that the pixel-da approach cross validates with some labeled data. Although I agree that is not an ideal validation, I'm not sure if it's equivalent or not the authors' validation setting, as they don't describe what that is.\n* The authors present the semantic loss as novel, however this is the task loss proposed by the pixel-da paper.\n* I didn't understand what pixel-only and feat-only meant in tables 2, 3, 4. I couldn't find an explanation in captions or in text\n\n\n=====\nPost rebuttal comments:\nThanks for adding content in response to my comments. The cycle ablation is still a sticky point for me, and I'm still left not sure if cycle-consistency really offers an improvement. Although I applaud your offering examples of failures for when there's no cycle-consistency, these are circumstantial and not quantitative. The reader is still left wondering why and when is the cycle-consistency loss is appropriate. As this is the main novelty, I believe this should be in the forefront of the experimental evaluation. ", "This paper proposes a natural extension of the CycleGAN approach. This is achieved by leveraging the feature and semantic losses to achieve a more realistic image reconstruction. The experiments show that including these additional losses is critical for improving the models performance. 
The paper is very well written and technical details are well described and motivated. It would be good to identify the cases where the model fails and comment on those. For instance, what if the source data cannot be well reconstructed from adapted target data? What are the bounds of the domain discrepancy in this case? ", "Thank you for your comments. We have included new experiments and text edits per your suggestion.\n\nHigher performing semantic segmentation models\n======================================\nFirst, we added a new experiment for GTA->CityScapes adaptation with a newer semantic segmentation model. Again, we found that for this experiment, feature space adaptation alone provided a large improvement (21 mIoU -> 31 mIoU), pixel adaptation alone resulted in a substantial improvement (21->37 mIoU) and finally, combining feature space with pixel space adaptation provided the largest performance (21->39). \n\nCycle Ablation\n===========\nWe added a new ablation experiment to the SVHN->MNIST setting where the cycle loss is removed while the semantic loss remains. This version was still susceptible to label flipping and understandably failed at the task of reconstruction (see Figure 3b).\n\nComparison to other Pixel Level Approaches\n==================================\nWe ran Shrivastava et al (see Appendix A.2) in the GTA->CityScapes scenario and found that the model was not able to accurately capture the transfer problem, resulting in performance below the original source model.\n\nWe added a citation to the new Bousmalis et al. (2017a) paper on robotic grasping (pg 1 Introduction). Those images are indeed higher resolution than the prior work, but they still do not match the resolution of the dashcam driving images and have significantly lower variation and complexity. In general, optimizing pixel transfer methods with high resolution images remains a challenging problem. Our approach provides one solution by which additional regularization through the pixel cycle loss encourages transfer. We would like to clarify that the comment we made about prior pixel level approaches which “may not necessarily preserve content” was intended as a potential criticism of pixel based approaches in general, not specifically about Bousmalis et al. (2017b). In fact, in the related work section we explicitly mention that Bousmalis et al. (2017b) uses a content similarity loss on the foreground mask. This is a privileged version of our semantic consistency loss as it requires a known foreground mask on target data. We do not claim to be the first to introduce the use the a task classifier to preserve content. Instead we introduce a model which does pixel transfer through a cycle loss for low level preservation and a semantic loss for preserving semantics in a large domain shift scenario (when all pixels must change significantly). \n\nText Edits\n=======\nThank you for noticing the error in Equation (4). We have updated the text to accurately reflect our description and implementation. In addition, we have added semantic consistency to our new Figure 2 to clarify the use of this objective. \n\nEquation (6) defines the full CyCADA objective and Equation (7) presents the optimization problem.\n\nAppendix A.1 describes architectures, training procedures, and implementation details needed to reproduce our experiments. \n\nWe have revised the method section to clarify the pixel vs feature level transfer which is ablated in the experiments section. 
In addition the new Figure 2 should offer further clarity.\n", "Thank you for your comments. We have made a number of modifications to our manuscript based on your feedback. First, thank you for noticing the error in Equation 4. We have updated it to accurately reflect our description and implementation (our new figure 2 should also clarify its use). We have modified the explanation of image/pixel space adaptation vs feature space adaptation within the main method description and provided headers to guide the reader. We have also added an appendix with an implementation section specifying the network architectures and describing the training procedures. We will release our code, data and models upon publication. We have also followed many of the suggestions from Cedric Nugteren as you have pointed out (please see our response there for the detailed list of changes). \n\nWe would like to clarify that our results show that independently pixel space and feature space adaptation offer performance improvement over no adaptation across all experiments. When combined they provide anywhere from equivalent (as in USPS<->MNIST) to marginal improvement (GTA->CityScapes), to *significantly* better performance (SVHN->MNIST) than either approach alone. Thus, we propose using both components together.", "Thank you for your positive feedback and suggestion to study the errors from the model. We have included a section to our appendix illustrating the confusion matrices for the largest domain shift of our digit experiments -- SVHN -> MNIST. In this case we find certain error types are resolved after adaptation while others still remain. Confusion between visually similar classes, such as 1s and 7s, is difficult to resolve without target labels.\n\nIn addition, we have included additional experiments and made updates to further clarify details within our manuscript based on the suggestions from the other reviewers. ", "Thank you for your interest and suggestions. We have addressed Cedric’s comments above. In addition, we have made changes to our method section to clarify the distinction between pixel and feature space adaptation. The new Appendix 6.1 discusses the training procedure which indicates which components are updated in each phase. Equation (4) has been fixed. ", "Thank you for your suggestions on how to improve the presentation of our algorithm. We have incorporated them into our revised manuscript. Changes are specified below.\n\n* Figure 2: we include a new version of this figure with semantic consistency and feature level transfer together with explicit discriminator blocks.\n\n* Moved (-) outside the expectation in Equation (1)\n\n* Fixed Equation (4)\n\n*Made explicit that the source model is pretrained and fixed. Also see implementation details in the Appendix which reinforces this.\n\n*Pixel and Feature adaptation are clarified in the method section as well as in the appendix implementation section.\n\n* Indeed, we do assume that the label space remains unchanged before and after transfer. In fact, that is exactly what the semantic consistency loss enforces.\nNew semantic segmentation results with the DRN-26 architecture results in higher performance overall. Our findings remain the same.", "This is a public comment. I agree with the comments of Cedric Nugteren especially for the feature and pixel loss. \n\nIn addition:\n1. In the last paragraph of section 3, it said CyCADA can be viewed as CycleGan augmented with an additional task loss, which I think should be the semantic loss here? 
But in Table I, CyCADA also covers feature loss and CycleGAN doesn't. \nFrom the Equation 5, the loss should be presented as the feature or the pixel loss explicitly. Otherwise, the training in stages from section 4.2 really makes me confused.\n\n2. In right side of Equation 4, L_task(f_s, X_s, p(fs, Xs)) is not a loss function if f_s is pre-trained. It just outputs a constant.", "\nThis is a not a review of the work, but just a comment with some suggestions to improve the presentation of the work. Currently there are some things unclear and inconsistent in the presentation; I believe improving this can make the contributions of the paper a lot clearer. Here are some comments (not in any particular order):\n\n* Figure 2 (the diagram with images, networks, and losses) is really helpful. However, it would help if the symbols used in the paper (Xs, Xt, Yt, Lgan, Ft, Lcyc, etc.) are added to make it easier to map the equations to the figure. Also, it would be good to extend the figure with the second cycle loss. I understand that that takes extra space, but it might be worth it. Furthermore, it would be good to picture the missing parts as well (Lsem, Fs) for completeness. Finally, perhaps explicitly adding all networks would help clarifying the overall structure (Ds, Dt are missing now).\n\n* In equation 1 (task-loss) it would be clarifying to put large square brackets around the \"-sum()\" term. Now the equation could be read as \"expectation minus the sum of ...\" whereas it should read as \"expectation of the negated sum of ...\".\n\n* Equation 4 has a typo. The left-hand side contains a Gt->s component but it is not on the right-hand side. It would be furthermore helpful to clarify what the two individual components in this equation represent.\n\n* It would be good to make explicit early on that the source model fs has fixed weights throughout the domain adaptation training. Is this also the case in related work?\n\n* At first it is unclear how the \"pixel\" and \"feature\" approaches discussed in the experiment section map to the explanation in section 3 and figure 2. It would be good to clarify this in section 3 and perhaps in a second version of figure 2? There are some unclarities here:\n - Are all loss components trained for the feature case?\n - How are the features obtained? Using the task-model? What if these features are not useful for the target domain (e.g. color information not present in MNIST features but might be useful for SVHN)?\n - Which networks are shared between the pixel and feature approaches?\n - How are the two losses optimized - one after each other? Interleaved? Jointly?\n\n* There seem to be some assumptions on the domain change with respect to the fact that the source labels Ys do not need to be transformed to accommodate changed made on the input data Xs by the transformation Gs->t (e.g. no translation, warping, etc.). It would be nice if this is mentioned explicit and perhaps discussed (is Gs->t constrained in such a way?).\n\n* The first paragraph under the section \"Implementation details\" doesn't seem to be an implementation detail at all, but rather a property of the approach.\n\n* The network architecture used (FCN) is quite old in terms of semantic segmentation (2015). It would be interesting to see how this affects your final accuracy. Is this why the only comparison is against \"FCNs in the wild\", perhaps they use the same architecture? 
If not, how much of your improvement is related to the architecture change and how much related to the method?\n\n* Table 3 contains some results which are better than the oracle (pole, pedestrian, bicycle). Although possible, it would be good to mention this explicitly to make sure this is not a typo.\n" ]
[ 5, 5, 9, -1, -1, -1, -1, -1, -1, -1 ]
[ 5, 5, 5, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_SktLlGbRZ", "iclr_2018_SktLlGbRZ", "iclr_2018_SktLlGbRZ", "SyFscqngM", "S1Elwq_xf", "S14j0RTxM", "Sy-YBUn1G", "BJxW87myM", "BJxW87myM", "iclr_2018_SktLlGbRZ" ]
iclr_2018_SyhRVm-Rb
Automatic Goal Generation for Reinforcement Learning Agents
Reinforcement learning (RL) is a powerful technique to train an agent to perform a task. However, an agent that is trained using RL is only capable of achieving the single task that is specified via its reward function. Such an approach does not scale well to settings in which an agent needs to perform a diverse set of tasks, such as navigating to varying positions in a room or moving objects to varying locations. Instead, we propose a method that allows an agent to automatically discover the range of tasks that it is capable of performing in its environment. We use a generator network to propose tasks for the agent to try to achieve, each task being specified as reaching a certain parametrized subset of the state-space. The generator network is optimized using adversarial training to produce tasks that are always at the appropriate level of difficulty for the agent. Our method thus automatically produces a curriculum of tasks for the agent to learn. We show that, by using this framework, an agent can efficiently and automatically learn to perform a wide set of tasks without requiring any prior knowledge of its environment (Videos and code available at: https://sites.google.com/view/goalgeneration4rl). Our method can also learn to achieve tasks with sparse rewards, which pose significant challenges for traditional RL methods.
rejected-papers
In principle, the idea behind the submission is sound: use a generative model (GANs in this case) to learn to generate desirable "goals" (subsets of the state space) and use that instead of uniform sampling for goals. Overall I tend to agree with Reviewer 3 in that the current set of results is not convincing in terms of it being able to generate goals in a high-dimensional state space, which seems to be be whole raison d'etre of GANs in this proposed method. The coverage experiment in Figure 5 seems like a good *illustration* of the method, but for this work to be convincing, I think we would need a more diverse set of experiments (a la Figure 2) showing how this method performs on complicated tasks. I encourage the authors to sharpen the definitions, as suggested by reviewers, and, if possible, provide experiments where the Assumptions being made in Section 3.3 are *violated* somehow (to actually test how the method fails in those cases).
val
[ "S1kxi6OlM", "S1m5kPUrz", "S10H-jEBG", "rJg5hxtgf", "Syx7RZ9eG", "HyhA3Pgmf", "ry2eoPlmG", "ry6s9De7f", "HkhKcvgXf", "BkuuqPlXz", "Sy98cwe7M", "BknhKwg7G", "HJvjFPl7G", "HkecKvemf", "H1TDFDeQG", "HyvUFDxXz", "HyaEYDx7z", "BkvMYveXM", "H1Oi_PgQG", "HyywOvxmM", "B1OrdweQz" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author" ]
[ "In general I find this to be a good paper and vote for acceptance. The paper is well-written and easy to follow. The proposed approach is a useful addition to existing literature.\n\nBesides that I have not much to say except one point I would like to discuss:\n\nIn 4.2 I am not fully convinced of using an adversial model for goal generation. RL algorithms generally suffer from poor stability and GANs themselves can have convergence issues. This imposes another layer of possible instability. \n \nBesides, generating useful reward function, while not trivial, can be seen as easier than solving the full RL problem. \nCan the authors argue why this model class was chosen over other, more simple, generative models? \nFurthermore, did the authors do experiments with simpler models?\n\nRelated:\n\"We found that the LSGAN works better than other forms of GAN for our problem.\" \nWas this improvement minor, or major, or didn't even work with other GAN types? This question is important, because for me the big question is if this model is universal and stable in a lot of applications or requires careful fine-tuning and monitoring. \n\n---\nUpdate:\nThe authors addressed the major point of criticism in my review. I am now more convinced in the quality of the proposed work, and have updated my review score accordingly.", "The statement b) is more true. Our method should work with any GAN powerful enough to capture the desired goal distributions. However, we did observe that some of the GAN methods are more stable than others, possibly due to the fact that they have less hyper-parameters to tune. Our GAN hyper-parameter tuning was *not* done in a per environment basis and our tuned hyper-parameters were shared across all the experiments. Therefore, due to computation limit, we chose to report the results with LSGAN that has the least number of hyper-parameters to tune (and in fact the hyper-parameters from the original paper worked fine, and we did not try others). \nMore precisely, when trying the other GAN methods, they would sometimes fit less accurately one \"good goals\" distribution (probably improvable with better initial tuning). This generates more goals with too high or too low rewards, therefore momentarily decreasing the learning efficiency of our algorithm, and taking longer to solve the task. In other words, our approach works reliably with any GAN able to generate some new samples from the distribution it is fitting (all the ones we tried satisfy this - as any proper generative model should), and the performance of the algorithm increases with how well it fits the distributions.", "Thanks for the reply! Could you maybe clarify:\n\n\"WGAN (Arjovsky et al. 2017) led to significantly more stable training than a vanilla GAN (as in Goodfellow et al., 2014), and using an LSGAN improved the training stability even further, but not quite as dramatically\"\n\nI am not so much interested in which GAN works the best, also not interested in minor performance improvements. What I am interested in is how justified my doubts about possible additional instability in the algorithm are. So, let me rephrase a bit the question:\n\n1. When you chose to use a WGAN, was it because a standard GAN did not work reliably? 
Or are we talking about merely incremental improvements?\n\nIn essence, which of the following two statements is more true:\n\na) When using GAN for automatic goal generation it is essential to tune the hyperparameter (such as which loss function is used (standard, WGAN, etc), for it to be robust. \n\nb) Our approach does work reliably with any type of adversarial method (and you made some good arguments why to use them in this situation). To show the best results, we used WGAN in our experiments. \n\n\n", "Summary:\n\nThis paper proposes to use a GAN to generate goals to implement a form of curriculum learning. A goal is defined as a subset of the state space. The authors claim that this model can discover all \"goals\" in the environment and their 'difficulty', which can be measured by the success rate / reward of the policy. Hence the goal network could learn a form of curriculum, where a goal is 'good' if it is a state that the policy can reach after a (small) improvement of the current policy.\n\nTraining the goal GAN is done via labels, which are states together with the achieved reward by the policy that is being learned.\n\nThe benchmark problems are whether the GAN generates goals that allow the agent to reach the end of a U-maze, and a point-mass task.\n\nAuthors compare GAN goal generation vs uniformly choosing a goal and 2 other methods.\n\nMy overall impression is that this work addresses an interesting question, but the experimental setup / results are not clearly worked out. More broadly, the paper does not address how one can combine RL and training a goal GAN in a stable way.\n\nPro:\n- Developing hierarchical learning methods to improve the sample complexity of RL is an important problem.\n- The paper shows that the U-maze can be 'solved' using a variety of methods that generate goals in a non-uniform way.\n\nCon:\n- It is not clear to me how the asymmetric self-play and SAGG-RIAC are implemented and why they are natural baselines.\n- It is not clear to me what the 'goals' are in the point mass experiment. This entire experiment should be explained much more clearly (+image).\n- It is not clear how this method compares qualitatively vs baselines (differences in goals etc).\n- This method doesn't seem to always outperform the asymm-selfplay baseline. The text mentions that baseline is less efficient, but this doesn't make the graph very interpretable.\n- The curriculum in the maze-case consists of regions that just progress along the maze, and hence is a 1-dimensional space. Hence using a manually defined set of goals should work quite well. It would be better to include such a baseline as well.\n- The experimental maze-setting and point-mass have a simple state / goal structure. How can this method generalize to harder problems?\n-- The entire method is quite complicated (e.g. training GANs can be highly unstable). How do we stabilize / balance training the GAN vs the RL problem?\n-- I don't see how this method could generalize to problems where the goals / subregions of space do not have a simple distribution as in the maze problem, e.g. if there are multiple ways of navigating a maze towards some final goal state. In that case, to discover a good solution, the generated goals should focus on one alternative and hence the GAN should have a unimodal distribution. How do you force the GAN in a principled way to focus on one goal in this case? 
How could you combine RL and training the GAN stably in that case?\n\nDetailed:\n- (2) is a bit strange: shouldn't the indicator say: 1( \\exists t: s_t \\in S^g )? Surely not all states in the rollout (s_0 ... s_t) are in the goal subspace: the indicator does not factorize over the union. Same for other formulas that use \\union.\n- Are goals overlapping or non-overlapping subsets of the state space? \nDefinition around (1) basically says it's non-overlapping, yet the goal GAN seems to predict goals in a 2d space, hence the predicted goals are overlapping? \n- What are the goals that the non-uniform baselines predict? Does the GAN produce better goals?\n- Generating goal labels is\n- Paper should discuss literature on hierarchical methods that use goals learned from data and via variational methods:\n1. Strategic Attentive Writer (STRAW), V. Mnih et al, NIPS 2016\n2. Generating Long-term Trajectories Using Deep Hierarchical Networks. S.\nZheng et al, NIPS 2016", "This paper proposed a method for automatic curriculum generation that allow an agent to learn to reach multiple goals in an environment with considerable sample efficiency. They use a generator network to propose tasks for the agent accomplish. The generator network is trained with GAN. In addition, the proposed method is also shown to be able to solve tasks with sparse rewards without the need manually modify reward functions. They compare the Goal GAN method with four baselines, including Uniform sampling, Asymmetric Self-play, SAGG-RIAC, and Rejection sampling. The proposed method is tested on two environments: Free Ant and Maze Ant. The empirical study shows that the proposed method is able to improve policies’ training efficiency comparing to these baselines. The technical contributions seem sound, however I find it is slightly difficult to fully digest the whole paper without getting the insight from each individual piece and there are some important details missing, as I will elaborate more below.\n\n1. it is unclear to me why the proposed method is able to solve tasks with sparse rewards? Is it because of the horizons of the problems considered are not long enough? The author should provide more insight for this contribution.\n\n2. It is unclear to me how R_min and R_max as hyperparameters are obtained and how their settings affect the performance.\n\n3. Another concern I have is regarding the generalizability of the proposed method. One of the assumption is “A policy trained on a sufficient number of goals in some area of the goal-space will learn to interpolate to other goals within that area”. This seems to mean that the area is convex. It might be better if some quantitative analysis can be provided to illustrate geometry of goal space (given complex motor coordination) that is feasible for the proposed method.\n\n4. It is difficult to understand the plots in Figure 4 without more details. Do you assume for every episode, the agent starts from the same state? \n\n5. For the plots in Figure 2, is there any explanation for the large variance for Goal GAN? Given that the state space is continuous, 10 runs seems not enough.\n\n6. According to the experimental details, three rollouts are performed to estimate the empirical return. It there any justification why three rollouts are enough?\n\n7. 
Minor comments\nAchieve tasks -> achieve goals or accomplish/solve tasks\nA variation of to -> variation of \nAllows a policy to quickly learn to reach …-> allow an agent to be quickly learn a policy to reach…\n…the difficulty of the generated goals -> … the difficulty of reaching\n", "We thank the reviewer for these references and we will discuss them in our related work section. None of the referenced literature directly tackles the multi-task problem solved by our proposed method, but they are complementary. Neither of them allows to condition the overall policy on different goals (the “action-plans“ in STRAW or the “macro-goals” in HPN are internals of the policy, not an input that can be changed externally). In fact, HPN is only used in a supervised setting trying to imitate expert trajectories - which is weakly related to our problem where no demonstrations are required. Our trained policy does not have any explicit hierarchy like the ones proposed in these papers, which makes it orthogonal to them - and also complementary! It would be an interesting research to improve our approach by learning a hierarchical policy instead the MLP used in our experiments (as described in Appendix B.5). This goes beyond the scope of the current paper and is left as future work.", "This sentence seems to be incomplete. The reviewer is invited to re-submit this comment if it has not been answered by our response.", "See the below discussion on this topic. One can qualitatively compare between the Goal GAN generated goals in Fig. 2 and the SAGG-RIAC ones in Fig. 9.", "Goals are overlapping subsets of the state space. Thus a single state may be contained in multiple goal sets $S^g$. Our RL agent receives only a single goal as input at a time, so this case does not cause any problems for our method.", "The “union”-like operator in this expression is intended to indicate the OR operation, e.g. the expression in 2 expands to:\nIndicator(s_0 is in S_g OR s_1 is in S_g OR … OR s_T is in S_g)\nWe will make this clear in the final version of our paper.\n", "The purpose of the Goal GAN is to generate all feasible goals within a state space (at the appropriate rate based on the performance of the RL agent). If there are multiple paths through a maze, then the Goal GAN should eventually generate goals at all states along all such paths. For example, see Figure 7 in the appendix, in which an ant in free space learns to move in many possible directions; the generated goals form a circle that grows outward from the initial position of the ant. In such a case, the RL agent is trained to reach each of these different goal locations. Thus, the case in which there are multiple paths to achieve each goal does not present any problems for our method. ", "The training of the Goal GAN and the training of the RL agent is balanced / stabilized through their connected objective, in which the Goal GAN is trained to generate goals for which the RL agent obtains an intermediate level of return (Section 4.1). The Goal GAN is trained using labels indicating, for each goal, whether the RL agent can obtain an intermediate level of return for that goal. These labels are computed empirically from rollouts collected by the RL agent. 
Thus, if the RL agent’s performance is slowly increasing, then the goals that the Goal GAN produces will remain relatively similar across timesteps, whereas if the performance of the RL agent increases dramatically, then the Goal GAN will quickly adjust the goals that it is generating to generate goals that are at the appropriate level of difficulty for the current policy. The shared objective ensures that the Goal GAN always generates goals that are appropriate for the RL agent at each iteration. ", "In this paper, we evaluate our method compared to existing baselines for the topic of multi-task goal generation and found that our method outperforms previous competing approaches. Our paper thus establishes our method as a promising direction for multi-task goal generation which can be extended to other tasks in future work. Furthermore, GANs have been shown to be a powerful framework to generate samples from considerably higher dimensional and complex distributions, such as images. Therefore, we think our method has more potential than others to properly generalize to harder goal structures.", "This baseline would, unfortunately, only work for this one task, whereas our method is more general and also works for the other tasks shown in our paper (e.g. Free Ant, N-dimensional Point Mass). Another difficulty with this approach would be to choose at what rate to increment the generated goals along the maze (i.e. at what rate to progress the curriculum). In contrast, our method uses the performance of the policy to automatically determine which goals are generated at each time step.", "Our method consistently outperforms the Asymmetric Self-Play baseline. This is not currently properly reflected in our graphs, since the Asymmetric Self-Play baseline requires extra rollouts to train “Alice” that are not currently included in our plots. In the final version of our paper, we will include the Alice rollouts in our plot to make this more clear. Due to the extra rollouts needed to train Alice, our method is much more sample efficient than this baseline.", "The Asymmetric Self-play method is also used as baseline in other task-generation papers (Florensa et al., 2017), where we can find a comprehensive analysis of the generation process of Asymmetric Self-play. We summarize here the most relevant findings in this other work. Asymmetric Self-Play relies on an agent “Alice” proposing goals. However, in a continuous action space, Alice is typically represented as a unimodal Gaussian policy. Thus, rather than proposing a diverse set of goals, Alice will tend to propose goals in a small cluster around the mean of the Gaussian that represents Alice’s policy. In contrast, our Goal GAN can produce goals to match an arbitrary goal distribution, giving our method much more flexibility and leading to improved performance.\n\nFurthermore, because Asymmetric Self-play uses a goal generation agent (“Alice”) that is trained with reinforcement learning, the goal generator can suffer from the problem of sparse rewards when Bob makes a large improvement relative to Alice. This instability is also described in (Florensa et al., 2017).\n\nThe goals generated by SAGG-RIAC can be seen in Figures 9 and 10 in the appendix of our paper. 
As explained in Section 5.1 of our paper, “SAGG-RIAC maintains an ever-growing partition of the goal-space that becomes more and more biased towards areas that already have more sub-regions, leading to reduced exploration and slowing down the expansion of the policy’s capabilities.”", "The goals are simply points in n-dimensional space. The purpose of this experiment is to evaluate how well our method scales up to goals of higher dimensions. Thus the environment places an n-dimensional point-mass in an n-dimensional space in which the point mass is constrained to move within a small region within this space. The feasible goals are points within this smaller region, and the agent achieves a goal by moving to within epsilon of the goal. The difficulty of this problem for goal-generation is that the goal-generator must learn to discover the bounds of the smaller region within which the agent is constrained to move. Finding this region becomes increasingly challenging as the dimensionality of the state space increases. Our goal generation method bootstraps from states visited by the agent and thus is able to efficiently find this feasible region. ", "Our implementation of “Asymmetric Self-Play” follows directly from the description of their method from their publication. In Asymmetric Self-play, “Alice” proposes goals (exactly what our Goal GAN does) for the agent “Bob” to try to achieve, and Alice and Bob are both trained with reinforcement learning (we use TRPO, with the same parameters as for our method). We use the “repeat” version of asymmetric self-play in which “Bob” must then learn to reach the goal that “Alice” proposed. In the Asymmetric Self-play paper, training is alternated between a “multi-goal” setup and a single “target task” setup. In our case we do not alternate because our “target task” setup is the same as the “multi-goal” one: we desire to train an agent that can achieve many target tasks, which is already done by the multi-goal setup; thus we only need the “multi-goal” training portion of their method. Their multi-goal training method, if successful, would result in a policy in which “Bob” learns to achieve many goals. Since this is also the objective of our method (described in equation 3 of our paper), Asymmetric Self-play is an appropriate baseline for our task. \n\nRegarding SAGG-RIAC, details of our implementation of this method can be found in Appendix E.2. The objective of SAGG-RIAC is the same as the objective of our method, although SAGG-RIAC is usually used to train a model-based agent whereas our method also works with an agent trained in a model-free setting. Regardless, since SAGG-RIAC likewise attempts to train an agent to achieve many goals, it is also a natural baseline to compare against.", "Thank you for recognizing the contribution in this paper. We agree that care must be taken to ensure stability for training the GAN. Still, our experiments show that our method outperforms the competing approaches on this problem. We chose to use a GAN rather than another generative model due to a GAN’s demonstrated ability to generate samples in high-dimensional spaces (such as images), thus giving our method the potential to scale up to high-dimensional goal spaces. We did not experiment with other generative models for these tasks. \n\nRegarding a comparison of different GAN types: in our experiments, using a WGAN (Arjovsky et al. 
2017) led to significantly more stable training than a vanilla GAN (as in Goodfellow et al., 2014), and using an LSGAN improved the training stability even further, but not quite as dramatically. We have added these observations in the paper without additional details as it is not the focus of our work. As is stated in Section 4.2, all results shown in our paper, across a number of different environments, use the LSGAN with the original hyperparameters reported in Mao et al. 2017. In general, we’ve found GANs to be much more stable in lower dimensional state spaces than in image spaces, and many of the well known convergence issues did not happen. Therefore, no considerable fine-tuning and monitoring was needed. In future work we hope to extend our model to an even greater number of environments.", "4. Indeed the agent starts from the same state at every rollout. Only the goal (and hence the reward) changes between rollouts. We have updated Fig. 4 to clearly mark the projection of the initial state onto the depicted x-y plane representing the Center of Mass positions. We hope this clarifies the plots. \n\n5. We don’t think any of the methods presented have a significantly larger variance than the others. We agree that averaging over more than 10 random seeds would be desirable, although given time and compute constraints we couldn’t run more. Actually, 10 random seeds is considerably above standard in this field (most RL publications use 3 or 5 random seeds).\n\n6. We apologize for a typo in Appendix B1-B2, where we stated that “For each goal, we estimate the empirical return with three rollouts”. This is only true for the ablation experiment called “Goal GAN true label” shown in Appendix C, Fig. 6. For the “Goal GAN (ours)” method presented throughout the paper we do not sample more rollouts to label the goals; instead we reuse rollouts collected during the TRPO iterations. This means that the goals are labeled with a number of rollouts ranging from two to five (based on the number of times this goal was sampled during RL training). We have run an experiment of sampling 10 additional rollouts to label every goal, and we observe that the performance does not differ significantly from the one already reported with three rollouts.\n\nThank a lot for your additional comments, we have corrected all these typos in the paper.\n\nThis review has been very helpful to improve the clarity of exposition. Please, let us know if any point is still unclear and we will very gladly extend our explanations.", "We thank the reviewer for the thorough analysis and insightful comments. In the following we answer one by one the questions, and we detail the clarifications made in the paper wherever needed.\n\n1. Our proposed method is able to solve tasks with sparse rewards without modifying the reward function by automatically generating a curriculum over tasks. As our Problem Definition (Sec. 3) states in the Overall Objective (Sec. 3.2), we are seeking a policy $\\pi^*(\\cdot | s_t, g)$ that can succeed at many goals $g$, each goal corresponding to a different task with its own sparse reward $r^g(s_t, a_t, s_{t+1})$. But, although all tasks have a sparse reward, they are not all of the same difficulty! In particular, reaching a goal nearby the starting position is very easy, and can be performed even by the randomly initialized policy. 
Then, once the policy has learned to reach the nearby goals (in our navigation settings, it implies having learned some basic locomotor skills), it can bootstrap this acquired knowledge to attempt more complex (further away) goals. As explained in our Goal Labeling (Sec 4.1), our method strives to sample goals always of “intermediate difficulty” $g: R_{\\min} \\leq R^g(\\pi_i) \\leq R_{\\max}$. This means that our method will always be sampling goals such that training on them is efficient (i.e. our policy is able to receive a sufficient amount of reward such that it can improve its performance), despite their sparse reward structure. If no curriculum is applied, a prohibitively long time-horizon would be needed for the policy to learn to reach the far away goals. Furthermore, many goals are actually infeasible, and no matter the time-horizon they always receive a reward of 0. Our method minimizes wasting rollouts trying to reach such goals because they do not satisfy our condition $R_{\\min} \\leq R^g(\\pi_i)$.\n\n2. The hyperparameters R_min and R_max have a very clear probabilistic interpretation given in Sec. 4.1, based on analyzing Eq. (2). R_min is the minimum success probability required to start training on a particular goal. R_max is the maximum success probability above which we prefer to concentrate training on new goals. In practice, as explained in Sec. 4.3 and Appendix C, we estimate $R^g(\\pi_i)$ with the rollouts collected by our RL algorithm. Therefore, each estimation is an average over two to five binary rewards (whether the rollout succeeded or not), meaning that the lowest numbers it can get are 0 or ⅕ and the highest are ⅘ or 1. In all our experiments we used R_min = 0.1 and R_max = 0.9, but given the above analysis any $R_min \\in ]0, 0.2[$ and $R_max \\in ]0.8, 1[$ would have yield exactly the same result. We have not experimented with values outside this range because it might not be of practical interest to not train on goals that are already achieved more than 20% of the time or have a policy succeeding less than 80% of the time on the goals it is given.\n\n3. Our assumptions do not imply convexity of the goal space. For example, we do provide quantitative analysis for the Ant-Maze environment, where we report an efficient learning of our method despite the geometry of the feasible goal space being U-shaped, as seen in Fig. 4 (we have updated the legend to more clearly identify the feasible goal space). Rather, the interpolation statement refers to the smoothness of the goal space with respect to the policy, i.e. the policy for reaching a specific goal that has not been sampled during training can be inferred from sampling a sufficient number nearby goals in the continuous goal space. The extrapolation statement should be understood along the lines of the explanation given in our point 1. of this rebuttal: “once the training policy is able to reach the nearby goals ... it can bootstrap this acquired knowledge to attempt more complex (further away) goals”. This is a very reasonable assumption in many learning systems, robotics in particular.\n" ]
[ 8, -1, -1, 4, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, -1, -1, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_SyhRVm-Rb", "S10H-jEBG", "H1Oi_PgQG", "iclr_2018_SyhRVm-Rb", "iclr_2018_SyhRVm-Rb", "rJg5hxtgf", "rJg5hxtgf", "rJg5hxtgf", "rJg5hxtgf", "rJg5hxtgf", "rJg5hxtgf", "rJg5hxtgf", "rJg5hxtgf", "rJg5hxtgf", "rJg5hxtgf", "rJg5hxtgf", "rJg5hxtgf", "rJg5hxtgf", "S1kxi6OlM", "B1OrdweQz", "Syx7RZ9eG" ]
iclr_2018_ryj0790hb
Incremental Learning through Deep Adaptation
Given an existing trained neural network, it is often desirable to learn new capabilities without hindering performance of those already learned. Existing approaches either learn sub-optimal solutions, require joint training, or incur a substantial increment in the number of parameters for each added task, typically as many as the original network. We propose a method called Deep Adaptation Networks (DAN) that constrains newly learned filters to be linear combinations of existing ones. DANs preserve performance on the original task, require a fraction (typically 13%) of the number of parameters compared to standard fine-tuning procedures, and converge in fewer cycles of training to a comparable or better level of performance. When coupled with standard network quantization techniques, we further reduce the parameter cost to around 3% of the original with negligible or no loss in accuracy. The learned architecture can be controlled to switch between various learned representations, enabling a single network to solve a task from multiple different domains. We conduct extensive experiments showing the effectiveness of our method on a range of image classification tasks and explore different aspects of its behavior.
rejected-papers
This work tackles an important problem of incremental learning and does so with extensive experimentation. As pointed out by two reviewers, the idea does seem novel and interesting, but the submission would require some rewriting before being potentially accepted at a venue like ICLR. I suggest focusing the paper more on the task-incremental learning aspects, doing the ablation studies (and other changes) as requested by the reviewers, and having a rich appendix with details (with more discussion in the paper itself).
train
[ "BJJTve9gM", "HyOveS5gf", "HyK6w83xM", "rJsPGMmbz", "HyGLffmbM", "S1PmnJmbM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "This paper proposes to adapt convnet representations to new tasks while avoiding catastrophic forgetting by learning a per-task “controller” specifying weightings of the convolution-al filters throughout the network while keeping the filters themselves fixed.\n\n\nPros\n\nThe proposed approach is novel and broadly applicable. By definition it maintains the exact performance on the original task, and enables the network to transfer to new tasks using a controller with a small number of parameters (asymptotically smaller than that of the base network).\n\nThe method is tested on a number of datasets (each used as source and target) and shows good transfer learning performance on each one. A number of different fine-tuning regimes are explored.\n\nThe paper is mostly clear and well-written (though with a few typos that should be fixed).\n\n\nCons/Questions/Suggestions\n\nThe distinction between the convolutional and fully-connected layers (called “classifiers”) in the approach description (sec 3) is somewhat arbitrary -- after all, convolutional layers are a generalization of fully-connected layers. (This is hinted at by the mention of fully convolutional networks.) The method could just as easily be applied to learn a task-specific rotation of the fully-connected layer weights. A more systematic set of experiments could compare learning the proposed weightings on the first K layers of the network (for K={0, 1, …, N}) and learning independent weights for the latter N-K layers, but I understand this would be a rather large experimental burden.\n\nWhen discussing the controller initialization (sec 4.3), it’s stated that the diagonal init works the best, and that this means one only needs to learn the diagonals to get the best results. Is this implying that the gradients wrt off-diagonal entries of the controller weight matrix are 0 under the diagonal initialization, hence the off-diagonal entries remain zero after learning? It’s not immediately clear to me whether this is the case -- it could help to clarify this in the text.\n\nIf the off-diag gradients are indeed 0 under the diag init, it could also make sense to experiment with an “identity+noise” initialization of the controller matrix, which might give the best of both worlds in terms of flexibility and inductive bias to maintain the original representation. (Equivalently, one could treat the controller-weighted filters as a “residual” term on the original filters F with the controller weights W initialized to noise, with the final filters being F+(W\\crossF) rather than just W\\crossF.)\n\nThe dataset classifier (sec 4.3.4) could be learnt end-to-end by using a softmax output of the dataset classifier as the alpha weighting. It would be interesting to see how this compares with the hard thresholding method used here. (As an intermediate step, the performance could also be measured with the dataset classifier trained in the same way but used as a soft weighting, rather than the hard version rounding alpha to 0 or 1.)\n\n\nOverall, the paper is clear and the proposed method is sensible, novel, and evaluated reasonably thoroughly.", "This paper proposes new idea of using controller modules for increment learning. Instead of finetuning the whole network, only the added parameters of the controller modules are learned while the output of the old task stays the same. Experiments are conducted on multiple image classification datasets. 
\n\nI found the idea of using controller modules for increment learning interesting and have some practical use cases. However, this paper has the following weakness:\n1) Missing simple baselines. I m curious to see some other multitask learning approach, e.g. branch out on the last few layers for different tasks and finetune the last few layers. The number of parameters won't be affected so much and it will achieve better performance than 'feature' in table 3.\n2) Gain of margin is really small. The performance improvements in Table1 and Table3 are very small. I understand the point is to argue with fewer parameters the model can achieve comparable accuracies. However, there could be other ways to design the network architecture to reduce the size (sharing the lower level representations).\n3) Presentation of the paper is not quite good. Figures are blurry and too small. ", "----------------- Summary -----------------\nThe paper tackles the problem of task-incremental learning using deep networks. It devises an architecture and a training procedure aiming for some desirable properties; a) it does not require retraining using previous tasks’ data, b) the number of network parameters grows only sublinearly c) it preserves the output of the previous tasks intact.\n\n----------------- Overall -----------------\nThe paper tackles an important problem, aims for important characteristics, and does extensive and various experiments. While the broadness of the experiments are encouraging, the main task which is to propose an effective task-incremental learning procedure is not conclusively tested, mainly due to the lack of thorough ablation studies (for instance when convolutional layers are fixed) and the architecture seems to change from one baseline (method) to another.\n\n----------------- Details -----------------\n- in the abstract it says: \"Existing approaches either learn sub-optimal solutions, require joint training, or incur a substantial increment in the number of parameters for each added task, typically as many as the original network.\"\nThe linear-combination constraint in the proposed approach is a strong one and can learn a sub-optimal solution for the newly introduced tasks.\n\n- Page 3: R^C → R^{C_o}\n\n- The notation is (probably unnecessarily) too complicated, perhaps it’s better to formulate it without being faithful to the actual implementation but for higher clarity and ease of understanding. For instance, one could start from denoting feature maps and applying the controller/transform matrix W on that, circumventing the clutter of convolutional kernels.\n\n- What is the DAN architecture? \n\n- In table 1 a better comparison is when using same architecture (instead of VGG) to train it from scratch or fine-tune from ImageNet (the first two rows)\n\n- What is the architecture used for random-weights baseline?\n\n- An experiment is needed where no controller is attached but just the additional fully-connected layers to see the isolated improvements gained by the linear transform of convolutional layers.\n\n- Multiple Base Networks: The assumption in incremental learning is that one does not have access to all tasks/datasets at once, otherwise one would train them jointly which would save parameters, training time and performance. 
So, finding the best base network using the validation set is not relevant.\n\n- The same concern as above applies to the transferability and dataset decider experiments\n", "It contains replies to all reviewer's comments.", "It contains replies to all reviewer's comments.", "Thanks for the time taken for reviewing an the constructive suggestions. \nReplies to reviewers:\nReviewer 1:\n\n1. Perhaps some additional ablation studies as freezing some layers are in place, though several baselines were tested, ranging from freezing all but the top-layer (\"feature\"), freezing nothing (\"fine-tuning\"), as well as comparing (see table 3) to stronger baselines such as LWF and the very recent Residual Adapters - all of which are outperformed. Some experiments were omitted due to lack of space and to avoid missing the main point due to cluttering the paper. In addition, less powerful variants of the proposed method were suggested and evaluated, such as the \"diagonal\" method. \n2. Indeed two main architectures were used, the VGG architecture for exploratory experiments regarding transferability, initialization methods, etc and for the Visual Decathlon Challenge a Res-net based architecture was used. Perhaps the exposition or order of experiments caused confusion. However, using different datasets and architectures also serves to show applicability of the across settings. \n3. About sub-optimality: indeed we constrain the space of solutions. A task whose basic required features are span an orthogonal subspace will surely result in poor performance under this method. This limitation is quite explicitly acknowledged in the discussion (section 4.5) and mentioned as an issue for future work. In addition, we address these issues in the experiments by testing which base-network is suitable for transferring to other tasks with the best average performance (see Fig 1(b), as well as section 4.2, and fig. 2(b)). Arguably, also the number of convolutional filters in the first level of a modern CNN is limiting, for example, in resnet we have 16 3x3 filters, and the space of 3x3 RGB channels would require 27 3x3 filters to be fully spanned. But we know that the space of natural images is much smaller than that of all images (though likely not a linear subspace). \n4. Notation : We weren't sure if explicitly writing the notation this was would be better or worse than leaving it in a more compact form. We agree it seems a bit over-complicated.\n5. DAN refers to any architecture which was augmented with the controller modules + extra heads for additional tasks. We regret this was not clear from the text and can try to clarify it.\n6. What better comparison would you suggest for table 1? This captures both transferability or powerful pre-training w.r.t various tasks and the compactness of representation.\n6. \"an experiment is needed with just an additional fully connected layer\" : this is actually in the paper as one of the baselines, e.g. called \"feature\" in table 3, also referred to as \"feature extraction\", \"shallow transfer learning\" , \"ft-last\" (table 2, 3rd row).\n7. Multiple base-networks: We agree with the reviewer's reasoning if all the data is not available at once. This, as well as the transferability tests, were more of an exploratory nature to see relations between representations learned on various dataset. \n\nReviewer3:\n1. Please see answer 1 to reviewer 1. \n2. We do not claim that performance in terms of accuracy is much higher than regular \nfine-tuning. 
The main claim is indeed efficiency of representation and this is not left\nwithout comparison to several other methods, including recent ones; As shown in table 3,\nwe outperform LWF - though not by a large margin, but do gain much in terms of representation size, and outperform the incremental version of Residual adapters, and match Residual adapters where they used *joint* training. To recap, we show improvements over well accepted and some very recent baselines that address some of the same challenges. \n3. Please elaborate on what you mean by \"presentation\". Does this refer to figure aesthetics? Indeed, some can be made larger. Which figures were blurry? \n\nReviewer2:\n1. The reason to avoid tasks-specific rotation of fc layers is because the number of weights required to do so would usually surpass that required to learn all parameters anew, e.g, a fc layer of 512x1000 would require 1000x1000 parameters.\n2. About diagonal init: this is discussed (though not in terms of gradients) in section 4.1.1, and indeed a the text briefly mentions a similar recent work that does as the reviewer suggested by using residual units.\n3. We had some very initial experiments with soft thresholding / weighing but these were left out, as the paper was already quite long. There is mention on shifting representations in a soft way, see Fig. 2(c). The suggestion of training with a non-integer alpha is a very interesting one and is reminiscent of recent work on training with affine-combinations training images. Thanks for the suggestion!\n\n" ]
[ 6, 4, 5, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1 ]
[ "iclr_2018_ryj0790hb", "iclr_2018_ryj0790hb", "iclr_2018_ryj0790hb", "BJJTve9gM", "HyOveS5gf", "iclr_2018_ryj0790hb" ]
iclr_2018_r1DPFCyA-
Discriminative k-shot learning using probabilistic models
This paper introduces a probabilistic framework for k-shot image classification. The goal is to generalise from an initial large-scale classification task to a separate task comprising new classes and small numbers of examples. The new approach not only leverages the feature-based representation learned by a neural network from the initial task (representational transfer), but also information about the classes (concept transfer). The concept information is encapsulated in a probabilistic model for the final layer weights of the neural network which acts as a prior for probabilistic k-shot learning. We show that even a simple probabilistic model achieves state-of-the-art on a standard k-shot learning dataset by a large margin. Moreover, it is able to accurately model uncertainty, leading to well calibrated classifiers, and is easily extensible and flexible, unlike many recent approaches to k-shot learning.
rejected-papers
This submission presents intriguingly good results on k-shot learning and I agree with the authors that the results are better than the presented previous work, and that the method is simple, so I took a deeper look into the paper despite the overall negative reviews. However, I think in its current form, the paper is not suitable for publication: - The previous work that the authors compare to was not really using comparable architectures: in fact, likely much worse base models with fewer parameters etc. I think any future version of this work would need to control for architecture capacity; otherwise, how can we be sure where the gains come from? To me, this is a major unknown in terms of the credit assignment for the great results. - The authors should be comparing with MAML (and follow-up work) by Finn et al. (2017). - I don't really understand why the authors claim to have no need for validation sets. That's a very strong claim: are ALL the hyper-parameters (model architectures, etc.) just chosen in another, principled way? This issue would definitely need to be addressed in follow-up work.
train
[ "rJ60euDeG", "SyJRNAKeG", "SknsYOMZf", "ryuLjYUfG", "HJsAYK8Gz", "r1hd5K8zG", "S1Wl_KLfM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "This paper presents a procedure to efficiently do K-shot learning in a classification setting by creating informative priors from information learned from a large, fully labeled dataset. Image features are learned using a standard convolutional neural network---the last layer form image features, while the last set of weights are taken to be image \"concepts\". The method treats these weights as data, and uses these data to construct an informative prior over weights for new features.\n\n- Sentence two: would be nice to include a citation from developmental psychology.\n\n- Probabilistic modeling section: treating the trained weights like \"data\" is a good way to convey intuition about your method. It might be good to clarify some specifics earlier on in the \"Probabilistic Modeling\" paragraph, e.g. how many \"observations\" are associated with this matrix. \n \n- In the second phase, concept transfer, is the only information from the supervised weights the mean and estimated covariance? For instance, if there are 80 classes and 256 features from the supervised phase, the weight \"data\" model is 80 conditionally IID vectors of length 256 ~ Normal(\\mu, \\Sigma). The posterior MAP for \\mu and \\Sigma are then used as a prior for weights in the K-shot task. How many parameters are estimated for \n\n * gauss iso: mu = 256-length vector, \\sigma = scalar variance value of weights\n * log reg: mu = 256-length zero vector, \\sigma = scalar variance value of weights\n * log reg cross val: mu = 256-length zero vector, \\sigma = cross validated value\n\nIf the above is correct, the information boosts K-shot accuracy is completely contained in the 256-length posterior mean vector and the scalar weight variance value?\n\n- Is any uncertainty about \\mu_MAP or \\Sigma_MAP propagated through to uncertainty in the K-shot weights? If not, would this influence the choice of covariance structure for \\Sigma_MAP? How sensitive are inferences to the choice of Normal inverse-Wishart hyper parameters? \n\n- What do you believe is the source of the mis-calibration in the \"predictied probability vs. proportion of times correct\" plot in Figure 2? \n\nTechnical: The method appears to be technically correct.\n\nClarity: The paper is pretty clearly written, however some specific details of the method are difficult to understand.\n\nNovel: I am not familiar with K-shot learning tasks to assess the novelty of this approach. \n\nImpact: While the reported results seem impressive and encouraging, I believe this a relatively incremental approach. ", "Authors present a k-shot learning method that is based on generating representations with a pre-trained network and learning a regularized logistic regression using the available data. The regularised regression is formulated as a MAP estimation problem with the prior estimated from the weights of the original network connected final hidden layer to the logits — before the soft-max layer. \n\nThe motivation of the article regarding “concepts” is interesting. It seems especially justified when the training set that is used to train the original network has similar objects as the smaller set that is used for k-shot learning. Maps shown in Figures 6 and 7 provide good motivation for this approach. \n\nDespite the strong motivation, the article raises some concerns regarding the method. \n1. The assumption about independence of w vectors across classes is a very strong one and as far as I can see, it does not have a sound justification. 
The original networks are trained to distinguish between classes. The weight vectors are estimated with this goal. Therefore, it is very likely that vectors of different classes are highly correlated. Going beyond this assumption also seems difficult. The proposed model estimates $\\theta^{MAP}$ using only one W matrix, the one that is estimated by training the original network in the most usual way. In this case, the prior over $\\theta$ would have a large influence on the MAP estimate and setting it properly becomes important. As far as I can see, there is no good recipe presented in the article for setting this prior. \n2. How is the prior model defined? It is the most important component of the method while precise details are not provided. How are the hyperparameters set? Furthermore, this detail needs to be in the main text. \n3. With the isotropic assumption on the covariance matrix, the main difference between logistic regression, which is regularized by L2 norm and coefficient set proportional to the empirical variance, and the proposed method seems to be the mean vector $\\mu^{MAP}$. From the details provided in the appendix — which should be in the main text in my opinion — I believe this vector is a combination of the prior and mean of w_c across classes. If the prior is set to 0, how different is this vector from 0? Authors should focus on this in my opinion to explain why methods work differently in 1-shot learning. In the other problems, the results suggest they are pretty much the same. \n4. Authors’ motivation about concepts is interesting however, if the model bases its prediction on mean of w_c vectors over classes, then I am not sure if authors really achieve what they motivate for. \n5. Results are not very convincing. If the method was substantially different than baseline, I believe this would have been no problem. Given the proximity of the proposed method to the baseline with regularised logistic regression, lack of empirical advantage is an issue. If the proposed model works better in the 1-shot scenario, then authors should delve into it to explain the advantage. \n\nMinor comments: \nEvaluation in an online setting section is unclear. It needs to be rewritten in my opinion. ", "The authors introduce a probabilistic k-shot learning model based on previous training of a CNN on a large dataset. The weights of the softmax layer of the CNN are then used as the MAP solution in a concept learning scenario to put a prior over the soft-max weights of the classifier (logistic regression) for the dataset with k-shot examples. The model is compared against other models for doing k-shot learning in miniImageNet, and against different versions of the same model for CIFAR-100.\n\nThe paper introduces a very simple idea that allows transfer knowledge from a network trained with a large-dataset to a network-trained with a smaller amount of data, the data with k-shot examples per class. This is not a general way to do k-shot learning, because it heavily depends on the availability of the large dataset where the weights of the soft-max function can be extracted. But it seems to work for natural image data. \n\nHow many data observations are necessary to estimate \\widetilde{W}_{MAP} such that it is still possible to obtain superior performance in the k-shot learning problem? Did you try the methods of Table 1 for CIFAR-100? The experiments on this dataset use models that are variations of the same proposed model. \n", "Thank you very much for your review; we reply to it below. 
Please also read the general reply above.\n\nWe agree that our approach requires a certain amount of old training classes (“a database”) to build good features, but so do competing methods, such as matching networks or prototypical networks. In such a case, our method is general: learn good features with this data, and build a simple probabilistic model on top of the learnt parameters. Good feature representations exist for many data modalities such as images or text. If not much data is available in the training classes, we are not aware of methods with any guarantees. \n\nWe did not investigate the dependence on the number of training classes. Generally, more classes is always better as is the case for all competing methods.\nWe did not run Matching Networks on CIFAR 100 as we used this dataset to compare different probabilistic models and not to compare against other methods.", "Thank you very much for your review; we reply to it below. Please also read the general reply above.\n\nad 1) $\\theta^{MAP}$ is estimated using the W matrix from training, but this matrix contains C samples from p(W|\\theta). The importance of the prior vanishes as C increases and is not so fundamental. \nThe softmax likelihood does have an identifiability problem: if all weights are shifted by the same offset then the same probabilities will result. By itself, this can result in dependencies in the weight’s posterior. However, the L2 regularisation applied in the first phase of learning (representational learning) mitigates this effect. Moreover, these dependencies have no effect on the quality of prediction, since by definition predictions are the same for these settings of the weights. We do not believe there are strong additional dependencies between the top-level weights, once the lower layers are fixed. Indeed, once the lower layers are fixed, the average hidden layer activation for a class indicates what a ‘good’ setting of the softmax weight for that class will be: the weight should simply lie in the direction of this vector (see [10] for a similar argument). A related observation lends some support to this argument: pilot experiments showed that -- when the lower level of the network is fixed and the top level weights are retrained several times to classify a class in the context of sets of different randomly selected classes -- the same weight vector is recovered each time for the common class. The conclusion is that the context does not matter, but rather just the representation of that class at the hidden layer. This does not speak directly to dependencies between the weight vectors of different classes, but it is consistent with this hypothesis and may explain the very strong performance of this seemingly overly-simple approach. \n\n2) We use a Normal-Inverse-Wishart prior, which is the standard conjugate prior to a Gaussian model and has four hyperparameters, mu_0, kappa_0, Lambda_0, and nu_0. Standard approaches to set these hyperparameters are discussed in [Murphy 2012]; we try two approaches: 1) a weakly data dependent prior, 2) a prior that is set by cross-validation of log probabilities on the weights (see discussion in the supplement). Both approaches yield similar results, and especially for the isotropic model, the results are not very sensitive to the choice of prior parameters. We will clarify in the final version. \n\n3) This is correct; if mu_0is zero, the size of kappa_0 determines how different mu_MAP will be to zero. 
Typically, this value is chosen to be (much) smaller than one (Murphy 2012), such that mu_MAP is non-zero, even in the one-shot case.\n\n4) We agree that the concept transfer is limited to very few parameters in our experiments. Our experiments on CIFAR (e.g., Figure 10) show that there is no advantage, on this dataset, for using a more structured model, such as a mixture of Gaussians. However, the framework we present is general and allows for more elaborate probabilistic models leading to more ambitious concept transfer. Models such as the Gaussian latent feature model presented in Section 5.1 of [Griffiths, Ghahramani 2011] could be considered. Our results are a first step in this direction, and we see this as an exciting direction of future work for datasets with a higher number of classes.\n\n5) We achieve state-of-the-art results by a large margin over competing methods, and the success of our approach should be measured against the current reference methods in the literature. Previous state of the art papers only compare to nearest neighbours with a very shallow network and, in particular, never compare to logistic regression, which is a much stronger baseline. Our approach is orthogonal to previous methods. On the considered datasets, learning deep features already helps the baselines (NN and LRCV) to beat other state of the art methods. We aimed to convey that our choice of probabilistic model is closely related to logistic regression, which we do not deem a disadvantage. Indeed, one message of the paper is that something as simple as logistic regression can beat all current methods, which has important implications for k-shot learning problems and how they should be tackled. We do not increase in accuracy substantially, but we get better calibration, which is often desirable (e.g., situations where making mistakes is expensive, such as self driving cars). Moreover, the results clearly show that using standard cross-validation leads to worse accuracy and much worse calibration, and, in particular, is not possible for 1-shot learning. In this regard, logistic regression using the weight variance as regularization is a proposed method, and not a baseline, which we will stress in the main text. \n\n6) Thank you for pointing out that this section is not clear enough; we are happy to improve it. Could you please elaborate briefly on the aspects that are unclear?", "Thank you very much for your review; we reply to it below. Please also read the general reply above.\n\nad probabilistic model section) Thank you for the comment, we will incorporate this in the text.\n\nad transfer and dimension of parameters) This is correct, we will make it clearer in the text.\n\nad uncertainty of parameters) No, in our case \\mu_MAP and \\Sigma_MAP correspond to point estimates, which do not carry uncertainty; however, our framework also allows for other inference methods, such as MCMC sampling or variational inference, which do not require point estimates and use samples from the entire distribution.\n\nad miscalibration) The problem of the miscalibration of neural networks is well known and has, for example, been analysed in [Guo et al. 2017]. We are not immune to this shortcoming of deep classifiers; however, we show that we are calibrated better than other methods.", "We thank the reviewers for their insightful comments. We will address them point by point. 
However, it seems that we did not manage to convey the simple but important findings of our work and we would like to emphasise them again: The field of k-shot learning has received significant attention in the last years, and many benchmarks use image datasets such as miniimagenet, cifar100, or omniglot. The most prominent methods so far are based on episodic training, which is believed to be necessary for performing well on these k-shot learning task. In our opinion, this leads to slow and overly complicated training procedures. Our work suggests that these complications are not necessary to tackle few shot learning, and that a simple baseline based on deep features generalises surprisingly well and beats episodic training approaches. Previous state-of-the-art papers only compare to nearest neighbours with a very shallow network and, in particular, never compare to logistic regression, which is a much stronger baseline. In our opinion, this observation together with careful analysis of different models can influence the direction of the field moving forward.\nSome of the concerns are regarding the simplicity of the Gaussian model. We argue that this simple probabilistic model performs so well compared to more complex models due to the low number of training classes in the studied datasets. However, our method is more general, and applying more complex variants to datasets with a large number of classes is an exciting and promising direction for future research. In order to illustrate the performance of our method and to compare to other methods, we chose to consider miniImagenet, which has become a de-facto standard." ]
[ 5, 5, 5, -1, -1, -1, -1 ]
[ 3, 3, 3, -1, -1, -1, -1 ]
[ "iclr_2018_r1DPFCyA-", "iclr_2018_r1DPFCyA-", "iclr_2018_r1DPFCyA-", "SknsYOMZf", "SyJRNAKeG", "rJ60euDeG", "iclr_2018_r1DPFCyA-" ]
iclr_2018_rkeZRGbRW
Variance Regularizing Adversarial Learning
We study how, in generative adversarial networks, variance in the discriminator's output affects the generator's ability to learn the data distribution. In particular, we contrast the results from various well-known techniques for training GANs when the discriminator is near-optimal and updated multiple times per update to the generator. As an alternative, we propose an additional method to train GANs by explicitly modeling the discriminator's output as a bi-modal Gaussian distribution over the real/fake indicator variables. In order to do this, we train the Gaussian classifier to match the target bi-modal distribution implicitly through meta-adversarial training. We observe that our new method, when trained together with a strong discriminator, provides meaningful, non-vanishing gradients.
rejected-papers
The reviewers found a number of shortcomings in this work that would prevent it from being accepted at ICLR in its current form, in terms of writing (not specifying the loss function), experiments that are too limited, and inconclusive comparisons with existing regularization techniques. I recommend the authors take into account the feedback from reviewers in any follow-up submissions.
train
[ "HyZBE0IlM", "ByjCp4qgM", "HkxvlUclG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper studies how the variance of the discriminator affect the gradient signal provided to the generator and therefore how it might limit its ability to learn the true data distribution.\n\nThe approach suggested in this paper models the output of the discriminator using a mixture of two Gaussians (one for “fake” and the other for “not fake”). This seems like a rather crude approximation as the distribution of each “class” is likely to be multimodal. Can the authors comment on this? Could they extend their approach to use a mixture of multimodal distributions?\n\nThe paper mentions that fixing the means of the distribution can be “problematic during optimization as the discriminator’s goal is to maximize the difference between these two means.“. This relates to my previous comment where the distribution might not be unimodal. In this case, shifting the mean doesn’t seem to be a good solution and might just yield to oscillations between different modes. Can you please comment on this?\n\nMode collapse: Can you comment on the behavior of your approach w.r.t. to mode collapse?\n\nImplementation details: How is the mean of the two Gaussians initialized? \n\nRelation to instance noise and regularization techniques: Instance noise is a common trick being used to train GANs, see e.g. http://www.inference.vc/instance-noise-a-trick-for-stabilising-gan-training/\nThis also relates to some regularization techniques, e.g. Roth et al., 2017 that provides a regularizer that amounts to convolving the densities with white Gaussian noise. Can you please elaborate on the potential advantages of the proposed solution over these existing techniques?\n\nComparison to existing baselines: Given that the paper addresses the stability problem, I would expect some empirical comparison to at least one or two of the stability methods cited in the introduction, e.g. Gulrajani et al., 2017 or Roth et al., 2017.\n\nRelation to Kernel MMD: Can the authors elaborate on how their method relates to approaches that replace the discriminator with MMD nets. e.g.\n- Training generative neural networks via Maximum Mean Discrepancy optimization, Dziugaite et al\n- Generative models and model criticism via optimized maximum mean discrepancy, Sutherland et al\nMore explicitly, the variance in these methods can be controlled via the bandwidth of the kernel and I therefore wonder what would one use a simple mixture of Gaussians instead?\n", "The paper proposes variance regularizing adversarial learning (VRAL), a new method for training GANs.\n\nThe motivation is to ensure that the gradient for the generator does not vanish. The authors propose to use a discriminator whose output targets a mixture of two Gaussians (one component each for real and fake data). The means and variances are fixed so that the discriminator does not overfit, which ensures that the generator learning is not hindered. \n\nThe discriminator itself is trained through two additional meta-discriminators (!) Are the meta-discriminators really necessary? Have you tried matching moments or using other methods for comparing the distributions?\n\nIt would be useful to write down the actual loss function so that it's easier to compare with other GAN variants. In particular, I'm curious to understand the difference between VRAL and Fisher-GAN. 
The authors discuss this in the end of Section 3, but a more careful comparison is needed.\n\nThe experimental results are pretty limited and lack detailed quantitative evaluation, which makes it harder to compare the performance of the proposed variant to existing algorithms.\n\nOverall, I think that the idea is interesting, but the paper needs more work and does not meet the ICLR acceptance bar.\n\nFYI, another concurrent submission showed that gradient penalties stabilize training of GANs:\nMANY PATHS TO EQUILIBRIUM: GANS DO NOT NEED TO DECREASE A DIVERGENCE AT EVERY STEP\nhttps://openreview.net/pdf?id=ByQpn1ZA-", "The authors provided empirical analysis of different variants of GANs and proposed a regularization scheme to combat the vanishing gradient when the discriminator is well trained. \n\nMore specifically, the authors demonstrated the importance of intra-class variance in the discriminator’s output. Methods whose discriminators tend to map inputs of a class to single real values are unable to provide a reliable learning signal for the generator, such as the standard GAN and Least Squares GAN. Variance in the discriminator’s output is essential to allow the generator to learn in the presence of a well-trained discriminator. To ensure the discriminator’s output follows the mixture of two univariate Gaussians, the authors proposed to add two additional discriminators which are trained in a similar was as the original GAN formulation. The technique is related to Linear Discriminant Analysis. From a broader perspective, the new meta-adversarial learning can be applied to ensure various desirable properties in GANs.\n\nThe performance of variance regularization scheme was evaluated on the CIFAR-10 and CelebA data.\n\nSummary:\n——\nI think the paper discusses a very interesting topic and presents an interesting direction for training the GANs. A few points are missing which would provide significantly more value to readers. See comments below for details and other points.\n\nComments:\n——\n1.\tWhy would a bi-modal distribution be meaningful? Deep nets implicitly transform the data which is probably much more effective than using complex bi-modal Gaussian distribution; the bi-modal concept can likely be captured using classical techniques.\n\n2.\tOn page 4, in Eq. (8) and (9), it remains unclear what $\\mathcal{R}$ and $\\mathcal{F}$ really are beyond two-layer MLPs; are the results of those two-layer MLPs used as the mean of a Gaussian distribution, i.e., $\\mu_r$ and $\\mu_f$?\n\n3.\tRegarding the description above Eq. (12), what is really used in practice, i.e., in the experiments? The paper omits many details that seem important for understanding. Could the authors provide more details on choosing the generator loss function and why Eq. (12) provides satisfying results in practice? \n\nMinor Comments:\n——\n1.\tIn Sec 2.1, the sentence needs to be corrected: “As shown in Arjovsky & Bottou (2017), the JS divergence will be flat everywhere important if P and Q both lie on low-dimensional manifolds (as is likely the case with real data) and do not prefectly align.”\n\n2.\tLast sentence in Conclusion: “which can be applied to ensure enforce various desirable properties in GANs.” Please remove either “ensure” or “enforce.”\n" ]
[ 5, 4, 6 ]
[ 4, 4, 3 ]
[ "iclr_2018_rkeZRGbRW", "iclr_2018_rkeZRGbRW", "iclr_2018_rkeZRGbRW" ]
iclr_2018_H1BO9M-0Z
Lifelong Word Embedding via Meta-Learning
Learning high-quality word embeddings is of significant importance in achieving better performance in many downstream learning tasks. On one hand, traditional word embeddings are trained on a large scale corpus for general-purpose tasks, which are often sub-optimal for many domain-specific tasks. On the other hand, many domain-specific tasks do not have a large enough domain corpus to obtain high-quality embeddings. We observe that domains are not isolated and a small domain corpus can leverage the learned knowledge from many past domains to augment that corpus in order to generate high-quality embeddings. In this paper, we formulate the learning of word embeddings as a lifelong learning process. Given knowledge learned from many previous domains and a small new domain corpus, the proposed method can effectively generate new domain embeddings by leveraging a simple but effective algorithm and a meta-learner, where the meta-learner is able to provide word context similarity information at the domain level. Experimental results demonstrate that the proposed method can effectively learn new domain embeddings from a small corpus and past domain knowledge\footnote{We will release the code after final revisions.}. We also demonstrate that general-purpose embeddings trained from a large scale corpus are sub-optimal in domain-specific tasks.
rejected-papers
While the problem of learning word embeddings for a new domain is important, the proposed method was found to be unclearly presented and missing a number of important baselines. The reviewers found the technical contribution to be of only limited value.
val
[ "HyyI-JYlM", "SktcRWIgf", "HJ8Q-8deM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper presents a lifelong learning method for learning word embeddings. Given a new domain of interest, the method leverages previously seen domains in order to hopefully generate better embeddings compared to ones computed over just the new domain, or standard pre-trained embeddings.\n\nThe general problem space here -- how to leverage embeddings across several domains in order to improve performance in a given domain -- is important and relevant to ICLR. However, this submission needs to be improved in terms of clarity and its experiments.\n\nIn terms of clarity, the paper has a large number of typos (I list a few at the end of this review) and more significantly, at several points in the paper is hard to tell what exactly was done and why. When presenting algorithms, starting with an English description of the high-level goal and steps of the algorithm would be helpful. What are the inputs and outputs of the meta-learner, and how will it be used to obtain embeddings for the new domain? The paper states the purpose of the meta learning is \"to learn a general word context similarity from the first m domains\", but I was never sure what this meant. Further, some of the paper's pseudocode includes unexplained steps like \"invert by domain index\" and \"scanco-occurrence\". \n\nIn terms of the experiments, the paper is missing some important baselines that would help us understand how well the approach works. First, besides the GloVe common crawl embeddings used here, there are several other embedding sets (including the other GloVe embeddings released along with the ones used here, and the Google News word2vec embeddings) that should be considered. Also, the paper never considers concatenations of large pre-trained embedding sets with each other and/or with the new domain corpus -- such concatenations often give a big boost to accuracy, see :\n\"Think Globally, Embed Locally—Locally Linear Meta-embedding of Words\", Bollegala et al., 2017\nhttps://arxiv.org/pdf/1709.06671.pdf\n\nThat paper is not peer reviewed to my knowledge so it is not necessary to compare against the new methods introduced there, but their baselines of concatenation of pre-trained embedding sets should be compared against in the submission.\n\nBeyond trying other embeddings, the paper should also compare against simpler combination approaches, including simpler variants of its own approach. What if we just selected the one past domain that was most similar to the new domain, by some measure? And how does the performance of the technique depend on the setting of m? Investigating some of these questions would help us understand how well the approach works and in which settings.\n\nMinor:\n\nSecond paragraph, GloVec should be GloVe\n\n\"given many domains with uncertain noise for the new domain\" -- not clear what \"uncertain noise\" means, perhaps \"uncertain relevance\" would be more clear\n\nThe text refers to a Figure 3 which does not exist, probably means Figure 2. 
I didn't understand the need for both figures, Figure 1 is almost contained within Figure 2\n\nWhen m is introduced, it would help to say that m < n and justify why dividing the n domains into two chunks (of m and n-m domains) is necessary.\n\n\"from the first m domain corpus\" -> \"from the first m domains\"?\n\n\"may not helpful\" -> \"may not be helpful\"\n\n\"vocabularie\" -> \"vocabulary\"\n\n\"system first retrieval\" -> \"system first retrieves\"\n\nCOMMENTS ON REVISIONS: I appreciate the authors including the new experiments against concatenation baselines. The concatenation does fairly comparably to LL in Tables 3&4. LL wins by a bit more in Table 2. Given these somewhat close/inconsistent wins, it would help the paper to include an explanation of why and under what conditions the LL approach will outperform concatenation.", "Summary:\nThis paper proposes an approach to learn embeddings in new domains by leveraging the embeddings from other domains in an incremental fashion. The proposed approach will be useful when the new domain does not have enough data available. The baselines chosen are 1). no embeddings 2). generic embeddings from english wiki, common crawl and combining data from previous and new domains. Empirical performance is shown on 3 downstream tasks: Product-type classification, Sentiment Classification and Aspect Extraction. The proposed embeddings just barely beat the baseline on product classification and sentiment classification, but significantly beat them on aspect extraction task.\n\n\nComments:\n\nThe paper puts itself nicely in context of the previous work and the addressed problem of learning word embeddings for new domain in the absence of enough data is an important one that needs to be addressed. There is reasonable novelty in the proposed method compared to the existing literature. But, I was a little disappointed by the paper as several details of the model were unclear to me and the paper's writing could definitely be improved to make things clearer. \n\n1). In the \"Meta-learner\" section 4.1, the authors talk about word features (u{_w_{i,j,k}},u{_w_{i,j',k}}). It is unclear what these word features are. Are they one-hot encodings or embeddings or something else? It would really help if the paper gave some expository examples.\n\n2). In Algorithm 1, how do you deal with vocabulary items in the new domain that do not exist in the previous domains i.e. when the intersection of V_i and V_{n+1} is the null set. This is very important because the main appeal of this work is its applicability to new domains with scarce data which have far fewer words and hence the above scenario is more likely to happen.\n\n3). The results in Table 3 are a little confusing. Why do the lifelong word embeddings relatively perform far worse on precision but significantly better on recall compared to the baselines? What is driving those difference in results?\n\n4). Typos: In Section 3, \"...is depicted in Figure 1 and Figure 3\". I think you mean \"Figure 1 and Figure 2\" as there is no Figure 3. \n", "In this paper, the authors proposed to learn word embedding for the target domain in the lifelong learning manner. The basic idea is to learn a so-call meter learner to measure similarities of the same words between the target domain and the source domains for help learning word embedding for the target domain with a small corpus. \n\nOverall, the descriptions of the proposed model (Section 3 - Section 5) are hard to follow. 
This is not because the proposed model is technically difficult to understand. On the contrary, the model is heuristic and simple, but the descriptions are unclear. Section 3 is supposed to give an overview and high-level introduction of the whole model using Figure 1 and Figure 2 (not Figure 3 mentioned in the text). However, after reading Section 3, I do not catch any useful information about the proposed model except for knowing that a so-called meta learner is used. Section 4 and Section 5 are supposed to give details of different components of the proposed model and explain the motivations. However, descriptions in these two sections are very confusing, e.g., many symbols in Algorithm 1 are presented without any descriptions. Moreover, the motivations behind the proposed methods for different components are missing. Also, a lot of typos make the descriptions more difficult to follow, e.g., \"may not helpful or even harmful\", '\"Figure 3\", \"we show this Section 6\", \"large size a vocabulary\", etc.\n\nAnother major concern is that the technical contributions of the proposed model are quite limited. The only technical contributions are (4) and the way to construct the co-occurrence information A. However, such contributions are quite minor, and technically heuristic. Moreover, regarding the aggregation layer in the pairwise network, it is similar to feature engineering. In this case, why not just train a flat classifier, like logistic regression, with rich feature engineering, instead of using a neural network?\n\nRegarding experiments, one straightforward baseline is missing. As n domains are supposed to be given in advance before the n+1 domain (target domain) comes, one can use multi-domain learning approaches with ensemble learning techniques to learn word embedding for the target domain. For instance, one can learn n pairwise (1 out of n sources + the target) cross-domain word embeddings, and combine them using the similarity between each source and the target as the weight." ]
[ 4, 5, 3 ]
[ 4, 4, 4 ]
[ "iclr_2018_H1BO9M-0Z", "iclr_2018_H1BO9M-0Z", "iclr_2018_H1BO9M-0Z" ]
iclr_2018_BJB7fkWR-
Domain Adaptation for Deep Reinforcement Learning in Visually Distinct Games
Many deep reinforcement learning approaches use graphical state representations, which means visually distinct games that share the same underlying structure cannot effectively share knowledge. This paper outlines a new approach for learning underlying game state embeddings irrespective of the visual rendering of the game state. We utilise approaches from multi-task learning and domain adaptation in order to place visually distinct game states on a shared embedding manifold. We present our results in the context of deep reinforcement learning agents.
rejected-papers
The reviewers have found that while the task of visual domain adaptation is meaningful to explore and improve, the proposed method is not sufficiently well-motivated, explained or empirically tested.
train
[ "H1xFygmyz", "SyZl4CKeM", "BJ5RWXilM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "In this paper, the authors propose a new approach for learning underlying structure of visually distinct games.\n\nThe proposed approach combines convolutional layers for processing input images, Asynchronous Advantage Actor Critic for deep reinforcement learning task and adversarial approach to force the embedding representation to be independent of the visual representation of games. \n\nThe network architecture is suitably described and seems reasonable to learn simultaneously similar games, which are visually distinct. However, the authors do not explain how this architecture can be used to do the domain adaptation. \nIndeed, if some games have been learnt by the proposed algorithm, the authors do not precise what modules have to be retrained to learn a new game. This is a critical issue, because the experiments show that there is no gain in terms of performance to learn a shared embedding manifold (see DA-DRL versus baseline in figure 5).\nIf there is a gain to learn a shared embedding manifold, which is plausible, this gain should be evaluated between a baseline, that learns separately the games, and an algorithm, that learns incrementally the games. \nMoreover, in the experimental setting, the games are not similar but simply the same.\n\nMy opinion is that this paper is not ready for publication. The interesting issues are referred to future works.\n", "This paper introduces a method to learn a policy on visually different but otherwise identical games. While the idea would be interesting in general, unfortunately the experiment section is very much toy example so that it is hard to know the applicability of the proposed approach to any more reasonable scenario. Any sort of remotely convincing experiment is left to 'future work'.\n\nThe experimental setup is 4x4 grid world with different basic shape or grey level rendering. I am quite convinced that any somewhat correctly setup vanilla deep RL algorithm would solve these sort of tasks/ ensemble of tasks almost instantly out of the box.\n\nFigure 5: Looks to me like the baseline is actually doing much better than the proposed methods?\n\nFigure 6: Looking at those 2D PCAs, I am not sure any of those method really abstracts the rendering away. Anyway, it would be good to have a quantified metric on this, which is not just eyeballing PCA scatter plots.", "- This paper discusses an agent architecture which uses a shared representation to train multiple tasks with different sprite level visual statistics. The key idea is that the agent learns a shared representations for tasks with different visual statistics\n\n- A lot of important references touching on very similar ideas are missing. For e.g. \"Unsupervised Pixel-level Domain Adaptation with Generative Adversarial Networks\", \"Using Simulation and Domain Adaptation to Improve Efficiency of Deep Robotic Grasping\", \"Schema Networks: Zero-shot Transfer with a Generative Causal Model of Intuitive Physics\". \n\n- This paper has a lot of orthogonal details. For instance sec 2.1 reviews the history of games and AI, which is besides the key point and does not provide any literary context. \n\n- Only single runs for the results are shown in plots. How statistically valid are the results?\n\n- In the last section authors mention the intent to do future work on atari and other env. Given that this general idea has been discussed in the literature several times, it seems imperative to at least scale up the experiments before the paper is ready for publication" ]
[ 3, 2, 4 ]
[ 3, 4, 5 ]
[ "iclr_2018_BJB7fkWR-", "iclr_2018_BJB7fkWR-", "iclr_2018_BJB7fkWR-" ]
iclr_2018_B1suU-bAW
Learning Covariate-Specific Embeddings with Tensor Decompositions
Word embedding is a useful approach to capture co-occurrence structures in a large corpus of text. In addition to the text data itself, we often have additional covariates associated with individual documents in the corpus---e.g. the demographic of the author, time and venue of publication, etc.---and we would like the embedding to naturally capture the information of the covariates. In this paper, we propose a new tensor decomposition model for word embeddings with covariates. Our model jointly learns a \emph{base} embedding for all the words as well as a weighted diagonal transformation to model how each covariate modifies the base embedding. To obtain the specific embedding for a particular author or venue, for example, we can then simply multiply the base embedding by the transformation matrix associated with that author or venue. The main advantages of our approach are data efficiency and interpretability of the covariate transformation matrix. Our experiments demonstrate that our joint model learns substantially better embeddings conditioned on each covariate compared to the standard approach of learning a separate embedding for each covariate using only the relevant subset of data. Furthermore, our model encourages the embeddings to be ``topic-aligned'' in the sense that the dimensions have specific independent meanings. This allows our covariate-specific embeddings to be compared by topic, enabling downstream differential analysis. We empirically evaluate the benefits of our algorithm on several datasets, and demonstrate how it can be used to address many natural questions about the effects of covariates.
rejected-papers
The reviewers agree that this paper provides a sensible mechanism for producing word embeddings that exploit correlating features in the data (e.g. texts written by the same author), but point to other work doing the same thing. The lack of direct comparison in the experimental section is troublesome, although it is entirely possible the authors were not aware of related work. Unfortunately, the lack of an author response to the reviews makes it hard to see the argument in defense of this paper, and I must recommend rejection.
train
[ "HkqIGZGlG", "HkIjbYOxz", "SkNrPRFgM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper produces word embedding tensors where the third order gives covariate information, via venue or author. The model is simple: tensor factorization, where the covariate can be viewed as warping the cosine distance to favor that covariate's more commonly cooccuring vocabulary (e.g. trump on hillary and crooked)\n\n\nThere is a nice variety of authors and words, though I question if even with all those books, the corpus is big enough to produce meaningful vectors. From my own experience, even if I spend several hours copy-pasting from project gutenberg, it is not enough for even good matrix factorization embeddings, much less tensor embeddings. It is hard to believe that meaningful results are achieved using such a small dataset with random initialization. \n\nI think table 5 is also a bit strange. If the rank is > 1000 I wonder how meaningful it actually is. For the usual analogies task, you can usually find what you are looking for in the top 5 or less. \n\nIt seems that table 1 is the only evaluation of the proposed method against any other type of method (glove, which is not a tensor-based method). I think this is not sufficient. \n\nOverall, I believe the idea is nice, and the initial analysis is good, but I think the evaluation, especially against other methods, needs to be stronger. Methods like neelakantan et al's multisense embedding, for example, which the work cites, can be used in some of these evaluations, specifically on those where covariate information clearly contributes (like contextual tasks). The addition of one or two tables with either a standard task against reported results or created tasks against downloadable contextual / tensor embeddings would be enough for me to change my vote. ", "The authors present a method for learning word embeddings from related groups of data. The model is based on tensor factorization which extends GloVe to higher order co-ocurrence tensors, where the co-ocurrence is of words within subgroups of the text data. These two papers need to be cited:\n\nRudolph et al., NIPS 2017, \"Sturctured Embedding Models for Grouped Data\": This paper also presents a method for learning embeddings specific for subgroups of the data, but based on hierarchical modeling. An experimental comparison is needed.\n\nCotterell et al., EACL 2017 \"Explaining and Generalizing Skip-Gram through Exponential Family Principal Component Analysis\": This paper also derives a tensor factorization based approach for learning word embeddings for different covariates. Here the covariates are morphological tags such as part-of-speech tags of the words.\n\nDue to these two citations, the novelty of both the problem set-up of learning different embeddings for each covariate and the novelty of the tensor factorization based model are limited.\n\nThe writing is ok. I appreciated the set-up of the introduction with the two questions. However, the questions themselves could have been formulated differently: \nQ1: the way Q1 is formulated makes it sound like the covariates could be both discrete and continuous while the method presented later in the paper is only for discrete covariates (i.e. group structure of the data).\nQ2: The authors mention topic alignment without specifying what the topics are aligned to. It would be clearer if they stated explicitly that the alignment is between covariate-specific embeddings. 
It is also distracting that they call the embedding dimensions topics.\nAlso, why highlight the problem of authorship attribution of Shakespear's work in the introduction, if that problem is not addressed later on?\n\nIn the model section, the paragraphs \"notation\" and \"objective function and discussion\" are clear. I also liked the idea of having the section \"A geometric view of embeddings and tensor decomposition\", but that section needs to be improved. For example, the authors describe RandWalk (Arora et al. 2016) but how their work falls into that framework is unclear.\nIn the third paragraph, starting with \"Therefore we consider a natural extension of this model, ...\" it is unclear which model the authors are referring to. (RandWalk or their tensor factorization?).\nWhat are the context vectors in Figure 1? I am guessing the random walk transitions are the ellipsoids? How are they to be interpreted? \n\nIn the last paragraph, beginning with \"Note that this is essentially saying...\", I don't agree with the argument that the \"base embeddings\" decompose into independent topics. The dimensions of the base embeddings are some kind of latent attributes and each individual dimension could be used by the model to capture a variety of attributes. There is nothing that prevents the model from using multiple dimensions to capture related structure of the data. Also, the qualitative results in Table 3 do not convince me that the embedding dimensions represent topics. For example \"horses\" has highest value in embedding dimension 99. It's nearest neighbours in the embedding space (i.e. semantically similar words) will also have high values in coordinate 99. Hence, the apparent semantic coherence in what the authors call \"topics\".\n\nThe authors present multiple qualitative and quantitative evaluations. The clustering by weight (4.1.) is nice and convincing that the model learns something useful. 4.2, the only quantitative analysis was missing some details. Please give references for the evaluation metrics used, for proper credit and so people can look up these tasks. Also, comparison needed to fitting GloVe on the entire corpus (without covariates) and existing methods Rudolph et al. 2017 and Cotterell et al. 2017. \nSection 5.2 was nice and so was 5.3. However, for the covariate specific analogies (5.3.) the authors could also analyze word similarities without the analogy component and probably see similar qualitative results. 
Specifically, they could analyze for a set of query words, what the most similar words are in the embeddings obtained from different subsections of the data.\n\nPROS:\n+ nice tensor factorization model for learning word embeddings specific to discrete covariates.\n+ the tensor factorization set-up ensures that the embedding dimensions are aligned \n+ clustering by weights (4.1) is useful and seems coherent\n+ covariate-specific analogies are a creative analysis\n\nCONS:\n- problem set-up not novel and existing approach not cited (experimental comparison needed)\n- interpretation of embedding dimensions as topics not convincing\n- connection to Rand-Walk (Aurora 2016) not stated precisely enough\n- quantitative results (Table 1) too little detail:\n * why is this metric appropriate?\n * comparison to GloVe on the entire corpus (not covariate specific)\n * no reference for the metrics used (AP, BLESS, etc.?)\n- covariate specific analogies presented confusingly and similar but simpler analysis might be possible by looking at variance in neighbours v_b and v_d without involving v_a and v_c (i.e. don't talk about analogies but about similarities)", "This paper presents an embedding algorithm for text corpora that allows known\ncovariates, e.g. author information, to modify a shared embedding to take context\ninto account. The method is an extension of the GloVe method and in the case of\na single covariate value the proposed method reduces to GloVe. The covariate-dependent\nembeddings are diagonal scalings of the shared embedding. The authors demonstrate\nthe method on a corpus of books by various authors and on a corpus of subreddits.\nThough not technically difficult, the extension of GloVe to covariate-dependent\nembeddings is very interesting and well motivated. Some of the experimental results\ndo a good job of demonstrating the advantages of the models. However, some of the\nexperiments are not obvious that the model is really doing a good job.\n\nI have some small qualms with the presentation of the method. First, using the term\n\"size m\" for the number of values that the covariate can take is a bit misleading.\nUsually the size of a covariate would be the dimensionality. These would be the same\nif the covariate is one hot coded, however, this isn't obvious in the paper right now.\nAdditionally, v_i and c_k live in R^d, however, it's not really explained what\n'd' is, is it the number of 'topics', or something else? Additionally, the functional\nform chosen for f() in the objective was chosen to match previous work but with no\nexplanation as to why that's a reasonable form to choose. Finally, the authors\nsay toward the end of Section 2 that \"A careful comparision shows that this\napproximation is precisely that which is implied by equation 4, as desired\". This is\ncryptic, just show us that this is the case.\n\nRegarding the experiments there needs to be more discussion about how the\ndifferent model parameters were determined. The authors say \"... and after tuning\nour algorithm to emged this dataset, ...\", but this isn't enough. What type of\ntuning did you do to choose in particular the latent dimensionality and the\nlearning rate? I will detail concerns for the specific experiments below.\n\nSection 4.1:\n- How does held-out data fit into the plot?\n\nSection 4.2:\n- For the second embedding, what exactly was the algorithm trained on? Just the\n book, or the whole corpus?\n- What is the reader supposed to take away from Table 1? Are higher or lower\n values better? 
Maybe highlight the best scores for each column.\n\n\nSection 4.3:\n- Many of these distributions don't look sparse.\n- There is a terminology problem in this section. Coordinates in a vector are\n not sparse, the vector itself is sparse if there are many zeros, but\n coordinates are either zero or not zero. The authors' use of 'sparse' when\n they mean 'zero' is really confusing.\n- Due to the weird sparsity terminology Table 1 is very confusing. Based on how\n the authors use 'sparse' I think that Table 1 shows the fraction of zeros in\n the learned embedding vectors. But if so, then these vectors aren't sparse at all\n as most values are non-zero.\n\nSection 5.1:\n- I don't agree with the authors that the topics in Table 3 are interpretable.\n As such, I think it's a reach to claim the model is learning interpretable topics.\n This isn't necessarily a problem, it's fine for models to not do everything well,\n but it's a stretch for the authors to claim that these results are a positive\n aspect of the model. The results in Section 5.2 seem to make a lot of sense and\n show the big contribution of the model.\n\nSection 5.3:\n- What is the \"a : b :: c : d\" notation?\n" ]
[ 5, 5, 5 ]
[ 3, 5, 4 ]
[ "iclr_2018_B1suU-bAW", "iclr_2018_B1suU-bAW", "iclr_2018_B1suU-bAW" ]
iclr_2018_S16FPMgRZ
Tensor Contraction & Regression Networks
Convolutional neural networks typically consist of many convolutional layers followed by several fully-connected layers. While convolutional layers map between high-order activation tensors, the fully-connected layers operate on flattened activation vectors. Despite its success, this approach has notable drawbacks. Flattening discards the multi-dimensional structure of the activations, and the fully-connected layers require a large number of parameters. We present two new techniques to address these problems. First, we introduce tensor contraction layers which can replace the ordinary fully-connected layers in a neural network. Second, we introduce tensor regression layers, which express the output of a neural network as a low-rank multi-linear mapping from a high-order activation tensor to the softmax layer. Both the contraction and regression weights are learned end-to-end by backpropagation. By imposing low rank on both, we use significantly fewer parameters. Experiments on the ImageNet dataset show that, applied to the popular VGG and ResNet architectures, our methods significantly reduce the number of parameters in the fully-connected layers (about 65% space savings) while negligibly impacting accuracy.
rejected-papers
This paper proposes methods for replacing parts of neural networks with tensors, the values of which are efficiently estimated through factorisation methods. The paper is well written and clear, but the two main objections from reviewers surround the novelty and evaluation of the method proposed. I am conscious that the authors have responded to reviewers on the topic of novelty, but the case could be made more strongly in the paper, perhaps by showing significant improvements over alternatives. The evaluation was considered weak by reviewers, in particular due to the lack of comparable baselines. Interesting work, but I'm afraid on the basis of the reviews, I must recommend rejection.
test
[ "SJ3JSwBlG", "rysUKSIxM", "Byz0IGvgz", "SJl_QRsmM", "rJP6M0oQz", "ry1Kf0j7G", "ryINzRoXz", "SyZphSqeM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "public" ]
[ "In this paper, new layer architectures of neural networks using a low-rank representation of tensors are proposed. The main idea is assuming Tucker-type low-rank assumption for both a weight and an input. The performance is evaluated with toy data and Imagenet.\n\n[Clarity]\nThe paper is well written and easy to follow.\n\n[Originality]\nI mainly concern about the originality. Applying low-rank tensor decomposition in a network architecture has a lot of past studies and I feel this paper fails to clarify what is really distinguished from the other studies. For example, I found at least two papers [1,2] that are relevant. ([2] appears at the reference but it is not referred to.) How is the proposed method different from them?\n\nAlso, the \"end-to-end\" feature is repeatedly emphasized in the paper, but I don't understand its benefit. \n\n[1] Tai, Cheng, et al. \"Convolutional neural networks with low-rank regularization.\" arXiv preprint arXiv:1511.06067 (2015).\n[2] Lebedev, Vadim, et al. \"Speeding-up convolutional neural networks using fine-tuned cp-decomposition.\" arXiv preprint arXiv:1412.6553 (2014).\n\n[Significance]\nIn the experiments, the proposed method is compared with the vanilla model (i.e., the model having no low-rank structure) but with no other baseline using different compression techniques such as Novikov et al., 2015. So I cannot judge whether this method is better in terms of compression-accuracy tradeoff.\n\n\nPros:\n- The proposed model (layer architecture) is simple and easy to implement\n\nCons:\n- The novelty is low\n- No competitive baseline in experiments\n", "This paper incorporates tensor decomposition and tensor regression into CNN by replacing its flattening operations and fully-connected layers with a new tensor regression layer. \n\nPros:\n\nThe low-rank representation of tensors is able to reduce the model complexity in the original CNN without sacrificing much prediction accuracy. This is promising as it enables the implementation of complex deep learning algorithms on mobile devices due to its huge space saving performance. Overall, this paper is easy to follow. \n\nCons: \n\nQ1: Can the authors discuss the computational time of the proposed tensor regression layers and compare it to that of the baseline CNN? The tensor regression layer is computationally more expensive than the flattening operations in original CNN. Usually, it also involves expensive model selection procedure to choose the tuning parameters (N+1 ranks and a L2 norm sparsity parameter). In the experiments, the authors simply tried a few ranks without serious tuning. \n\nQ2: The authors reported the space saving in Table 1 but not in Table 2. Since spacing saving is a major contribution of the proposed method, can authors add the space saving percentage in Table 2?\n\nQ3: There are a few typos in the current paper. I would suggest the authors to take a careful proofreading. For example,\n\n(1) In the “Related work“ paragraph on page 2, “Lebedev et al. (2014) proposes…” should be “Lebedev et al. (2014) propose…”. Many other references have the same issue. \n\n(2) In Figure 1, the letter $X$ should be $\\tilde{\\cal X}$.\n\n(3) In expression (5) on page 3, the core tensor is denoted by $\\tilde{\\cal G}$. Is this the same as $\\tilde{\\cal X}^{‘}$ in Figure 1?\n\n(4) In expression (5) on page 3, the core tensor $\\tilde{\\cal G}$ is of dimension $(D_0, R_1, \\ldots, R_N)$. 
However, in expression (8) on page 5, $\\tilde{\\cal G}$ is of dimension $(R_0, R_1, \\ldots, R_N, R_{N+1})$.\n\n(5) Use \\cite{} and \\citep{} correctly. For example, in the “Related work“ paragraph on page 2,\n\n“Several prior papers address the power of tensor regression to preserve natural multi-modal structure and learn compact predictive models Guo et al. (2012); Rabusseau & Kadri (2016); Zhou et al. (2013); Yu & Liu (2016).”\n\nshould be\n\n“Several prior papers address the power of tensor regression to preserve natural multi-modal structure and learn compact predictive models (Guo et al., 2012; Rabusseau & Kadri, 2016; Zhou et al., 2013; Yu & Liu, 2016).”\n\n\n\n\n\n\n\n\n", "This paper combines the tensor contraction method and the tensor regression method and applies them to CNN. This paper is well written and easy to read. \n\nHowever, I cannot find a strong or unique contribution from this paper. Both of the methods (tensor contraction and tensor decomposition) are well developed in the existing studies, and combining these ideas does not seem non-trivial.\n\n--Main question\n\nWhy authors focus on the combination of the methods? Both of the two methods can perform independently. Is there a special synergy effect?\n\n--Minor question\n\nThe performance of the tensor contraction method depends on a size of tensors. Is there any effective way to determine the size of tensors?", "Thank you for your interest in our paper.\n\nRegarding the methods you mention:\n\n[3] (Non-linear Convolution Filters for CNN-based learning) is a method to augment existing architectures by exploring a combination of linear and non-linear filters in the convolutional layers.\n\n[2] (Factorized Bilinear Models for Image Recognition) is in the well studied field of bilinear models. Tensor Contraction can be seen as a generalisation of bilinear pooling to any arbitrary number of dimensions.\n \n[1] (Attribute-Enhanced Face Recognition with Neural Tensor Fusion) proposes a feature fusion method as a tensor optimisation problem. Specifically, it performs fusion from two feature vectors and the framework is therefore limited to a third order weight tensor with two vector inputs.\n\nOur work is significantly different from these:\nWe propose to preserve and leverage the tensor structure of the activations. We do so by introducing new generic, end-to-end trainable layers that allow large space savings while preserving the multi-dimensional structure. Specifically, we introduce Tensor Contraction Layers (TCL) that reduce the dimension of the input while preserving its multi-linear structure, and Tensor Regression Layer (TRL) that directly maps an input tensor to an output tensor using low-rank regression weights.\n", "We thank the reviewer for the feedback and address each point below:\n\nQ1: We show (Figure 5) that there is a large region where the rank can be decreased without impacting performance, making rank selection easy. In particular, we plot the evolution of the performance as a function of the rank. Please note that l2 normalization, which does not add extra parameters to tune (the parameters regularization is done via weight decay, as done in all state-of-the-art architectures, we kept the same parameters as in the original architectures).\n\nQ2: Table 2 corresponds to the overcomplete case (without pooling), it therefore didn’t make as much sense to mention the space savings since the corresponding architecture is not a standard used one. 
The main point of that experiment is to show that tensor contraction and regression can be used not only for low-rank problems but also over-defined ones (i.e. by leveraging the low-rank structure of the tensors we can optimise efficiently larger networks).\n\nQ3: Thank you for pointing these out, they have all been corrected:\n (1), (2) and (5) have been corrected\n (3): yes, they referred to the same, we have changed the notation to clarify this.\n (4): this notation has also been clarified. The idea is that we leave the first dimension (batch size) untouched in both the TCL and TRL. In the TCL, X’ corresponds to the output of the layer while in the TRL, it corresponds to the core of the regression weights and therefore does not include the batch size. We now denote X’ the output of the TCL, while G denotes the core of the tensor regression weights.\n", "We thank the reviewer for the feedback and offer some clarifications regarding the primary criticisms:\n\nThe two publications that you mention focus on re-parametrizing convolutional layers, with the main purpose of speeding these up:\n[1] (Convolutional neural networks with low-rank regularization) parametrizes each convolutional layers as the composition of two convolutional layers with less parameters.\n[2] (Speeding-up convolutional neural networks using fine-tuned cp-decomposition) is referred to in the related work and also focuses on speeding up convolutional layers. This is done by performing CP decomposition on the convolution kernel before fine-tuning the whole network.\n\nThese papers only focus on decomposing the weight tensors. By contrast, we propose to preserve the multi-linear structure of the activation throughout the network. In particular TCL focuses on contraction activation tensors. Additionally, those works focus on the convolutional kernels, where the number of parameters is already quite small, while our work is focused on eliminating the standard flattening and fully-connected layers\n\nNote that in the TCL, we do not assume a Tucker form of the activation tensors but rather apply tensor contraction to them to reduce their dimensionality. Similarly, TRL is a new layer that does not simply consist in assuming a Tucker form of the regression weight but directly maps an input tensor to an output tensor using low-rank regression tensor weights. It can be used to replace the flattening and fully-connected layers in traditional network architectures.\n\nOur contribution is the introduction of these two novel layers, trainable end-to-end using gradient backpropagation. Being able to these train end-to-end is crucial to be able to learn the whole network jointly (most existing tensor methods are solved analytically, and existing work on deep learning and tensor decomposition focuses mainly on pre-training a network, applying some sort of decomposition to the weights and fine-tuning. By training end-to-end we learn the whole network jointly).\n\nWe compare with a competitive, state-of-the-art architecture, (ResNet) and show that performance is maintained while showing large space savings. Note that there has been no such attempt in the past to compare with. As we mention in the related work, Novikov et. al. (2015) retain the flattening and fully connected layers for the output while we present an end-to-end tensorized architecture. 
They obtain space savings by applying tensor decomposition to the weights of some of the fully-connected layers, reshaped as tensors (which also means selecting both the size of the tensor to reshape to, in addition to the rank of the decomposition).\n", "We thank the reviewer for the feedback.\n\n1. Regarding the novelty of this work: To our knowledge, no previous papers propose incorporating either tensor contraction or regression as layers in deep neural networks. Instead these methods have previously been studied as stand-alone techniques, where they are solved analytically. Our main contribution is the introduction of these two novel layers, trainable end-to-end via gradient descent and the empirical finding that we can enjoy dramatic space savings with negligible loss in accuracy.\n\nIt is possible that the reviewer has encountered pre-printed versions of this paper. To preserve double-blindness, we won’t link to those drafts here. But we request that the reviewer be careful not to mistakenly hold against this work its previous inclusion in workshops and on the arXiv. \n\n2. Regarding the usefulness of studying the two in combination: We combine the two as they are naturally complementary methods. Tensor contraction reduces the dimensionality of the input tensor, this reduced tensor can then be mapped to an output tensor using tensor regression. \n\nAs shown in figure 5, the performance of the TRL is not very sensitive to the choice of the rank, making that selection easy.\n", "In what way is the proposed tensor contraction layer different from Tucker Decomposition for Feature Fusion in 'Attribute-Enhanced Face Recognition with Neural Tensor Fusion Networks'[1]? How does the tensor regression layer compare with the nonlinear activations in 'Factorized Bilinear Models for Image Recognition'[2] and 'Non-linear Convolution Filters for CNN-based Learning'[3]?\n\n[1] http://www.research.ed.ac.uk/portal/en/publications/attributeenhanced-face-recognition-with-neural-tensor-fusion-networks(b5f001b1-21c5-44e0-ad6f-21481e83590e).html\n\n[2] https://arxiv.org/abs/1611.05709\n\n[3] https://arxiv.org/abs/1708.07038" ]
[ 4, 6, 4, -1, -1, -1, -1, -1 ]
[ 4, 4, 3, -1, -1, -1, -1, -1 ]
[ "iclr_2018_S16FPMgRZ", "iclr_2018_S16FPMgRZ", "iclr_2018_S16FPMgRZ", "SyZphSqeM", "rysUKSIxM", "SJ3JSwBlG", "Byz0IGvgz", "iclr_2018_S16FPMgRZ" ]
iclr_2018_rkGZuJb0b
Compact Neural Networks based on the Multiscale Entanglement Renormalization Ansatz
The goal of this paper is to demonstrate a method for tensorizing neural networks based upon an efficient way of approximating scale invariant quantum states, the Multi-scale Entanglement Renormalization Ansatz (MERA). We employ MERA as a replacement for linear layers in a neural network and test this implementation on the CIFAR-10 dataset. The proposed method outperforms factorization using tensor trains, providing greater compression for the same level of accuracy and greater accuracy for the same level of compression. We demonstrate MERA-layers with 3900 times fewer parameters and a reduction in accuracy of less than 1% compared to the equivalent fully connected layers.
rejected-papers
This paper proposes a tree-structured tensor factorisation method for parameter reduction. The reviewers felt the paper was somewhat interesting, but agreed that more detail was needed in the method description, and that the experiments were on the whole uninformative. This seems like a promising research direction which needs more empirical work, but is not ready for publication as is.
train
[ "HJfixHulz", "B1TCDcOxG", "SyR6NUcxz", "SyMJMna7f", "By4ce367M", "S1UvxhaXz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "In the paper the authors suggest to use MERA tensorization technique for compressing neural networks. MERA itseld in a known framework in QM but not in ML. Although the idea seems to be fruitful and interesting I find the paper quite unclear. The most important part is section 2 which presents the methodology used. However there no equations or formal descriptions of what is MERA and how it works. Only figures which are difficult to understand. It is almost impossible to reproduce the results based on such iformal description of tensorization method. The authors should be more careful and provide more details when describing the algorithm. There was enough room for making the algorithm more clear. This is my main point for critisism.\n\nAnother issue is related with practical usefulness. MERA allows to get better compression than TT keeping the same accuracy. But the authors do compress only one layer. In this case the total compression of DNN is almost tha same so why do we need yet another tensorization technique? I think the authors should try tenzorizing several layers and explore whether they can do any better than TT compression. Currently I would say the results are comparable but not definitely better.\n\nUPDATE: The revised version seems to be a bit more clear. Now the reader unfamiliar with MERA (with some efforts) can understand how the methods works. Although my second concern remains. Up to now it looks just yet another tensorization method with only slight advantages over TT framework. Tensorization of conv.layers could improve the paper a lot. I increased the score to 5 for making the presentation of MERA more readable.", "The paper presents a new parameterization of linear maps for use in neural networks, based on the Multiscale Entanglement Renormalization Ansatz (MERA). The basic idea is to use a hierarchical factorization of the linear map, that greatly reduces the number of parameters while still allowing for relatively complex interactions between variables to be modelled. A limited number of experiments on CIFAR10 suggests that the method may work a bit better than related factorizations.\n\nThe paper contains interesting new ideas and is generally well written. However, a few things are not fully explained, and the experiments are too limited to be convincing.\n\n\nExposition\nOn a first reading, it is initially unclear why we are talking about higher order tensors at all. Usually, fully connected layers are written as matrix-vector multiplications. It is only on the bottom of page 3 that it is explained that we will reshape the input to a rank-k (k=12) tensor before applying the MERA factored map. It would be helpful to state this sooner. It would also be nice to state that (in the absense of any factorization of the weight tensor) a linear contraction of such a high-rank tensor is no less general than a matrix-vector multiplication.\n\nMost ML researchers will not know Haar measure. It would be more reader friendly to say something like \"uniform distribution over orthogonal matrices (i.e. Haar measure)\" or something like that. Explaining how to sample orthogonal matrices / tensors (e.g. by SVD) would be helpful as well.\n\nThe article does not explain what \"disentanglers\" are. 
It is very important to explain this, because it will not be generally known by the machine learning audience, and is the main thing that distinguishes this work form earlier tree-based factorizations.\n\nOn page 5 it is explained that the computational complexity of the proposed method is N^{log_2 D}. For D=2, this is better than a fully connected layer. Although this theoretical speedup may not currently have been realized, it perhaps could be achieved by a custom GPU kernel. It would be nice to highlight this potential benefit in the introduction.\n\n\nTheoretical motivation\nAlthough I find the theoretical motivation for the method somewhat compelling, some questions remain that the authors may want to address. For one thing, the paper talks about exploiting \"hierarchical / multiscale structure\", but this does not refer to the spatial multi-scale structure that is naturally present in images. Instead, the dimensions of a hidden activation vector are arbitrarily ordered, partitioned into pairs, and reshaped into a (2, 2, ..., 2) shape tensor. The pairing of dimensions determines the kinds of interactions the MERA layer can express. Although the earlier layers could learn to produce a representation that can be effectively analyzed by the MERA layer, one is left to wonder if the method could be made to exploit the spatial multi-scale structure that we know is actually present in image data.\n\nAnother point is that although from a classical statistics perspective it would seem that reducing the number of parameters should be generally beneficial, it has been observed many times that in deep learning, highly overparameterized models are easier to optimize and do not necessarily overfit. Thus at this point it is not clear whether starting with a highly constrained parameterization would allow us to obtain state of the art accuracy levels, or whether it is better to start with an overparameterized model and gradually constrain it or perform a post-training compression step.\n\n\nExperiments\nIn the introduction it is claimed that the method of Liu et al. cannot capture correlations on different length scales because it lacks disentanglers. Although this may be theoretically correct, the paper does not experimentally verify that the proposed factorization with disentanglers outperforms a similar approach without disentanglers. In my opinion this is a critical omission, because the addition of disentanglers seems to be the main or perhaps only difference to previous work.\n\nThe experiments show that MERA can drastically reduce the number of parameters of fully connected layers with only a modest drop in accuracy, for a particular ConvNet trained on CIFAR10. Unfortunately this ConvNet is far from state of the art, so it is not clear if the method would also work for better architectures. Furthermore, training deep nets can be tricky, and so the poor performance makes it impossible to tell if the baseline is (unintentionally) crippled.\n\nComparing MERA-2 to TT-3 or MERA-3 to TT-5 (which have an approximately equal number of parameters), the difference in accuracy appears to be less than 1 percentage point. Since only a handful of specific MERA / TT architectures were compared on a single dataset, it is not at all clear that we can expect MERA to outperform TT in many situations. In fact, it is not even clear that the small difference observed is stable under random retraining.\n\n\nSummary\nAn interesting paper with novel theoretical ideas, but insufficient experimental validation. 
Some expository issues need to be fixed.", "The authors study compressing feed forward layers using low rank tensor decompositions. For instance a feed forward layer of 4096 x 4096 would first be reshaped into a rank-12 tensor with each index having dimension 2, and then a tensor decomposition would be applied to reduce the number of parameters. \n\nPrevious work used tensor trains which decompose the tensor as a chain. Here the authors explore a tree like decomposition. The authors only describe their model using pictures and do not provide any rigorous description of how their decomposition works.\n\nThe results are mediocre. While the author's approach does seem to reduce the feed forward net parameters by 30% compared to the tensor train decomposition for similar accuracy, the total number of parameters for both MERA (authors' approach) and Tensor Train is similar since in this regime the CNN parameters dominate (and the authors' approach does not work to compress those).\n\n", "\nThank you for your helpful comments. \n\nWe have adapted the manuscript to include a more comprehensive description of those principles that may not be familiar to a machine learning audience and a more formal description of the MERA layers. We hope that you find the revised version to be more clear.\n\nIn the MERA and MPO experiments we compress the two penultimate layers of the network. We have amended the paper to make this more clear. \n", "\nThank you for your considered response. \n\nWe have adapted the manuscript to give a more comprehensive description of the architecture and those principles that may not be familiar to a machine learning audience, including tensor notation, disentanglers and sampling random orthogonal matrices/tensors. We hope that this is now more clear. \n\nThe role of the disentanglers, in terms of performance, has not been directly examined. One difficulty is that removing the disentanglers also reduces the number of model parameters thus biasing the comparison. We do not believe this would be a fair comparison. In future work we are planning to more thoroughly examine the role of disentanglers in various architectures. \n\nWhether compression is best achieved by factorizing the weight tensors or constraining or distilling larger models during or after training is an interesting question and we don’t make this comparison. However, using factorization initially would seem to allow for models with more capacity using the same number of parameters and the two approaches are not always mutually exclusive\n\nRegarding your comment about the reshaping of the activation vector from the final convolutional layer. We agree that this is a somewhat arbitrary choice that is also apparent in other tensorization methods. This issue could be avoided by constructing the entire network from tensor components, which we plan to examine in future work. \n\nTo compare the MERA and TT factorization methods we used a very simple architecture and basic data augmentation to as best possible isolate the effects of factorization from other design choices. It would indeed be very interesting to test these methods in a more complicated model. \n\nThank you again for a very helpful and detailed response. \n", "Thank you for your helpful comments. \n\nWe have revised the manuscript to include a more comprehensive description of the MERA decomposition. We hope that you find this sufficient. 
\n\nWe have now considered the regime in which the convolutional parameters make up a relatively small number of the total number of parameters in the network. In this ablated network the MERA layer also outperforms the tensor train network. " ]
[ 5, 5, 4, -1, -1, -1 ]
[ 3, 4, 4, -1, -1, -1 ]
[ "iclr_2018_rkGZuJb0b", "iclr_2018_rkGZuJb0b", "iclr_2018_rkGZuJb0b", "HJfixHulz", "B1TCDcOxG", "SyR6NUcxz" ]
iclr_2018_B1bgpzZAZ
ElimiNet: A Model for Eliminating Options for Reading Comprehension with Multiple Choice Questions
The task of Reading Comprehension with Multiple Choice Questions requires a human (or machine) to read a given \{\textit{passage, question}\} pair and select one of the n given options. The current state of the art model for this task first computes a query-aware representation for the passage and then \textit{selects} the option which has the maximum similarity with this representation. However, when humans perform this task they do not just focus on option selection but use a combination of \textit{elimination} and \textit{selection}. Specifically, a human would first try to eliminate the most irrelevant option and then read the document again in the light of this new information (and perhaps ignore portions corresponding to the eliminated option). This process could be repeated multiple times till the reader is finally ready to select the correct option. We propose \textit{ElimiNet}, a neural network based model which tries to mimic this process. Specifically, it has gates which decide whether an option can be eliminated given the \{\textit{document, question}\} pair and if so it tries to make the document representation orthogonal to this eliminated option (akin to ignoring portions of the document corresponding to the eliminated option). The model makes multiple rounds of partial elimination to refine the document representation and finally uses a selection module to pick the best option. We evaluate our model on the recently released large scale RACE dataset and show that it outperforms the current state of the art model on 7 out of the 13 question types in this dataset. Further, we show that taking an ensemble of our \textit{elimination-selection} based method with a \textit{selection} based method gives us an improvement of 7\% (relative) over the best reported performance on this dataset.
rejected-papers
This paper provides a method for eliminating options in multiple-answer reading comprehension tasks, based on the contents of the text, in order to reduce the "answer space" a machine reading model must consider. While there's nothing wrong with this, conceptually, reviewers have questioned whether or not this is a particularly useful process to include in a machine reading pipeline, versus having agents that understand the text well enough to select the correct answer (which is, after all, the primary goal of machine reading). Some reviewers were uncomfortable with the choice of dataset, suggesting SQuAD might be a better alternative, and while I am not sure I agree with that recommendation, it would be good to see stronger positive results on more than one dataset. At the end of the day, it is the lack of convincing experimental results showing that this method yields substantial improvements over comparable baselines which does the most harm to this well-written paper, and I must recommend rejection.
train
[ "SyfPjhYef", "HkHGUsPef", "HJViVF5gf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper gives an elaboration on the Gated Attention Reader (GAR) adding gates based on answer elimination in multiple choice reading comprehension. I found the formal presentation of the model reasonably clear the the empirical evaluation reasonably compelling.\n\nIn my opinion the main weakness of the paper is the focus on the RACE dataset. This dataset has not attracted much attention and most work in reading comprehension has now moved to the SQUAD dataset for which there is an active leader board. I realize that SQUAD is not explicitly multiple choice and that this is a challenge for an answer elimination architecture. However, it seems that answer elimination might be applied to each choice of the initial position of a possible answer span. In any case, competing with an active leader board would be much more compelling.", "In this paper, a model is built for reading comprehension with multiple choices. The model consists of three modules: encoder, interaction module and elimination module. The major contributions are two folds: firstly, proposing the interesting option elimination problem for multi-step reading comprehension; and secondly, proposing the elimination module where a eliminate gate is used to select different orthogonal factors from the document representations. Intuitively, one answer option can be viewed as eliminated if the document representation vector has its factor along the option vector ignored.\n\nThe elimination module is interesting, but the usefulness of “elimination” is not well justified for two reasons. First, the improvement of the proposed model over the previous state of the art is limited. Second, the model is built upon GAR until the elimination module, then according to Table 1 it seems to indicate that the elimination module does not help significantly (0.4% improvement). \n\nIn order to show the usefulness of the elimination module, the model should be exactly built on the GAR with an additional elimination module (i.e. after removing the elimination module, the performance should be similar to GAR but not something significantly worse with a 42.58% accuracy). Then we can explicitly compare the performance between GAR and the GAR w/ elimination module to tell how much the new module helps.\n\nOther issues:\n\n1) Is there any difference to directly use $x$ and $h^z$ instead of $x^e$ and $x^r$ to compute $\\tilde{x}_i$? Even though the authors find the orthogonal vectors, they’re gated summed together very soon. It would be better to show how much “elimination” and “subtraction” effect the final performance, besides the effect of subtraction gate.\n\n2) A figure showing the model architecture and the corresponding QA process will better help the readers understand the proposed model.\n\n3) $c_i$ in page 5 is not defined. What’s the performance of only using $s_i$ for answer selection or replacing $x^L$ with $s_i$ in score function?\n\n4) It would be better to have the experiments trained with different $n$ to show how multi-hop effects the final performance, besides the case study in Figure 3.\n\nMinor issues:\n\n1) In Eqn. 
(4), it would be better to use a vector as the input of softmax.\n\n2) It would be easier for discussion if the authors could assign numbers to every equation.", "This paper proposes a new reading comprehension model for multi-choice questions and the main motivation is that some options should be eliminated first to infer better passage/question representations.\n\nIt is a well-written paper, however, I am not very convinced by its motivation, the proposed model and the experimental results. \n\nFirst of all, the improvement is rather limited. It is only 0.4 improvement overall on the RACE dataset; although it outperforms GAR on 7 out of 13 categories; but why is it worse on the other 6 categories? I don’t see any convincing explanations here.\n\nSecondly, in terms of the development of reading comprehension models, I don’t see why we need to care about eliminating the irrelevant options. It is hard to generalize to any other RC/QA tasks. If the point is that the options can add useful information to induce better representations for passage/question, there should be some simple baselines in the middle that this paper should compare to. The two baselines SAR and GAR both only induce a representation from paragraph/question, and finally compare to the representation of each option. Maybe a simple baseline is to merge the question and all the options and see if a better document representation can be defined. \n\nSome visualizations/motivational examples could be also useful to understand how some options are eliminated and how the document representation has been changed based on that.\n" ]
[ 5, 5, 4 ]
[ 3, 3, 4 ]
[ "iclr_2018_B1bgpzZAZ", "iclr_2018_B1bgpzZAZ", "iclr_2018_B1bgpzZAZ" ]
iclr_2018_ryF-cQ6T-
Machine Learning by Two-Dimensional Hierarchical Tensor Networks: A Quantum Information Theoretic Perspective on Deep Architectures
The resemblance between the methods used in studying quantum many-body physics and in machine learning has drawn considerable attention. In particular, tensor networks (TNs) and deep learning architectures bear striking similarities to the extent that TNs can be used for machine learning. Previous results used one-dimensional TNs in image recognition, showing limited scalability and requiring a high bond dimension. In this work, we train two-dimensional hierarchical TNs to solve image recognition problems, using a training algorithm derived from the multipartite entanglement renormalization ansatz (MERA). This approach overcomes scalability issues and implies novel mathematical connections among quantum many-body physics, quantum information theory, and machine learning. While keeping the TN unitary in the training phase, TN states can be defined, which optimally encode each class of the images into a quantum many-body state. We study the quantum features of the TN states, including quantum entanglement and fidelity. We suggest these quantities could be novel properties that characterize the image classes, as well as the machine learning tasks. Our work could be further applied to identifying possible quantum properties of certain artificial intelligence methods.
rejected-papers
This paper seeks to integrate tensor-based models from physics into machine learning architectures. The two main objections to this paper are first that, despite honest (I assume) efforts from the authors, it remains somewhat hard to understand without substantial background knowledge of physics. Second, that the experiments focus on MNIST and CIFAR image classification tasks, two datasets where linear models perform with high accuracy, and as such are unsuitable for properly evaluating the claims made about the models in this paper. Unfortunately, it does not seem there is sufficient enthusiasm for this paper amongst the reviewers to justify its inclusion in the conference.
train
[ "rycZrCJef", "S17TnsFez", "rkd7rq6gf", "Bylqi1qQz", "HJoitKU7G", "r1LNXkUmM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "Authors of this paper derived an efficient quantum-inspired learning algorithm based on a hierarchical representation that is known as tree tensor network, which is inspired by the multipartite entanglement renormalization ansatz approach where the tensors in the TN are kept to be unitary during training. Some observations are: The limitation of learnability of TTN strongly depends on the physical indexes and the geometrical indexes determine how well the TTNs approximate the limit; TTNs exhibit same increase level of abstractions as CNN or DBN; Fidelity and entanglement entropy can be considered as some measurements of the network.\n\nAuthors introduced the two-dimensional hierarchical tensor networks for solving image recognition problems, which suits more the 2-D nature of images. In section 2, authors stated that the choice of feature function is arbitrary, and a specific feature map was introduced in Section 4. However, it is not straightforward to connect (10) to (1) or (2). It is better to clarify this connection because some important parameters such as the virtual bond and input bond are related to the complexity of the proposed algorithm as well as the limitation of learnability. For example, the scaling of the complexity O(dN_T(b_v^5 + b_i^4)) is not easy to understand. Is it related to specific feature map? How about the complexity of eigen-decomposition for one tensor at each iterates. And also, whether the tricks used to accelerate the computations will affect the convergence of the algorithm? More details on these problems are required for readers’ better understanding.\n\nFrom Fig 2, it is difficult to see the relationship between learnability and parameters such input bond and virtual bond because it seems there are no clear trends in the Fig 2(a) and (b) to make any conclusion. It is better to clarify these relationships with either clear explanation or better examples.\n\nFrom Fig 3, authors claimed that TN obtained the same levels of abstractions as in deep learning. However, from Fig 3 only, it is hard to make this conclusion. First, there are not too many differences from Fig 3(a) to Fig 3(e). Second, there is no visualization result reported from deep learning on the same data for comparison. Hence, it is not convincing to draw this conclusion only from Fig 3. \n\nIn Section 4.2, what strategy is used to obtain these parameters in Table 1?\n\nIn Section 5, it is interesting to see more experiments in terms of fidelity and entanglement entropy.\n", "Full disclosure: the authors' submission is not anonymous. They included a github link at the bottom of page 6 and I am aware of the name of the author and coauthors (and have previously read their work and am a fan of it). Thus, this review is not double blind. I notified the area chair last week and we agreed that I submit this review. \n\n---\n\nThis is an interesting application of tensor networks to machine learning. The work proposes using a tree tensor network for image classification. Each image is first mapped into a higher-dimensional space. Then the input features are contracted with the tensors of the tensor network. The maximum value of the final layer of the network gives the predicted class. 
The training algorithm is inspired by the multipartite entanglement renormalization ansatz: it corresponds to updating each tensor in the network by performing a singular value decomposition of the environment tensor (everything in the cost function after removing the current tensor to be updated).\n\nOverall, I think this is an interesting, novel contribution, but it is not accessible to non-physicists right now. The paper could be rewritten to be accessible to non-physicists and would be a highly-valuable interdisciplinary contribution.\n\n* Consider redoing the experiments with a different cost function: least squares is an unnatural cost function to use for classification. Cross entropy would be better.\n\n* discuss the scalability: why did you downsample MNIST from 28x28 pixels to 16x16 pixels? Why is training accuracy not reported on the 10-class model in Table 1? If it is because of a slow implementation, that's fine. But if it is because of the scalability of the method, it would be good to report that. In either case it wouldn't hurt the paper, it is just important to know. \n\n* In section 5, you say \"input vectors are still initially arranged ... according to their spatial locations in the image\". But don't you change the spatial locations of the image to follow equation (10)? It would be good to add a sentence clarifying this. \n\n---\n\nIn its current form, reading the paper requires a physics background. \n\nThere are a few things that would make it easier to read for a general machine learning audience:\n\n* connect your method to matrix factorization and tensor decomposition approaches\n\n* include an algorithm box for Strategy-I and Strategy-II\n\n* include an appendix, with a brief review of upward and downward indices which is crucial for understanding your method (few people in machine learning are familiar with Einstein notation)\n\n* relate your interesting ideas about quantum states to existing work in information theory. I am skeptical of the label 'quantum': how do quantum mechanical tools apply to images? What is a 'quantum' many-body state here? There is no intrinsic uncertainty principle at play in image classification. I would guess that the ideas you propose are equivalent to existing work in information theory. That would make it less confusing.\n\n* in general, maybe mention the inspiration of your work from MERA, but avoid using physics language when there are no clear physical systems. This will make your work more understandable and easier to follow. A high-level motivation for MERA from a physics perspective suffices; the rest can be phrased in terms of tensor decompositions. \n\n---\n\nMinor nits:\n\n* replace \\citet with \\citep everywhere - all citations are malformed\n\n* figure 1 could be clarified - say that see-through gray dots are dimensions, blue squares are tensors, edges are contractions\n\n* all figure x and y labels and legends are too small\n\n* some typos: \"which classify an input image by choosing\"; \"we apply different feature map to each\"; small grammar issues in many places\n\n* Figure 4: \"up-down\" and \"left-right\" not defined anywhere", "The paper studies the mapping of a mathematical object representing quantum entanglement to a neural network architecture that can be trained to predict data, e.g. images. 
A contribution of the paper is to propose a 2D tensor network model for that purpose, which has higher representation power than simple tensor networks used in previous works.\n\nThere are several issues with the paper: \n\nFirst, it is hard to relate the presented model to its machine learning counterpart. e.g. it is not stated clearly what is the underlying function class (are they essentially linear classifiers built on some feature space representation?).\n\nThe benchmark study doesn’t bring much evidence about the modeling advantages brought by the proposed method. Separating pairs of CIFAR-10 classes is relatively easy and can be done with reasonable accuracy without much nonlinearity. Similarly, an error rate of 5% on MNIST is already achievable by basic machine learning classifiers.\n\nThe concept of bond and bond dimensions, which are central in this paper due to their relation to model complexity, could be better explained.", "Thanks for the suggestions. In the revised version, we have made many efforts on the manuscript so that it could be easily accessible to non-physicists. And, we have tried other cost functions including cross entropy, optimized by algorithms such as full gradient descent, SGD, mini-batch and Adam. The experiments show that the currently-used MERA algorithm (environment + SVD) provides the best accuracy and efficiency. On the other hand, we still cannot embed the cross-entropy cost function into our MERA-inspired algorithm, that is, in the current framework, the cross entropy cost function is incompatible with MERA. In a future study, we will evaluate various algorithms to train the TTN. Thanks for your suggestion.\nWe supplemented the training accuracy in Table 1, which is the mean training accuracy among the 10 classifiers. With regards to the downsampling on the MNIST dataset, indeed, we could build a Tree Tensor Network (TTN) on image with 28*28 pixels, but it would increase the complexity of the code. For 2^n * 2^n pixels (with n an integer), we could easily write the code, where the TTN has n layers.\nRegarding the input vectors, we apologize that we did not specify this clearly enough. The input vectors are indeed arranged according to the spatial locations of the pixels. The feature map transform one pixel (a scalar) to a normalized vector. After the feature map, each image becomes the product of 2^n * 2^n vectors; each vector is located in the same place as the corresponding pixel. We added several sentence in the manuscript to better specify this. And, we have added an algorithm box to specify the one-against-all strategy. And, we added some sentences to explain the upward, downward indices and related notations in the texts.\nIn a quantum many-body system, the interactions between particles create quantum entanglement which is considered as new physical resource, so we bridge quantum many-body theory to machine learning, and we hope this will help us to develop some new tools and concepts to explore machine learning. Based on this idea, we employed tensor network to study the basic machine learning task: image classification.\nRegarding the minor nits, we replaced the \\citet with \\citep, and updated the legends and labels. The “up-down” and “left-right” in Fig. 4 were also explained.\n", "Thanks for your note. The error about feature map was fixed. Yes, the feature map we used is not arbitrary. In our case, it should be normalized. \nThe scaling of complexity is not related to specific feature map. 
We used nested loop to update each tensor and the tensor contraction dominate main part of computation. For instance, for the tensors located in 1st layer, the complexity of contraction is dominated by input bond b_i, i.e. O(M*N_T*b_i^4*b_v), and for the tensors located in other layers, the complexity of contraction is dominated by virtual bond b_v, i.e. O(M*N_T*b_v^5). Therefore, the complexity of contracting the whole tensor network is composed by these two parts, i.e. O(M*N_T*(b_v^5 + b_i^4*b_v)).\nWe used singular value decomposition (SVD) to update each tensor. Considering the SVD on a matrix with M*N (M>>N), the overall cost is O(mn^2). For instance, it could be found in the following book: Trefethen, Lloyd N.; Bau III, David (1997). Numerical linear algebra. Philadelphia: Society for Industrial and Applied Mathematics. ISBN 978-0-89871-361-9.\nThe tricks we employed do not affect the computing results and the convergence of the algorithm. We just restore all temporary variables in each iteration, so it avoids double counting.\nIn Fig2(a) and Fig2(b), we show the relationship between training accuracy and parameters such input bond and virtual bond. Furthermore, in our classification task, the training accuracy indicates the learnability of the classifier we trained. In other words, the higher training accuracy we have, the better the classifier approximates the classification boundary. So in this case, we say the classifier learn the classification boundary. In our experiments, we fixed the input bond to four different values, i.e. 2, 3, 4 and 5. And we found the significant increase of training accuracy with the rise of virtual bond. And, the input bond determines the upper limits of training accuracy we could have, that is, the upper limits of the learnability of the model.\nThe claim is more careful: “We observe the same pattern as in deep learning, having a clear separation in the highest level of abstraction\". Naturally, we hoped that the separation would become gradually clearer as in the layers of a CNN, whereas, just as the referee points out, the separation only becomes apparent in the last layer. This is why we were careful with the claim. We did not include a similar series of images for a CNN since this is so standard in the analysis of deep networks. If the referee believes the inclusion of this images would improve the manuscript, we have them ready.\nAccording to the representation power of tensor network, the larger bond value we used, the better training performance we could have. However, as the same as other existing deep learning models such as deep neural networks, we also have the problem i.e. overfitting. So we start from a small bond value i.e. 2 for each classifier and observed the testing results. Once we have an acceptable result (>92\\%), we stop try to use larger value to avoid overfitting.\n", "We regret that the reviewer does not agree to the relevance of the study. As it is the case with other tensor network-based learning algorithms that recently cropped up, the objective is not to beat a state-of-the-art CNN, at least not yet. Our main goal in this work is to understand the representation power of a hierarchical network and introduce rigorous metrics from physics to study the model and the data at various levels of the representation. 
\nThe surprising finding is not superior performance, but that it works at all and that we see a correspondence between the higher and higher level abstractions that a CNN provides and the two-dimensional tree tensor network. The latter is mathematically well-understood. We believe that this is highly relevant to machine learning and we expect to see more and more research done in this direction, hence we feel that ICLR is in fact the right outlet for this work.\nWe are making extensive changes across the manuscript to make it easier to read for non-physicists; please refer to the comments made to the other referees for model details.\n" ]
[ 6, 4, 3, -1, -1, -1 ]
[ 3, 3, 2, -1, -1, -1 ]
[ "iclr_2018_ryF-cQ6T-", "iclr_2018_ryF-cQ6T-", "iclr_2018_ryF-cQ6T-", "S17TnsFez", "rycZrCJef", "rkd7rq6gf" ]
iclr_2018_HyHmGyZCZ
Comparison of Paragram and GloVe Results for Similarity Benchmarks
Distributional Semantics Models (DSM) derive a word space from linguistic items in context. Meaning is obtained by defining a distance measure between vectors corresponding to lexical entities. Such vectors present several problems. This work concentrates on the quality of word embeddings, the improvement of word embedding vectors, and the applicability of a novel similarity metric used ‘on top’ of the word embeddings. In this paper we provide a comparison between two methods for post-process improvements to the baseline DSM vectors. The counter-fitting method, which enforces antonymy and synonymy constraints into the Paragram vector space representations, recently showed improvement in the vectors’ capability for judging semantic similarity. The second method is our novel RESM method applied to GloVe baseline vectors. By applying the hubness reduction method, implementing relational knowledge into the model by retrofitting synonyms, and providing a new ranking similarity definition RESM that gives maximum weight to the top vector component values, we equal the results for the ESL and TOEFL sets in comparison with our calculations using the Paragram and Paragram + Counter-fitting methods. For the SIMLEX-999 gold standard, since we cannot use the RESM, the results using GloVe and PPDB are significantly worse compared to Paragram. Apparently, counter-fitting corrects hubness. The Paragram method or our cosine retrofitting method gives state-of-the-art results for the SIMLEX-999 gold standard. They are 0.2 better for SIMLEX-999 than word2vec with sense de-conflation (which was announced to be the state-of-the-art method for less reliable gold standards). Apparently, relational knowledge and counter-fitting are more important for judging semantic similarity than sense determination for words. It is to be mentioned, though, that the Paragram hyperparameters are fitted to the SIMLEX-999 results. The lesson is that many corrections to word embeddings are necessary and methods with more parameters and hyperparameters perform better.
rejected-papers
This paper proposes a method for refining distributional semantic representation at the lexical level. The reviews are fairly unanimous in that they found both the initial version of the paper, which was deemed quite rushed, and the substantial revision unworthy of publication in their current state. The weakness of both the motivation and the experimental results, as well as the lack of a clear hypothesis being tested, or of an explanation as to why the proposed method should work, indicates that this work needs revision and further evaluation beyond what is possible for this conference. I unfortunately must recommend rejection.
train
[ "S1ZbRMqlM", "HJmKXVcgz", "SJWbIA3eG", "SJcyMXTmM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author" ]
[ "The paper suggests taking GloVe word vectors, adjust them, and then use a non-Euclidean similarity function between them. The idea is tested on very small data sets (80 and 50 examples, respectively). The proposed techniques are a combination of previously published steps, and the new algorithm fails to reach state-of-the-art on the tiny data sets.\n\nIt isn't clear what the authors are trying to prove, nor whether they have successfully proven what they are trying to prove. Is the point that GloVe is a bad algorithm? That these steps are general? If the latter, then the experimental results are far weaker than what I would find convincing. Why not try on multiple different word embeddings? What happens if you start with random vectors? What happens when you try a bigger data set or a more complex problem?", "This paper proposes a ranking-based similarity metric for distributional semantic models. The main idea is to learn \"baseline\" word embeddings, retrofitting those and applying localized centering, to then calculate similarity using a measure called \"Ranking-based Exponential Similarity Measure\" (RESM), which is based on the recently proposed APSyn measure.\n\nI think the work has several important issues:\n\n1. The work is very light on references. There is a lot of previous work on evaluating similarity in word embeddings (e.g. Hill et al, a lot of the papers in RepEval workshops, etc.); specialization for similarity of word embeddings (e.g. Kiela et al., Mrksic et al., and many others); multi-sense embeddings (e.g. from Navigli's group); and the hubness problem (e.g. Dinu et al.). For the localized centering approach, Hara et al.'s introduced that method. None of this work is cited, which I find inexcusable.
\n\n2. The evaluation is limited, in that the standard evaluations (e.g. SimLex would be a good one to add, as well as many others, please refer to the literature) are not used and there is no comparison to previous work. The results are also presented in a confusing way, with the current state of the art results separate from the main results of the paper. It is unclear what exactly helps, in which case, and why.
\n\n3. There are technical issues with what is presented, with some seemingly factual errors. For example, \"In this case we could apply the inversion, however it is much more convinient [sic] to take the negative of distance. Number 1 in the equation stands for the normalizing, hence the similarity is defined as follows\" - the 1 does not stand for normalizing, that is the way to invert the cosine distance (put differently, cosine distance is 1-cosine similarity, which is a metric in Euclidean space due to the properties of the dot product). Another example, \"are obtained using the GloVe vector, not using PPMI\" - there are close relationships between what GloVe learns and PPMI, which the authors seem unaware of (see e.g. the GloVe paper and Omer Levy's work).
\n\n4. Then there is the additional question, why should we care? The paper does not really motivate why it is important to score well on these tests: these kinds of tests are often used as ways to measure the quality of word embeddings, but in this case the main contribution is the similarity metric used *on top* of the word embeddings. In other words, what is supposed to be the take-away, and why should we care?\n\nAs such, I do not recommend it for acceptance - it needs significant work before it can be accepted at a conference.\n\nMinor points:\n- Typo in Eq 10\n- Typo on page 6 (/cite instead of \\cite)", "I hate to say that the current version of this paper is not ready, as it is poorly written. The authors present some observations of the weaknesses of the existing vector space models and list a 6-step approach for refining existing word vectors (GloVe in this work), and test the refined vectors on 80 TOEFL questions and 50 ESL questions. In addition to the incoherent presentation, the proposed method lacks proper justification. Given the small size of the datasets, it is also unclear how generalizable the approach is.\n\nPros:\n 1. Experimental study on retrofitting existing word vectors for ESL and TOEFL lexical similarity datasets\n\nCons:\n 1. The paper is poorly written and the proposed methods are not well justified.\n 2. Results on tiny datasets\n", "The original paper was very significantly changed and expanded (8.5 pages instead of 6). This work concentrates on the quality of word embeddings, the improvement of word embedding vectors, and the applicability of a novel similarity metric used ‘on top’ of the word embeddings. We compared our cosine retrofitting to Paragram + Counter-fitting for SIMLEX-999, and our RESM + cosine retrofitting to Paragram.\nIn particular this revision provides the following:\n\n1. Improves the clarity of the original version with almost twice as many experimental details, also in the area of what is state-of-the-art and what is not (using reliable gold standards, and concentrating on absolute results rather than on result changes often caused by a single effect).\n2. Removes a major deficiency of the original paper by including and addressing the Paragram and Paragram + Counter-fitting methods’ results.\n3. Adds all references that were considered necessary by reviewers. It is not that we were not aware of most of them; notice that there was a one-page limit on references, and it seems we were one of the very few to obey this rule.\n4. In addition to TOEFL and ESL we included the SIMLEX-999 standard. We consider these the only reliably annotated sets at the moment, for the two reasons already mentioned by [1].\n5. The main results in Table 3 were augmented with the Paragram and Paragram + Counter-fitting methods and the multi-sense aware methods (Pilehvar and Navigli).\nThere are many important conclusions reached in this paper: mostly that many corrections to word embeddings are necessary for state-of-the-art results, and that methods with more parameters and hyperparameters perform better.\n\n\n[1] Hill, Reichart, and Korhonen. Simlex-999: Evaluating semantic models with (genuine) similarity estimation. Computational Linguistics, 2015. " ]
[ 2, 4, 3, -1 ]
[ 4, 5, 4, -1 ]
[ "iclr_2018_HyHmGyZCZ", "iclr_2018_HyHmGyZCZ", "iclr_2018_HyHmGyZCZ", "iclr_2018_HyHmGyZCZ" ]
iclr_2018_SJlhPMWAW
GraphVAE: Towards Generation of Small Graphs Using Variational Autoencoders
Deep learning on graphs has become a popular research topic with many applications. However, past work has concentrated on learning graph embedding tasks only, which is in contrast with advances in generative models for images and text. Is it possible to transfer this progress to the domain of graphs? We propose to sidestep hurdles associated with linearization of such discrete structures by having a decoder output a probabilistic fully-connected graph of a predefined maximum size directly at once. Our method is formulated as a variational autoencoder. We evaluate on the challenging task of conditional molecule generation.
rejected-papers
The authors present GraphVAE, a method for fitting a generative deep model, a variational autoencoder, to small graphs. Fitting deep learning models to graphs remains challenging (although there is relevant literature as brought up by the reviewers and anonymous comments) and this paper is a strong start. In weighing the various reviews, AnonReviewer3 is weighed more highly than AnonReviewer1 and AnonReviewer2 since that review is far more thorough and the reviewer is more expert on this subject. Unfortunately, the review from AnonReviewer1 is extremely short and of very low confidence. As such, this paper sits just below the borderline for acceptance. In general, the main criticisms of the paper are that some claims are too strong (e.g. non-differentiability of discrete structures), treatment of related work (missing references, etc.) and weak experiments and baselines. The consensus among the reviews (even AnonReviewer2) is that the paper is preliminary. The paper is close, however, and addressing these concerns will make the paper much stronger. Pros: - Proposes a method to build a generative deep model of graphs - Addresses a timely and interesting topic in deep learning - Exposition is clear Cons: - Treatment of related literature should be improved - Experiments and baselines are somewhat weak - "Preliminary" - Only works on rather small graphs (i.e. O(k^4) for graphs with k nodes)
train
[ "rJa-njiVz", "S1ccEH6VM", "SkfkNHa4z", "B1ubmvfZM", "ryqbAq3VG", "B1oxoZnEM", "r1W-8-8EG", "rJZdWiUxz", "ByvkN-k-G", "SJB8_E2zz", "Hypz2G3Gz", "H1lPszhMM", "H1KfiM3MM", "Hk1_cfnzf", "BkVdoCJZG" ]
[ "public", "author", "official_reviewer", "official_reviewer", "author", "author", "public", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "public" ]
[ "Interesting paper. How important is the graph matching layer to the whole network? There are recent graph matching methods that have been shown to outperform MPM (such as this one http://openaccess.thecvf.com/content_cvpr_2017/papers/Le-Huu_Alternating_Direction_Graph_CVPR_2017_paper.pdf). It is worth investigating whether replacing MPM by a better matching method will yield better results. It would be nice to include some discussion on this.", "Thank you!", "Thanks for the response, adding baselines, and a better treatment of related work. I've raised my score by a point.", "This paper studies the problem of learning to generate graphs using deep learning methods. The main challenges of generating graphs as opposed to text or images are said to be the following:\n(a) Graphs are discrete structures, and incrementally constructing them would lead to non-differentiability (I don't agree with this; see below)\n(b) It's not clear how to linearize the construction of graphs due to their symmetries. Based on this motivation, the paper decides to generate a graph in \"one shot\", directly outputting node and edge existence probabilities, and node attribute vectors.\n\nA graph is represented by a soft adjacency matrix A (entries are probability of existence of an edge), an edge attribute tensor E (entries are probability of each edge being one of d_e discrete types), and a node attribute matrix F, which has a node vector for each potential node. A cross entropy loss is developed to measure the loss between generated A, E, and F and corresponding targets.\n\nThe main issue with training models in this formulation is the alignment of the generated graph to the ground truth graph. To handle this, the paper proposes to use a simple graph matching algorithm (Max Pooling Matching) to align nodes and edges. A downside to the algorithm is that it has complexity O(k^4) for graphs with k nodes, but the authors argue that this is not a problem when generating small graphs. Once the best correspondence is found, it is treated as constant and gradients are propagated appropriately.\n\nExperimentally, generative models of chemical graphs are trained on two datasets. Qualitative results and ELBO values are reported as the dimensionality of the embeddings is varied. No baseline results are presented. A further small set of experiments evaluates the quality of the matching algorithm on a synthetic setup.\n\nStrengths:\n- Generating graphs is an interesting problem, and the proposed approach seems like an easy-to-implement, mostly reasonable way of approaching the problem.\n\n- The exposition is clear (although a bit more detail on MPM matching would be appreciated)\n\nHowever, there are some significant weaknesses. First, the motivation for one-shot graph construction is not very strong:\n\n- I don't understand why the non-differentiability argued in (a) above is an issue. If training uses a maximum likelihood objective, then we should be able to decompose the generation of a graph into a sequence of decisions and maximize the sum of the logprobs of the conditionals. People do this all the time with sequence data and non-differentiability is not an issue.\n\n- I also don't agree that the one shot graph construction sidesteps the issue of how to linearize the construction of a graph. Even after doing so, the authors need to solve a matching problem to resolve the alignment issue. 
I see this as equivalent to choosing an order in which to linearize the order of nodes and edges in the graph.\n\nSecond, the experiments are quite weak. No baselines are presented to back up the claims motivating the formulation. I don't know how to interpret whether the results are good or bad. I would have at least liked to see a comparison to a method that generated SMILES format in an autoregressive manner (similar to previous work on chemical graph generation), and would ideally have liked to see an attempt at solving the alignment problem within an autoregressive formulation (e.g., by greedily constructing the alignment as the graph was generated). If one is willing to spend O(k^4) computation to solve the alignment problem, then there seem like many possibilities that could be easily applied to the autoregressive formulation. The authors might also be interested in a concurrent ICLR submission that approaches the problem from an autoregressive angle (https://openreview.net/pdf?id=Hy1d-ebAb). \n\nFinally, I would have expected to see a discussion and comparison to \"Learning Graphical State Transitions\" (Johnson, 2017). Please also don't make statements like \"To the best of our knowledge, we are the first to address graph generation using deep learning.\" This is very clearly not true. Even disregarding Johnson (2017), which the authors claim to be unaware of, I would consider approaches that generate SMILES format (like Gomez-Bombarelli et al) to be doing graph generation using deep learning.\n\nOverall, the paper is about an interesting subject, but in my opinion the execution isn't strong enough to warrant publication at this point.\n", "That's a good question. In Table 1 we show the performance of using no graph matching at all (NoGM), which works but produces many valid samples of low variety, so that we can claim that using some form of graph matching is certainly helpful. We have chosen MPM in particular because of convenience, as the algorithm is fast, and easy to understand and implement (also on GPU). There are indeed more recent graph matching algorithms (especially of higher order) but their implementation is either not public or is provided in Matlab (the case of your suggestion), which made them difficult and slow to integrate in our PyTorch codebase and thus, we have not run experiments with them. Nevertheless, we have found that 1) modifying the similarity function definition has played a more important role than changing parameters of MPM (such as number of iterations or trying out a sum-pooling variant), and 2) MPM scales relatively well with the size of the graphs (Table 2). While we cannot truly answer your question, our intuition is that a better graph matching algorithm would likely improve the results a bit but MPM itself does not seem to be the main performance bottleneck.", "Thank you for the question, our wording is indeed not exact and will be amended in the final version of the paper. By \"soft attention pooling\" we refer to the graph-level output model in Equation 7 of [Li et al., 2015b], where the networks \"i\" and \"j\" are each a single fully connected layer with 128 output channels and tanh functions are replaced with the identity (as suggested in [Li et al., 2015b]).\nOur encoder implementation was based on https://github.com/mys007/ecc, where we adapted GraphPoolModule.py to use sum-pooling instead of max/mean-pooling. 
The gating itself can be implemented in a few lines:\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as nnf\n\nclass SelfGate(nn.Module):\n    def __init__(self):\n        super(SelfGate, self).__init__()\n        # both layers map the 64 + 4 = 68 input channels to 128 output channels\n        self.lin1 = nn.Linear(64 + 4, 128)\n        self.lin2 = nn.Linear(64 + 4, 128)\n    def forward(self, input, input0):\n        # concatenate the two feature tensors along the channel dimension\n        inp = torch.cat([input, input0], dim=1)\n        # sigmoid gate (lin1) multiplied element-wise with the linear embedding (lin2)\n        return nnf.sigmoid(self.lin1(inp)) * self.lin2(inp)", "I've tried implementing the model described in the paper, but I cannot understand why a global pooling layer would need channels. The cited paper (Li et al., 2015b) doesn't really seem to be related to pooling, and doesn't even mention it at all.\n\nCould you elaborate a bit on the matter, or point me to an implementation of such a pooling layer? \n\nThanks", "This work proposes an interesting graph generator using a variational autoencoder. The work should be interesting to researchers in various areas. However, the method only works on small graphs. The search space of small graph generation is usually very small; are there any other traditional methods that can work on this problem? Moreover, the notation is a little confusing. ", "The authors propose a variational autoencoder architecture to generate graphs. \n\nPros:\n- the formulation of the problem as the modeling of a probabilistic graph is of interest \n- some of the main issues with graph generation are acknowledged (e.g. the problem of invariance to node permutation) and a solution is proposed (the binary assignment matrix)\n- notions for measuring the quality of the output graphs are of interest: here the authors propose some ways to use domain knowledge to check simple properties of molecular graphs \n\nCons: \n- the work is quite preliminary\n- many crucial elements in graph generation are not dealt with: \n a) the adjacency matrix and the label tensors are not independent of each other; the notion of a graph is in itself a way to represent the 'relational links' between the various components\n b) the boundaries between a feasible and an infeasible graph are sharp: one edge or one label can be sufficient for enacting the transition independently of the graph size, which makes this a difficult task for a continuous model. The authors acknowledge this but do not offer ways to tackle the issue\n c) conditioning on the label histogram should make the problem easy: one is giving away the number of nodes and the label identities after all; however even in this setup the approach fails more often than not \n d) the graph matching procedure proposed is a rough patch for a much deeper problem\n- the evaluation should include a measure of the capacity of the architecture to:\n a) reconstruct perfectly the input\n b) denoise perturbations over node labels and additional/missing edges ", "- We added comparisons to two baselines ([Kusner et al, 2017] and [Gomez-Bombarelli et al, 2016]) on QM9 and ZINC, results of unconditioned models on QM9 (Table 1), and results with unregularized training (Appendix C). \n- We introduce a model variant with a higher percentage of valid samples by making node probabilities a function of edge probabilities (Appendix B).\n- We added a brief summary of max-pooling matching (Appendix A).\n- We made multiple minor edits over the paper to enhance clarity, mention further observations, and refer to more related work such as [Johnson, 2017], [Vinyals et al, 2016], and [Stewart et al, 2016].", "Thank you for your review. 
We address your critique in the following.\n\n# Ordering vs Alignment\n\nThe choice between linearization and matching is certainly an interesting topic, these are indeed two sides of the same coin. The graph canonization problem (i.e. consistent node ordering) is at least as computationally hard as the graph isomorphism problem (i.e. matching), which is NP-hard for general graphs. Fortunately, there are practical algorithms available for both problems, such as Nauty [McKay & Piperno, 2014] for canonization and max-pooling matching [Cho et al, 2014] for approximate isomorphism, used in our paper. Thus, both ways are feasible for small graphs, though not for free.\n\nWe decided for one-shot construction with matching to allow the decoder to find its own orderings, motivated by the empirical result of [Vinyals et al, ICLR'16] that the linearization order matters when learning on sets. It is a priori unclear that enforcing a specific canonical ordering of vertices with a strategy for incremental construction (e.g. adding vertices one by one and connecting them to existing nodes) would lead to the best results. In this sense, we indeed sidestep the issue of how to linearize the construction by postponing the correspondence problem to the loss for the final result. We do not avoid the computational penalty of alignment. Note that our matching approach can be seen as inexact search of output permutation in Equation 9 in [Vinyals et al, ICLR'16].\n\nOne could also consider incremental (likely autoregressive, as you suggested) construction with matching. However, [Johnson, ICLR'17] noted in his construction of probabilistic graphs that a loss function for only the final result was insufficient and deep supervision over individual construction steps was necessary for good performance. Your idea of \"greedily constructing the alignment as the graph was generated\" certainly sounds quite promising in this context, thank you for it. It might nicely combine the idea of the concurrent submission (https://openreview.net/pdf?id=Hy1d-ebAb) and our paper. Though we would consider it as a direction for future work at this momement, as it would lead to extensive modification of our current submission.\n\n\n# Non-differentiability \n\nWe agree that non-differentiability is not a major obstacle if the generation of a graph is linearized, i.e. decomposed into a sequence of decisions in the ground truth. This may be given by the nature of some tasks, such as those addressed by [Johnson, ICLR'17], where graphs are built according to a sequence of statements. In general, however, the choice of such a decomposition is not clear, as we argue above. In this regard, it is interesting to learn from the mentioned concurrent submission (https://openreview.net/pdf?id=Hy1d-ebAb) that random orderings seem to work well. Nevertheless, even ML training with teacher forcing is not the perfect solution due to exposure bias (possibly poor prediction performance if the RNN's conditioning context diverges from sequences seen during training, i.e. the inability to fix its mistakes) [Bengio et al, 2015]. \n\n# Baselines\n\nWe agree that the omission of baselines was clearly a weak point. In the updated paper, we compare with character-based decoder of [Gomez-Bombarelli et al, 2016] and grammar-based decoder of [Kusner et al, 2017]. We found that the ratio of valid samples can be similar to a grammar-based decoder [Kusner et al, 2017] on QM9 while offering much higher variance; see Tables 1 and 3 in the updated paper. 
Unlike Kusner et al, we could achieve this without manual specification of a grammar or other rules, besides the help from maximum spanning tree. \n\n# Other points\n\nThank you for the reference to [Johnson, ICLR'17], we have updated the paper in this regard and toned down our statement on being the first, in this light. We also included a short appendix on MPM matching.", "Thank you for making us aware of the connection to object detection literature. We have added a reference to Stewart et al. in the updated version of our paper. Indeed, we share the same problem of matching unordered network outputs to ground truth, although the matching freedom is additionally constrained by edges in our case and we need to consider this by first running approximate graph matching to get reasonable similarities for Hungarian algorithm. As in our submission, Stewart et al. assumes that the matching is fixed for a given iteration and the gradient does not flow through the actual computation of matching (it is therefore not a perfect end-to-end model). Using a fixed matching in loss functions appears also in earlier (deep learning based) object detection papers in fact, e.g. Scalable High Quality Object Detection by Szegedy et al., 2014 (https://arxiv.org/abs/1412.1441).", "Thank you for your review. We address your critique in the following.\n\n# a) adjacency matrix and the label tensors are not independent of each other\n\nOur decoder uses a single stream of feature channels until its last layer, which should make the three predicted tensors rather dependent. In fact, we tried to go a step further and derive the adjacency matrix from feature tensors by introducing a virtual \"not-present\" edge and node class. However, this did not improve performance, likely due to a the fact that this required whole feature tensors to be correct, whereas our presented loss ignores unmatched parts of these tensors.\n\n# c) conditioning on the label histogram should make the problem easy: one is giving away the number of nodes and the label identities after all; however even in this setup the approach fails more often than not\n\nThank you for making this hypothesis. We performed an additional experiment by training in unconditioned setting on QM9 (see updated Table 1). Indeed, conditional training is able to reach a lower loss, though this difference diminishes with increasing size of the embedding (likely due to the autoencoder having more freedom to capture such statistics by itself). The number of valid samples fluctuates over configurations and is roughly the same for both conditional and unconditional setting.\n\nWe managed to improve our results on QM9 (so that can it succeed slightly more often than not), compared them to previous work, and found that the ratio of valid samples can be similar to a grammar-based decoder [Kusner et al, 2017] on QM9 while offering much higher variance; see Tables 1 and 3 in the updated paper. Unlike Kusner et al, we could achieve this without manual specification of a grammar or other rules, besides the help from maximum spanning tree.\n\n# A measure of the capacity of the architecture to reconstruct perfectly the input\n\nThis is a very good point. To this end, we removed the regularization and trained our architecture as a standard autoencoder, where the only goal is to aim for perfect reconstruction. Unfortunately, it turned out the architecture is not powerful enough to perfectly reconstruct the input, unless the set of possible inputs is rather small (e.g. 
for a fixed set of 1000 training examples). We added this information to Appendix C. In this light, we did not pursue the scenario of denoising autoencoder, which you also suggested.", "Thank you for your review. Regarding traditional methods besides stochastic blockmodels [Snijders & Nowicki, 1997], we should have also mentioned the research on random graph models, such as [Erdös & Rényi, 1960] or [Barabási & Albert, 1999]. These models make fairly strong assumptions and cannot be used to model e.g. chemical compounds, though. You can consult e.g. \"A Survey of Statistical Network Models\" [Goldenberg et al, 2009] for a detailed review.\n\nRegarding the little confusing notation, we have updated the paper in several places today; could you please provide more details so that we can further improve the manuscript?", "I would like to point the authors to a relevant paper that similarly solves a one-to-one matching problem on unordered sets via the Hungarian algorithm within an end-to-end model:\n\nR. Stewart, M. Andriluka, A.Y. Ng, End-to-End People Detection in Crowded Scenes, CVPR 2016\n\nI think it would be fair to cite and discuss their work." ]
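As an illustrative aside (not part of the original thread): the one-to-one matching step discussed above can be sketched with the Hungarian algorithm as implemented in SciPy. The similarity matrix is assumed to come from a scoring step such as max-pooling matching; all names below are placeholders rather than the authors' code.

import numpy as np
from scipy.optimize import linear_sum_assignment

def align_nodes(similarity):
    # similarity: (k, n) matrix scoring predicted nodes against ground-truth nodes.
    # The Hungarian solver minimizes cost, so negate the similarities to maximize them.
    row_ind, col_ind = linear_sum_assignment(-similarity)
    return list(zip(row_ind, col_ind))  # matched (predicted, target) index pairs

similarity = np.random.rand(4, 4)  # stand-in for a real similarity matrix
print(align_nodes(similarity))

In the discussion above, the resulting assignment is treated as constant when gradients are propagated through the loss.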
[ -1, -1, -1, 5, -1, -1, -1, 7, 7, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, -1, 3, -1, -1, -1, 2, 4, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_SJlhPMWAW", "SkfkNHa4z", "Hypz2G3Gz", "iclr_2018_SJlhPMWAW", "rJa-njiVz", "r1W-8-8EG", "iclr_2018_SJlhPMWAW", "iclr_2018_SJlhPMWAW", "iclr_2018_SJlhPMWAW", "iclr_2018_SJlhPMWAW", "B1ubmvfZM", "BkVdoCJZG", "ByvkN-k-G", "rJZdWiUxz", "iclr_2018_SJlhPMWAW" ]
iclr_2018_S1fcY-Z0-
Bayesian Hypernetworks
We propose Bayesian hypernetworks: a framework for approximate Bayesian inference in neural networks. A Bayesian hypernetwork, h, is a neural network which learns to transform a simple noise distribution, p(e) = N(0,I), to a distribution q(t) := q(h(e)) over the parameters t of another neural network (the "primary network"). We train q with variational inference, using an invertible h to enable efficient estimation of the variational lower bound on the posterior p(t | D) via sampling. In contrast to most methods for Bayesian deep learning, Bayesian hypernets can represent a complex multimodal approximate posterior with correlations between parameters, while enabling cheap iid sampling of q(t). In practice, Bayesian hypernets provide a better defense against adversarial examples than dropout, and also exhibit competitive performance on a suite of tasks which evaluate model uncertainty, including regularization, active learning, and anomaly detection.
rejected-papers
This paper presents a new method for approximate Bayesian inference in neural networks. The reviewers all found the proposed idea interesting but originally had questions about its novelty (with regard to normalizing flows) and questioned the technical rigor of the approach. The authors did a good job of addressing the technical concerns, causing two of the reviewers to raise their scores. However, the paper remains just borderline and none of the reviewers are willing to champion the paper as their questions about novelty and empirical evaluation remain. The reviewers all questioned fundamental technical aspects of the paper (which were clarified in the discussion), indicating that the paper requires more careful exposition of the technical contributions. Taking the reviewers feedback and discussion into account, running some more compelling experiments and rewriting the paper to make the technical aspects more clear would make this a much stronger submission. Pros: - Provides an interesting idea for approximate Bayesian inference in deep networks - The paper appears correct - The approach is scalable and tractable Cons: - The technical writing is not rigorous - The reviewers don't seem convinced by the empirical analysis - Incremental over existing (but recent) work (Luizos and Welling)
train
[ "HJC7ApOHM", "HJPY0ycef", "Hyjt0II4M", "rJNPwwYef", "B1Ev6EUEz", "Hy5hZMulM", "Skyy9P6Qz", "BJIlL5aGM", "B1H3S5pGG", "HknN5vp7z", "B1Z75Ppmf", "HJgbqvp7M", "SJXrL9TGM", "B1_f8G0fG", "HkMZb19ef", "HkpJDjmez" ]
[ "public", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "public", "public", "public", "public", "public", "public", "public", "public", "public", "official_reviewer" ]
[ "We agree that *mechanically*, the procedure for sampling the posterior in MNF and BHN is very similar, to whit:\n1. in BHNs, we sample the (scaling factors of the) parameters directly; this is equivalent to scaling units’ pre-activations.\n2. in MNF, they sample z (which can be viewed as a scaling factor of the activations), and then add some i.i.d. Gaussian noise (with std-dev sigma) to the resulting parameters. \nSo when sigma --> 0 in MFN, the only difference would be whether the outputs of the flow are used to rescale the activations or pre-activations.\n\nNevertheless, we don’t think the derivation in the MNF paper would behave well mathematically as sigma --> 0.\nThis sigma refers to the std-dev of q(W | zTf), and equation 13 of the paper (https://arxiv.org/pdf/1703.01961.pdf) includes the term −KL(q(W|zTf)||p(W)), which will go to -infinity as sigma --> 0 (since p(W) has a fixed variance).\nThis seems problematic, since this term is part of the objective function of MNF (eqn7).\n\nSo injecting extra i.i.d. Gaussian noise in the parameter space seems fundamental to MNF. This could be disadvantageous, since i.i.d. noise might put probability mass in poor regions of parameter space, although we might also expect it to provide additional regularization benefits (as Gaussian dropout does).", "* Edit: I increased my rating to 6. The authors fixed the first error I pointed out below. Regarding the second point: I still think it is possible to take a limit of sigma -> 0 in MNF, which makes the methods very similar.\n\nThe authors propose a new method of defining approximate posteriors for use in Bayesian neural networks. The idea of using hypernetworks for Bayesian inference is compelling, and the authors show some promising first results. I see two issues, and would be willing to increase my rating if these were sufficiently addressed.\n\n- The paper says it uses an \"isotropic standard normal prior on the weights of the network\". However, the stochastic part of the generated weights (i.e. the scales) is of a lower dimension than the weights. It seems to me this means that the KL divergence between prior and posterior is undefined, or infinite, as the posterior is only defined on a sub-manifold. What exactly is the loss term that is added to the training objective? And how is this justified?\n\n- The instantiation of Bayesian hypernetworks that is used in experiments seems to be a special case of the method of multiplicative normalizing flows as proposed by Louizos and Welling and discussed in this paper. If the variances / sigmas are zero in the latter method, their approximation seems functionally equivalent to Bayesian hypernetworks (though with different parameterization). Is my understanding correct? If so, the novelty of the proposed method is limited.", "Thank you for the clarifications; I have revised the score of the paper according to your comments. It is interesting that the quality of the predictive distribution does not change for the toy task when employing a full hypernetwork and probably points at optimization difficulties for the bound (as the uncertainty still does not increase for the right part of the function). ", "This paper presents Bayesian Hypernetworks; variational Bayesian neural networks where the variational posterior over the weights is governed by a hyper network that implements a normalizing flow (NF) such as RealNVP and IAF. 
As directly outputting the weight matrix with a hyper network is computationally expensive the authors instead propose to utilize weight normalisation on the weights and then use the hyper network to output scalar scaling variables for each hidden unit, similarly to what was done at [1]. The main difference with this prior work is that [1] consider these NF scaling variables as auxiliary random variables to a mean field Gaussian distribution over the weights whereas this paper attempts to posit a distribution directly on the weights via the NF. This avoids the nested variational approximation and auxiliary models of [1], which can potentially yield a tighter bound. The proposed method is evaluated on extensive experiments.\n\nThis paper seems like a plausible idea with extensive experiments but the similarity with [1] make it an incremental contribution and, furthermore, it seems that it has a technical issue with what is explained at Section 3.3. More specifically, if you generate the parameters \\theta according to Eq. 7 and posit a prior over \\theta then you will have a problematic variational bound as there will be a KL divergence, KL(q(\\theta) || p(\\theta)), with distributions of different support (since q(\\theta) is defined only along the directions spanned by u), which is infinite. For the KL to be valid you will need to posit a prior distribution over `g`, p(g), and then consider KL(q(g) || p(g)), with q(g) being given by the NF. From the experiment paragraph at page 5 though I deduct that you instead employ “an isotropic standard normal prior over the weights”, i.e. \\theta, thus I believe that you indeed have a problematic bound. How do you actually compute logq(\\theta) when you employ the parametrisation discussed at 3.3? Did you use that parametrisation in every experiment?\n\nOther than that, I believe that it would be interesting to experiment with a `full` hyper network, i.e. generating directly the entire parameter vector \\theta, e.g. at the toy regression experiment where the dimensionality is small. This would then better illustrate the tradeoffs you make when you reduce the flexibility of the hyper-network to just outputting the row scaling variables and the effect this has at the posterior approximation.\n \nTypos:\n(1) Page 3, 3.1.1 log(\\theta) -> logp(\\theta).\n(2) Eq. 6, it needs to be |det \\frac{\\partial h(\\epsilon)}{\\partial \\epsilon}|^{-1} or |det \\frac{\\partial h^{-1}(\\theta)}{\\partial \\theta}| for a valid change of variables formula.\n\n[1] Louizos & Welling, Multiplicative Normalizing Flows for Variational Bayesian Neural Networks.", "Many thanks for the detailed response. I still think that the novelty is limited and that a wider experimental work would have strengthen the paper. \n\nMy reason to use variational GPs is that they would be an alternative way to generate a posterior over model parameters.\n\nI'll probably leave my score to a 6\n", "This paper proposes Bayesian hypernetworks to carry out Bayesian learning of deep networks. The idea is to construct a generative model capable of approximating the posterior distribution over the parameters of deep networks. I think that the paper is well written and easy to follow. \n\nI like the idea of constructing general approximation strategies for complex posterior distribution and the proposed approach inherits all the scalability properties of modern deep learning techniques. In this respect, I think that the paper tackles a timely topic and is interesting to read. 
\n\nIt is not entirely clear to me why the Authors name their proposal Bayesian hypernetworks. This seems to suggest that also the hypernetwork is infered using Bayesian inference, but if I understand correctly this is not the case. \n\nI have some comments on novelty and realization of the experiments. In the positioning of the work in the literature, the Authors point out that hypernetworks have been proposed before, so it is not clear what is the actual novelty in the proposal. Is it the use of Real NVPs and IAFs as hypernetworks? These methods have been already proposed and extensively studied in the literature, and even if they have been adapted to be hypernetworks here, I believe that the novelty is fairly limited. \n\nThe experimental part is interesting as it explores a number of learning scenarios. However, I think that it would have been useful to add comparisons with standard variational inference (e.g., Graves, 2011) for deep networks to substantiate the claims that this approach underestimates uncertainty. I believe that this would strengthen the comparative evaluation. \n\nI think the paper would have made a stronger case by including other approaches to approximate posteriors using generative models. For example, the variational Gaussian process paper sounds like an ideal method to include here. \n\n[1] D. Tran, R. Ranganath, and D. M. Blei. Variational Gaussian process. arXiv preprint arXiv:1511.06499, 2015.", "The reviewers’ main concerns were: 1) the technical soundness of our approach (specifically WRT the potential degeneracy of the KL-divergence), and 2) lack of novelty, especially with respect to the approach of Luizos and Welling [1]. The first concern is addressed in our comment “Clarifying: we use KL(p(g) || q(g)), so it’s not degenerate.”\n\nWe acknowledge the similarity with Multiplicative Normalizing Flows (MNF) [1], but (quoting our response to reviewer3) note several differences as well:\n0) Mathematically, MNF treats all the neural net parameters as random variables, and derive a lower-bound on the ELBO to allow for a hierarchical posterior (z->W); whereas we only treat g as random variables, and perform standard variational inference on this model.\n1) While the scaling outputs of the normalizing flow in MNF (z) operate on units activations (or equivalently, outgoing weights), in BHNs, the scaling factors (g) operate on the pre-activations (or equivalently, incoming weights).\n2) We normalize the direction component of the weights (u := v/||v||), which means that the scale of g (which is analogous to z, in MNF) can control the complexity of the model. Crucially, this allows us to place a meaningful prior on g, and thus avoid introducing an auxiliary inference model for g.\n3) We also perform a more broad experimental evaluation, including experiments on active learning.\n\nIt may be worth mentioning that we developed Bayesian Hypernetworks independently, originally for submission to NIPS.\nEven in the context of [1], we believe our work provides the following valuable contributions to the community:\n1) Further experimental validation that normalizing flows can outperform simpler approaches to variational deep learning.\n2) We make connections with hypernetworks and generative models, which might inspire more creative parametrizations and/or applications of generative modelling techniques in this line of work, such as Shi et al. 
2017 [2] (although their paper also predates our ICLR submission).\n3) While both works demonstrate the benefits of using flexible and powerful approximate posteriors, in contrast to [1], our results suggest that it may not be necessary to construct a full posterior over the weights of a neural network in order to capture these benefits.\n\n\nWe’ve also updated the paper as follows:\n1) We’ve corrected the experiments section to explain that our prior is over the scaling factors (g), not the parameters (theta).\n2) We’ve added a section to the appendix which gives the derivation of our training objective. \n3) We’ve added an additional plot to Figure 1 using a Bayesian Hypernetwork to output a full posterior over all the primary network parameters (theta), as requested by reviewer 1. \n4) We’ve fixed the typos pointed out by reviewer 1.\n\n[1] Christos Louizos and Max Welling. Multiplicative normalizing flows for variational bayesian neural\nnetworks. arXiv e-prints, March 2017.\n[2] Jiaxin Shi, Shengyang Sun, and Jun Zhu. Implicit variational inference with kernel density ratio\nfitting. arXiv preprint arXiv:1705.10119, 2017.", "Reviewers 1 and 3 rightly note that our experiments section states that we place an isotropic Gaussian prior over the weights of the network. We apologize for the confusion. In fact, our prior and posterior are over the scaling factors, g, so the KL divergence is well-defined. Specifically, we use an isotropic Gaussian prior for g. The resulting penalty term is -log(p(g)) + log(q(g)). \n\nNote that under the weight-norm parametrization, ||w|| == g, and in fact, the -log(p(g)) term and it’s gradients are equivalent to a weight-decay penalty on the sampled weights (w), and thus encourage smaller norm weights and lower complexity functions, just like an isotropic Gaussian prior on the weights would.\n", "1) We didn't experiment with it very much. In our preliminary investigations it didn't seem to make much difference, so we just didn't look into it more. This is consistent with Blundell et al’s observation (see Bayes by backprop paper sec 3.1 last paragraph).\n\n2) We're using a different architecture. We just chose something that was easy to implement and train quickly. For instance, we don't use residual layers.\n\n3) 5000 examples. This is mentioned in the text, but we can add it to the caption as well\n\n4) We did not compare with ensemble methods.", "Thanks for the feedback; we’ll respond item-wise.\n\n“It is not entirely clear to me why the Authors name their proposal Bayesian hypernetworks. This seems to suggest that also the hypernetwork is infered using Bayesian inference, but if I understand correctly this is not the case.”\nThat’s correct. Rather, the hypernetwork *performs* (variational) Bayesian inference, which is why we chose this name.\n\n “ [...] it is not clear what is the actual novelty in the proposal. Is it the use of Real NVPs and IAFs as hypernetworks?”\nThe novelty of our approach (over e.g. “HyperNetworks” (Ha et al. 2016) is not the architecture of the hypernet, but rather the idea of inputting noise to the hypernetwork (as opposed to learned parameters or network activations) in order to learn a distribution over network parameters. But also note that other recent works (e.g. Luizos and Welling 2017) have also proposed similar ideas (without drawing connections to hypernetworks). 
\n\n”However, I think that it would have been useful to add comparisons with standard variational inference (e.g., Graves, 2011) for deep networks”\nWe do compare with standard mean-field Variational Inference as implemented by Blundell et al. (2015) in their paper “Weight Uncertainty in Neural Networks”, which we consider a stronger baseline than Graves (2011). With 0 coupling layers, Bayesian Hypernetworks reduce to a naive mean file method, as before applying additional normalizing flows, the spherical Gaussian noise is transformed by an element-wise scale and shift layer, yielding an arbitrary diagonal Gaussian.\nThe method of Blundell et al. (2015) is more modern and simpler than Graves’; Blundell et al. use the reparametrization trick to compute unbiased estimates of gradients of the variational posterior, whereas Graves uses Gaussian gradient identities due to the Bonnet and Price theorem, to estimate the gradient and further approximate the Fisher information matrix for the sigma parameters via a diagonal approximation of the Hessian. This yields a biased and noisy estimate of gradient, and Graves’ method is typically outperformed by more modern approaches, see, for instance Hernandez-Lobato and Adams (2015).\n\n“I think the paper would have made a stronger case by including other approaches to approximate posteriors using generative models. For example, the variational Gaussian process paper sounds like an ideal method to include here.”\nWe agree that comparing with more approaches would be valuable, but a thorough comparison of the many existing Bayesian methods is out of our scope. In this work, we chose to compare with two of the most popular modern techniques for variational inference in neural networks, Bayes by Backprop (Blundell et al., 2015), and MCdropout (Gal and Ghahramani, 2016). Is there any reason you believe variational Gaussian processes [1] in particular are an ideal method to compare with?\n\nReferences: \n- David Ha, Andrew Dai, and Quoc V. Le. Hypernetworks. 2017. URL https://openreview. net/pdf?id=rkpACe1lx.\n- Charles Blundell, Julien Cornebise, Koray Kavukcuoglu, and Daan Wierstra. Weight uncertainty in neural networks. In Proceedings of The 32nd International Conference on Machine Learning, pp. 1613–1622, 2015.\n- Christos Louizos and Max Welling. Multiplicative normalizing flows for variational bayesian neural networks. arXiv e-prints, March 2017.\n- Jose Miguel Hernandez-Lobato and Ryan Adams. Probabilistic backpropagation for scalable learning of Bayesian neural networks. In Proceedings of The 32nd International Conference on Machine Learning, pp. 1861–1869, 2015.\n- Yarin Gal and Zoubin Ghahramani. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In International Conference on Machine Learning, pp. 1050–1059, 2016.", "Thanks for the thoughtful and detailed comments.\n\nYou are correct that we must “posit a prior distribution over `g`, p(g), and then consider KL(q(g) || p(g)), with q(g) being given by the NF”, and this is in fact what we do. We recognize that the original submission was misleading on this point, and we’ve clarified our approach in the main text and an additional section of the appendix. Note that the log-likelihoods log p(theta) and log p(g) actually yield equivalent L2 weight-decay penalty terms (under a spherical Gaussian prior), and this justifies our choice of prior on g as a means of encouraging simpler models. 
\n\nTo compute log q(theta), we use the change of variables equation from equation 6 (with theta replaced with g using the weight norm parametrization); since we only need to evaluate log q(g) (and derivatives) at sampled values of g, IAF or RNVP are both able to compute the determinants efficiently.\n\nWe did use the weight-norm parametrization of section 3.3 in all of the experiments, but following your suggestion, we’ve included results with a fully-parametrized Bayesian Hypernetwork in Figure 1 of the updates paper. The results are nearly identical, indicating that our method makes a good trade-off between performance and scalable computation.\n\nWe’ve also corrected the typos; thanks for catching them!", "Thank you for the helpful feedback. We’ll aim to address both issues in a satisfying way.\n\nFirst, you are correct that the manifold of generated weights is of lower dimensionality than the space of all possible weights.\nIn fact, the quoted \"isotropic standard normal prior on the weights of the network\" in the experiments section should have said “[...] prior on the scaling factors (g) of the weights of the network”, and we’ve made that correction in the updated paper. Thus there is no technical issue with computing the KL-divergence, although this also means that we only maintain uncertainty over g, and use point estimates for the directions of the weights. Note, however, that the scaling factors exactly control the norm of the weights, and so our prior distribution on g expresses a preference for simpler (smaller-norm) weights and lower-complexity models, just as a prior on the weights (theta) would; this justifies our choice of this prior.\n\nSecond, our method is not actually a special case of multiplicative normalizing flows (MNF), since the derivation of MNF prohibits reducing the variance to 0. Our methods *are* quite similar, but make different trade-offs in order to allow scaling to large networks. Mathematically, MNF treats all the neural net parameters as random variables, and derives a lower-bound on the ELBO to allow for a hierarchical posterior (z->W); whereas we only treat g as random variables, and perform standard variational inference on this model.\nFurther differences in our work are: \n1) While the scaling outputs of the normalizing flow in MNF (z) operate on units activations (or equivalently, outgoing weights), in BHNs, the scaling factors (g) operate on the pre-activations (or equivalently, incoming weights).\n2) We normalize the direction component of the weights (u := v/||v||), which means that the scale of g (which is analogous to z, in MNF) can control the complexity of the model. Crucially, this allows us to place a meaningful prior on g, and thus avoid introducing an auxiliary inference model for g.\n3) We also perform a more broad experimental evaluation, including experiments on active learning.\n\nWe hope this addresses your concerns, and are happy to continue the conversation otherwise.", "There are many recent works on Bayesian DNNs, and it’s beyond out scope to compare with all of them, although of course it would be possible to do so and interesting to see the results.\nThe methods you mention have some significant differences from Variational methods (such as Bayesian Hypernets), with corresponding pros and cons, and thus somewhat different use cases. Thus we believe that improving approximate (e.g. 
Variational) inference approaches to Bayesian DNNs is a worthwhile research direction in and of itself.\nParticle-based methods are similar to ensemble methods: they train several different models in parallel. In [1], they try to maximize pairwise distances between N different models. This has the following pros and cons:\nPros: \n•\tMore likely to express different modes (with up to N modes)\nCons: \n•\tNeeds N time more memory.\n•\tNeeds N^2 computation for the pairwise distances.\n•\tOnly gives you access to N samples from the posterior.\nMeanwhile, Markov chain-based methods (such as SGLD [2] and precursors) have the advantage of allowing asymptotically unbiased samples from the true posterior (unlike particle-based or variational methods). However, the Metropolis Hastings step of [2] is expensive since the whole dataset needs to be evaluated; as a result, this step is usually not performed in practice, resulting in additional bias. Furthermore, successive samples from such Markov chain-based methods are typically highly correlated. This autocorrelation introduces bias if the algorithm terminates prematurely. In contrast, variational inference methods that feature directed sampling of the approximate posterior (such as our work) yield independent samples by construction. The disadvantage of variational inference methods is that they can produce poor samples of model parameters when the true posterior is not well approximated by the family of approximating distributions; our work addresses this problem by using a more flexible approximate posterior.", "Thanks for clarifying the difference. Well done.\n\nSince the the paper focuses on the variational methods for DNNs using hypernetworks, is it possible to revise the title to better reflect the content? The current title gives the impression that the paper represents all Bayesian inference methods. ", "Is it possible to compare with particle [1] and sample-based [2] methods to learn the weight uncertainty of neural networks? which have shown excellent performance.\n\n[1] Stein Variational Gradient Descent: A General Purpose Bayesian Inference Algorithm, 2016\n[2] Preconditioned Stochastic Gradient Langevin Dynamics for Deep Neural Networks, 2016", "* You use the same noise sample for all examples in a minibatch. Is this because the computation in the hypernet otherwise becomes too expensive? An advantage of your proposed approach seems to be that by outputting only the scales of the parameters you could easily use different scales for different examples as far as the primary network is concerned.\n\n* You claim to show \"that BHNs act as a regularizer, outperforming dropout and traditional mean field\". However the results shown in e.g. table 1 for CIFAR-10 seem to be quite a bit worse than previous SOTA results obtained with dropout. Why the difference?\n\n* Please expand the caption in Figure 3. Is this MNIST? With the full training set or a restricted set?\n\n* How do your anomaly detection results compare against methods that use ensembles? (e.g. http://papers.nips.cc/paper/7219-simple-and-scalable-predictive-uncertainty-estimation-using-deep-ensembles)" ]
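As an illustrative aside (not part of the original thread): a minimal sketch of the weight-norm parametrization discussed in these responses, w = g * v/||v||, with the approximate posterior placed only on the scaling factors g. For brevity q(g) is a diagonal Gaussian here rather than an IAF/RealNVP flow, and all names are placeholders rather than the authors' code.

import math
import torch

def sample_weights(v, mu, log_sigma):
    # v: unnormalized directions, shape (out_features, in_features)
    # mu, log_sigma: parameters of a diagonal-Gaussian q(g), shape (out_features,)
    sigma = torch.exp(log_sigma)
    g = mu + sigma * torch.randn_like(mu)      # reparametrized sample of the scales
    u = v / v.norm(dim=1, keepdim=True)        # unit-norm directions
    w = g.unsqueeze(1) * u                     # each row of w has norm equal to g
    log_q_g = (-0.5 * ((g - mu) / sigma) ** 2
               - log_sigma - 0.5 * math.log(2 * math.pi)).sum()
    return w, g, log_q_g

w, g, log_q = sample_weights(torch.randn(128, 64), torch.zeros(128), torch.zeros(128))

A full Bayesian hypernetwork would instead produce g with an invertible flow and evaluate log q(g) via the change-of-variables formula, as described in the responses above; log q(g) then enters the KL term against the prior p(g).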
[ -1, 6, -1, 6, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, 4, -1, 4, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "HJPY0ycef", "iclr_2018_S1fcY-Z0-", "B1Z75Ppmf", "iclr_2018_S1fcY-Z0-", "HknN5vp7z", "iclr_2018_S1fcY-Z0-", "iclr_2018_S1fcY-Z0-", "iclr_2018_S1fcY-Z0-", "HkpJDjmez", "Hy5hZMulM", "rJNPwwYef", "HJPY0ycef", "HkMZb19ef", "SJXrL9TGM", "iclr_2018_S1fcY-Z0-", "iclr_2018_S1fcY-Z0-" ]
iclr_2018_SkqV-XZRZ
Variational Bi-LSTMs
Recurrent neural networks like long short-term memory (LSTM) are important architectures for sequential prediction tasks. LSTMs (and RNNs in general) model sequences along the forward time direction. Bidirectional LSTMs (Bi-LSTMs), which model sequences along both forward and backward directions, generally perform better at such tasks because they capture a richer representation of the data. In the training of Bi-LSTMs, the forward and backward paths are learned independently. We propose a variant of the Bi-LSTM architecture, which we call Variational Bi-LSTM, that creates a dependence between the two paths (during training, but which may be omitted during inference). Our model acts as a regularizer and encourages the two networks to inform each other in making their respective predictions using distinct information. We perform ablation studies to better understand the different components of our model and evaluate the method on various benchmarks, showing state-of-the-art performance.
rejected-papers
This paper proposes a method for performing stochastic variational inference for bidirectional LSTMs through introducing an additional latent variable that induces a dependence between the forward and backward directions. The authors demonstrate that their method achieves very strong empirical performance (log-likelihood on test data) on the benchmark TIMIT and BLIZZARD datasets. The paper is borderline in terms of scores with a 7, 6 and 4. Unfortunately the highest rating also corresponds to the least thorough review and that review seems to indicate that the reviewer found the technical exposition confusing. AnonReviewer2 also found the writing confusing and discovered mistakes in the technical aspects of the paper (e.g. in Eq 1). Unfortunately, the reviewer who seemed to find the paper most easy to understand also gave the lowest score. A trend among the reviewers and anonymous comments was that the paper didn't do a good enough job of placing itself in the context of related work (Goyal et. al, "Z-forcing") in particular. The authors seem to have addressed this (curiously in an anonymous link and not in an updated manuscript) but the manuscript itself has not been updated. In general, this paper presents an interesting idea with strong empirical results. The paper itself is not well composed, however, and can be improved upon significantly. Taking the reviews into account and including a better treatment of related work in writing and empirically will make this a much stronger paper. Pros: - Strong empirical performance (log-likelihood on test data) - A neat idea - Deep generative models are of great interest to the community Cons: - Incremental in relation to Goyal et al., 2017 - Needs better treatment of related work - The writing is confusing and the technical exposition is not clear enough
test
[ "HyrTqpoVf", "Hy7uO8PlG", "HJg6l0FxM", "H1BQNZqgf", "SkL8dLWEf", "SkeZ-Btff", "BJRGgSFff", "ByA_NNKzz", "rJU7Epmef", "SknDf6mxz", "S1-LoAGxM", "H1RmckMeM", "rkK4-0bgG", "ByTDAogxG" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "public", "public", "author", "public" ]
[ "We apologize for the confusion. If we consider the Jensen's inequality derived from the term \\log p(b,h) as a starting point, then your argument about using alpha and beta equal to 1 would be absolutely correct. However, we do not have the term \\log p(b,h) in the original objective (equation 4). To arrive at our objective, consider that we have designed the architecture as shown in figure 1. Then we derive equation 10 from equation 5 as follows:\n\n1. The term p(x_{t+1} | x_{1:t}, z_t, \\tilde{b}_t) in equation 5 is instantiated with p(x_{t+1} | h_t) in equation 10.\n2. The KLD term in equation 10 is an instantiation of the KLD term in equation 5 based on our architecture.\n3. Finally, we add 3 regularization terms: p(x_{t+1} | b_t) + alpha p(b_t | z_t) + beta p(h_t | z_t)\n\nNotice that using only the first two steps as our objective leads to a b_t that is a deterministic but random function of x_t and b_{t-1} depending completely on the initialization of the weights of the backward LSTM. The 3rd step adds two regularizations related to b_t and a regularization on h_t. In other words, you are right in pointing out that equation 10 is not exactly the lower bound stated in equation 5, but rather it is one with added regularization terms which help the model generalize better. Specifically, the regularization term p(x_{t+1}|b_t) helps b_t learn information about future, the second regularization term alpha p(b_t | z_t) makes \\tild{b_t} learn to be close to b_t that can help predict x_{t+1}, and the last regularization term beta p(h_t | z_t) regularizes h_t by imposing a reconstruction loss. \n\nThank you for pointing it out that the name “stochastic gradient” can be misleading since it has been used previously. We have changed it to “skipping gradient” in the latest version of our paper.\n", "*Quality*\n\nThe paper is easy to parse, with clear diagrams and derivations at the start. The problem context is clearly stated, as is the proposed model.\n\nThe improvements in terms of average log-likelihood are clear. The model does improve over state-of-the-art in some cases, but not all.\n\nBased on the presented findings, it is difficult to determine the quality of the learned models overall, since they are only evaluated in terms of average log likelihood. It is also difficult to determine whether the improvements are due to the model change, or some difference in how the models themselves were trained (particularly in the case of Z-Forcing, a closely related technique). 
I would like to see more exploration of this point, as the section titled “ablation studies” is short and does not sufficiently address the issue of what component of the model is contributing to the observed improvements in average log-likelihood.\n\nHence, I have assigned a score of \"4\" for the following reasons: the quality of the generated models is unclear; the paper does not clearly distinguish itself from the closely-related Z-Forcing concept (published at NIPS 2017); and the reasons for the improvements shown in average log-likelihood are not explored sufficiently, that is, the ablation studies don't eliminate key parts of the model that could provide this information.\n\nMore information on this decision is given in the remainder.\n\n*Clarity*\n\nA lack of generated samples in the Experimental Results section makes it difficult to evaluate the performance of the models; log-likelihood alone can be an inadequate measure of performance without some care in how it is calculated and interpreted (refer, e.g., to Theis et al. 2016, “A Note on the Evaluation of Generative Models”).\n\nThere are some typos and organizational issues. For example, VAEs are reintroduced in the Related Works section, only to provide an explanation for an unrelated optimization challenge with the use of RNNs as encoders and decoders.\n\nI also find the motivations for the proposed model itself a little unclear. It seems unnatural to introduce a side-channel-cum-regularizer between a sequence moving forward in time and the same sequence moving backwards, through a variational distribution. In the introduction, improved regularization for LSTM models is cited as a primary motivation for introducing and learning two approximate distributions for latent variables between the forward and backward paths of a bi-LSTM. Is there a serious need for new regularization in such models? The need for this particular regularization choice is not particularly clear based on this explanation, nor are the improvements state-of-the-art in all cases. This weakens a possible theoretical contribution of the paper.\n\n*Originality*\n\nThe proposed modification appears to amount to a regularizer for bi-LSTMs which bears close similarity to Z-Forcing (cited in the paper). I recommend a more careful comparison between the two methods. Without such a comparison, they are a little hard to distinguish, and the originality of this paper is hard to evaluate. Both appear to employ the same core idea of regularizing an LSTM using a learned variational distributions. The differences *seem* to be in the small details, and these details appear to provide better performance in terms of average log-likelihood on all tasks compared to Z-Forcing--but, crucially, not compared to other models in all cases.", "This paper proposes a particular form of variational RNN that uses a forward likelihood and a backwards posterior. Additional regularization terms are also added to encourage the model to encode longer term dependencies in its latent distributions.\n\nMy first concern with this paper is that the derivation in Eq. 1 does not seem to be correct. There is a p(z_1:T) term that should appear in the integrand.\n\nIt is not clear to me why h_t should depend on \\tilde{b}_t. All paths from input to output through \\tilde{b}_t also pass through z_t so I don't see how this could be adding information. It may add capacity to the decoder in the form of extra weights, but the same could be achieved by making z_t larger. 
Why not treat \\tilde{b}_t symmetrically to \\tilde{h}_t, and use it only as a regularizer? \n\nIn the no reconstruction loss experiments do you still sample \\tilde{b}_t in the generative part? Baselines where the \\tilde{b}_t -> h_t edge is removed would be very nice.\n\nIt seems the Blizzard results in Figure 2 are missing no reconstruction loss + full backprop.\n\nI don't understand the description of the \"Skip Gradient\" trick. Exactly which gradients are you skipping at random?\n\nDo you have any intuition for why it is sometimes necessary to set beta=0?\n", "This paper builds a sequential deep generative model with (1) an inference network parameterized by an RNN running from the future to the past and (2) an explicit representation of the hidden state of the backward RNN in the generative model. The model is validated on held-out likelihood via the ELBO on text, handwriting, speech and images. It presents good empirical results and works at par with or better than many other baselines considered.\n\nThe main source of novelty here is the choice made in the transition function of z_t to also incorporate an explicit variable to model the hidden state of the backward RNN at inference time and use that random variable in the generative process. This is a choice of structural prior for the transition function of the generative model that I think lends it more expressivity, realizing the empirical gains obtained.\n\nI found the presentation of both the model and learning objective to be confusing and had a hard time following it. The source of my confusion is that \\tilde{b} (the following argument applies equivalently to \\tilde{h}) is argued to be a latent variable. Yet it is not inferred (via a variational distribution) during training.\n\nPlease correct me if I'm wrong but I believe that an easier to understand way to explain the model is as follows: both \\tilde{b} and \\tilde{h} should be presented as *observed* random variables during *training* and latent at inference time. Training then comprises maximizing the marginal likelihood of the data *and* maximizing the conditional likelihood of the two observed variables (via p_psi and p_eta; conditioned on z_t). Under this view, setting beta to 0 simply corresponds to not observing \\tilde{h_t}. alpha can be annealed but should never be set to anything less than 1 without breaking the semantics of the learned generative model.\n\nConsider Figure 1(b). It seems that the core difference between this work and [Chung et al.] is that this work parameterizes q(Z_t) using x_t....x_T (via a backward RNN). This choice of inference network can be motivated from the point of view of building a better approximation to the structure of the posterior distribution of Z_t under the generative model. Both [Fracarro et al.] and [Krishnan et al.] (https://arxiv.org/pdf/1609.09869.pdf) use RNNs from x_T to x_1 to train sequential state space models. [Gao et al.] (https://arxiv.org/pdf/1605.08454.pdf) derive an inference network with a block-diagonal structure motivated by correlations in the posterior distribution. Incorporating a discussion around this idea would provide useful context for where this work stands amongst the many sequential deep generative models in the literature.\n\nQuestions for the authors:\n* How important is modeling \\tilde{h_t} in TIMIT, Blizzard and IMDB?\n* Did you try annealing the KL divergence in the PTB experiment? 
Based on the KL divergence you report it seems the latent variable is not necessary.\n\nOverall, I find the model to be interesting and it performs well empirically. However, the text of the paper lacks a bit of context and clarity, which makes it challenging to understand in its current form.", "\nAfter reading the author's response, R3's review, and the revised paper, I am more concerned about the clarity in this work than I was initially.\n\n\"Regarding your comment .... that’s true...... We treat alpha as a hyperparameter, so its value should be chosen based on validation set. The positive value of alpha governs how much weightage is given to the reconstruction of b_t vs the rest of the terms in the cost.\"\n\nOne of my concerns that still remains from my initial review is that there is no probabilistic reason provided for using alpha as a hyperparameter.\n\nLet me try and illustrate why I think alpha (and beta, but for the moment, we'll ignore the latent variable \\tilde{h_t-1} and focus just on \\tilde{b}_t) should have the value 1.\nConsider a simplification of the model that looks at a single time slice of the model -- i.e. just the variables z_t, \\tilde{b}_t and h_t in Figure 1(b) (we'll refer to them as z, b, and h respectively).\n\nFor a single time-slice variant of this model, the joint distribution is p(z)p(b|z)p(h|b,z).\nAs mentioned earlier and acknowledged, at training time, b and h are observed while z is latent. So if we want to maximize the likelihood of the observed data during training, we have:\n\\log p(b,h) = \\log \\int_z p(b,h,z) = \\log \\int_z p(b|z) p(h|b,z) p(z) * q(z|h)/q(z|h) \\geq (Jensen's) E_{q(z|h)}[\\log p(b|z)+ \\log p(h|b,z)] - KL[q(z|h)||p(z)]\n\nContrast this with Equation (10) in the paper. Note that using alpha <1 implies that we multiply a negative number (namely log p(b|z)) with a fraction which always *increases* the number artificially. This means the resulting objective is no longer a valid lower-bound on the marginal likelihood of the observed data.\nWhile I can potentially see a case for annealing alpha to 1, I'm a little concerned by the numbers reported when alpha is set to 0.0001. \nWhat is the number reported at test time -- does it use alpha? Please do correct me if I've missed something and there is a probabilistic reason for why alpha can be <1; as far as I can tell, it has only appeared in Equation (10) and not before in Equation (5).\n\nOverall, while I think the paper outlines an interesting idea,\nin its current form (in the revised version) I still find it difficult to follow and not appropriately motivated\nor set in context of recent work (see also comments by R3). Finally, a minor point -- At the bottom of Page 4 there is a paragraph about a heuristic used at training time.\nPlease expand on this further if you found it useful by explicitly stating how it changes the lower-bound at training time. \nPlease also use a different name than \"Stochastic Backpropagation\" which has been used before in https://arxiv.org/abs/1401.4082.", "Thank you for your positive comments. Indeed, making the forward LSTM ‘aware’ of the backward LSTM’s state is a crucial factor in improving the expressivity of our model. \n\nWe apologize for the lack of clarity in the submitted version; we have made the text clearer in our latest version. 
To clarify the doubt you mentioned about \\tilde{b}, we do sample \\tilde{b}_t during training from p_{\\psi}(\\tilde{b}_t | z_t), and feed it to h_t, where z_t is sampled from q_\\phi (z_t | h_{t-1}, b_t). This process of inferring \\tilde{b}_t and feeding to h_t however is implicitly captured in the term p(x_{t+1} | h_t ). The only \\tilde{b}_t dependent term that appears in the objective is p_{\\psi}(b_t | z_t ), which (to be precise) maximizes p_{\\psi}( \\tilde{b}_t = b_t | z_t ).\n\nRegarding your comment “both \\tilde{b} and \\tilde{h} should be presented as *observed* random variables during *training* and latent at inference time”, that’s true. \n\nRegarding your comment about not setting alpha to anything less than 1, we are not sure if we understand your concern correctly. We treat alpha as a hyperparameter, so its value should be chosen based on validation set. The positive value of alpha governs how much weightage is given to the reconstruction of b_t vs the rest of the terms in the cost.\n\nAs suggested by the reviewer, here is a brief comparison between our model and the papers cited by the reviewer: \nIn krishnan et al, the data x_t at each time step t is modeled using a VAE with hidden state z, where the approximate posterior q_{\\phi} (z | x) is a function of the forward and backward hidden states, and the KL divergence minimizes the difference between this approximate posterior and the prior over z. The key difference between their model and ours is that their model learns a VAE on the data space, i.e., the reconstruction error is on the data itself, such that the latent variable z of the VAE is a function of the Bi-RNN's hidden states. In our model on the other hand, the VAE is learned on the Bi-LSTM's hidden state, i.e., the reconstruction error is on the forward and backward LSTM's hidden states h_t and b_t which share the latent variable z_t. In Gao et al, the approximate prior at time step t is modeled as q_{\\phi} (z_t | z_{t-1}, x_t ), which factorizes as q_{\\phi} (z_t | z_{t-1} ) . q_{\\phi} (z_t | x_t ). Each of the latter two functions are modeled as Gaussians with mean and variance as a non-linear function of z_{t-1} and x_t respectively. Thus this model does not make use of recurrent neural networks in modeling the data. Secondly, similar to Krishnan et al, this model learns to reconstruct data instead of a hidden space, as in our model.\n\nWe did try experiments without modeling \\tilde{h}_t but found the results to be slightly worse. We believe it acts as a regularizer on the activation h_t learned by the model. But in general the coefficient \\beta used for the reconstruction loss of h_t is a hyperparameter and so it should be chosen using the validation set. \nIndeed, in the ablation studies, we report that the KL term is not useful in the case of PTB dataset because the KL term is small and performance remains unaffected when not including it in the objective. But performance drops in the case of the other datasets if the KL term is removed since for these datasets the KL term is large. \n\nOnce again, we apologize for the lack of clarity. We have made the text clearer in the latest version of our paper which can be found at the anonymous link (https://anonfile.com/W6i9bad3b4/ICLR18_VLM.pdf).\n", "Thank you for your constructive comments. You are right. We have corrected Eq 1 in our latest version (https://anonfile.com/W6i9bad3b4/ICLR18_VLM.pdf). 
Please note that our model implementation was not affected by these writing mistakes.\n\nTo answer why it is beneficial to make h_t dependent on \\tilde{b}_t, note that forward and backward LSTMs model data sequences independently in different ways in a traditional Bi-LSTM. Creating the dependence from \\tilde{b}_t to h_t is important to make the forward LSTM make use of information from the backward LSTM and thus learn a richer representation. This representation is useful in tasks like next step prediction where only the forward LSTM is used during inference and hence the structure captured by the backward LSTM is lost in the case of a traditional Bi-LSTM. In our model this structure is utilized.\n\nWe did experiments where we removed the connection from \\tilde{b}_t to h_t and found that only using \\tilde{b}_t in the reconstruction cost (as a regularizer) does not produce as good results as our model where both the reconstruction and feeding \\tilde{b}_t to h_t are used. Thus feeding \\tilde{b}_t to h_t helps the forward model during inference. On the flip side, we do not pass \\tilde{h}_t to b_t because we do not use the backward LSTM during inference, and so it may not benefit us.\n\nYes, in the no reconstruction loss experiments we do sample \\tilde{b}_t.\n\nWe have uploaded the Blizzard results in Figure 2 with no reconstruction loss + full backprop that you asked for to this anonymous link https://anonfile.com/j3nbo0dbbd/blz_rec_sdc_full.png. It can be seen that reconstruction loss with stochastic backprop yields the best performance compared to all other alternatives.\n\nRegarding setting \\beta=0, we treat it as a hyperparameter and so it is chosen using the validation set. We do not have any explanation for why setting it to zero is sometimes better. \n\nWe have made the description of skip gradient clearer in the latest version. The idea is to stochastically skip gradients of the auxiliary reconstruction costs with respect to the recurrent units from back-propagating through time. To achieve this, at each time step, a mask is drawn from a Bernoulli distribution, which governs whether to skip the gradient or to back-propagate it for each data sample.", "Thank you for your detailed comments. The major concern of the reviewer seems to be the lack of a clear contrast between our proposed model and Z-Forcing, which is closely related to our model. We will try to make this distinction clearer. In order to showcase the differences between Z-Forcing and our model in terms of how our model is trained, rather than just the architectural differences (as stressed by the reviewer in their comments), we additionally conducted the following experiment which incrementally adds the additional optimization changes to Z-Forcing (that we added to train our model). Specifically, we run experiments to see the effects of stochastic backprop on Z-Forcing. We also add a reconstruction cost on h_t in the Z-Forcing model as another separate experiment. So for a detailed comparison, we show the evolution of Bits Per Character (BPC) on PTB for four cases:\n1. Z-forcing\n2. Z-forcing + stochastic backprop (on the auxiliary cost)\n3. Z-forcing + stochastic backprop (on the auxiliary cost) + reconstruction/auxiliary loss\n4. Variational Bi-LSTM\nThe plot can be found in this anonymous link https://anonfile.com/HdFdcadbb6/ptb_sdc_zf_rec.png . As can be seen, there is a gradual improvement from model 1 to model 4. 
\nFurther, we also have the following ablation studies in the latest version (https://anonfile.com/W6i9bad3b4/ICLR18_VLM.pdf) of our paper:\n- Reconstruction loss on h_t vs activity regularization on h_t -- here we show how the auxiliary reconstruction loss on h_t performs compared with simply using an l2 regularization on h_t. \n- Use of parametric encoder prior vs. fixed (standard VAE) Gaussian prior -- here we discuss the importance of the VAE prior we propose (which is conditional over h_t) compared to a fixed Gaussian that is usually used in VAEs.\n- Effectiveness of auxiliary costs and stochastic back-propagation -- here we show that stochastic backpropagation helps during optimization.\n- Importance of sampling from VAE prior during training -- here we show that sampling z_t during training has a regularization effect on the model.\n\nTo address the reviewer’s concern regarding additional qualitative analysis of the data generated by our model, here are some of the samples generated by our model on the IMDB dataset:\nit was very well directed by the critics and critics who have n't seen it .\ni did n't want to see this movie .\nbut the movie does have a few laughs .\nthe action is also very well acted but it has a great story .\nit 's just a bit too slow and the ending is very good .\nthis film is not as bad as you 've heard .\nit 's also quite a good film with a great cast and great story lines .\nand what the movie was nominated for is a great cast .\nit 's a good film and you ca n't miss it .\n\nRegarding your concern about the need for a new regularizer, our reasoning is as follows. In the paper we already mention that the forward and backward LSTM capture different aspects of a temporal sequence, and this is the reason why (for traditional Bi-LSTMs) concatenating the hidden representations from the two LSTMs leads to better performance in tasks where such a concatenation is possible. However, this concatenation is not possible in next step prediction tasks (e.g., language generation) where only the forward LSTM must be used during inference. Hence the information captured by the backward LSTM in a Bi-LSTM trained separately from the forward LSTM is lost. For this reason, an objective/regularization that jointly optimizes the two LSTMs in a Bi-LSTM is needed. Other examples of such joint optimization are Z-Forcing and twin networks that we cite in our paper.", "The analysis of our method with and without stochastic backprop as well as with and without reconstruction losses is provided in the ablation studies section (figure 2). The text around this figure is unfortunately missing in the submitted version but can be found in the latest (anonymous) version we have linked in our previous reply. This analysis shows how our model benefits from stochastic backprop.\n\nWe had also run experiments to see the effects of stochastic backprop on Z-forcing. We additionally add a reconstruction cost on h_t in the Z-forcing model as another separate experiment. So for a detailed comparison, we show the evolution of BPC on PTB for four models:\n1. Z-forcing\n2. Z-forcing + stochastic backprop (on the auxiliary cost)\n3. Z-forcing + stochastic backprop (on the auxiliary cost) + reconstruction/auxiliary loss\n4. Variational Bi-LSTM\nThe plot can be found in this anonymous link https://anonfile.com/HdFdcadbb6/ptb_sdc_zf_rec.png . As can be seen, there is a gradual improvement from model 1 to model 4. 
\n\nWe agree with your suggestion of exploring the usefulness of the latent variable z and we ourselves had given thought to it. However, this is not the focus of our work, and this analysis applies to all models that make use of a latent variable in LSTMs (including Z-forcing). So we leave this as separate future work.", "We thank you for raising this question. Your first doubt regarding \\tilde{b}_t is because of ambiguity in our notation. We do sample \\tilde{b}_t from p_psi. These samples are used in the third term-- alpha log p_psi(b_t | z_t). To remove ambiguity, this term should be read as-- alpha log p_psi(\\tilde{b}_t = b_t | z_t).\n\nRegarding your second question about terms that should relate \\tilde{b}_t and h_t, we believe the notations are correct. Imagine if we were to write the objective for a simple LSTM, then this objective would simply contain a summation of terms p(x_{t+1} | h_t) over time steps t. The dependence of h_t on the previous time steps are implicit. Similarly, in our objective, the term p(x_{t+1} | h_t) implicitly contains the dependence on \\tilde{b}_t, z_t and the previous time step variables.", "Dear authors,\n\nit's an interesting research, but I still have some questions about the objective, and I hope you can give me a help!\n\nIn Eq.10, the objective, \\tilde{b}_t is sampled from p_psi. However, there is no \\tilde{b}_t in the inside term, so it seems that there is no need to sample \\tilde{b}_t? Is it just a typo?\n\nNext, based on figure 1(a) and your answer, I think there may be some terms to stand for the direct connection between h_t and \\tilde{b}_t. However, it seems that, in Eq. 10 , there is no term to stand for the directly conditional dependence between h_t and \\tilde{b}_t( or b_t ). I guess maybe the term p(x_{t+1} | b_t) includes relations like p(x_{t+1} | h_t)p(h_t | b_t), is it true?\nThanks for your help!", "\"Another possible difference in your implementation could be that we suggest using stochastic back-propagation through the auxiliary costs. This entails that the gradients through the auxiliary cost be stochastically dropped during training\"\n\nI was not dropping these gradients, this is the only difference I could think of. Though, this raises the point, is all the benefit actually coming from this \"stochastic back-propagation\" over Z-Forcing ? As without using this stochastic back-propagation, results seems more or less same to Z-Forcing (https://arxiv.org/abs/1711.05411)\n\nI'd encourage the authors to add the results with/without \"stochastic back-propagation\" and compare themselves to the results which Z-Forcing paper reports.\n\nAnother thing which would make this submission strong, is to analyze how useful the latents (learned z's) are. For ex. may be for some classification task. ", "We appreciate your interest in our paper and your effort to reproduce our results. \nWe apologize for the lack of clarity in the submitted version. We have improved the model description in our current version. We would like to point out that while our model is similar in spirit to Z-forcing, there are notable differences in the derivation of the variational lower bound and the auxiliary costs that provide improvement in performance because the forward LSTM is implicitly directly fed the backward LSTM's state which is in contrast with Z-forcing.\n\nFrom your comment, it seems to us that you add a reconstruction cost on h_t on top of the Z-forcing objective. 
If this is true, then we would like to clarify that in addition to adding the reconstruction cost and feeding z_t to h_t, we also pass \\tilde{b}_t to h_t. Amongst other differences, this is a crucial difference between Z-forcing and our model. In other words, during training, we sample \\tilde{b}_t for a sampled z_t, and encourage this \\tilde{b}_t to be similar to b_t, and also feed this \\tilde{b}_t to h_t. In this way, our model learns to implicitly use b_t during training as an input to h_t. This is different from Z-forcing where the model passes z_t to h_t while minimizing the KL divergence difference between the prior and posterior over z_t.\n\nAnother possible difference in your implementation could be that we suggest using stochastic backpropagation through the auxiliary costs. This entails that the gradients through the auxiliary cost be stochastically dropped during training.\n\nWe hope these suggestions help in reproducing the results we report in our paper.\n\nFor further clarification, we have uploaded an anonymous copy of the latest version of our paper here: https://anonfile.com/W6i9bad3b4/ICLR18_VLM.pdf.", "Hello Authors, \n\nVery interesting work! \n\nI have been trying to reproduce your experiments. As far as I understand it is straightforward extension to Z-Forcing(https://arxiv.org/abs/1711.05411). I tried to replicate your results using the Z-Forcing code(https://github.com/sordonia/zforcing) so far I have not been able to replicate your results. Adding the reconstruction cost (in the forward RNN, which was also missing from Z-Forcing) does not seem to have any impact on results. \n\nSo are you doing something which is not mentioned in the paper?\n\n " ]
[ -1, 4, 7, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, 4, 3, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "SkL8dLWEf", "iclr_2018_SkqV-XZRZ", "iclr_2018_SkqV-XZRZ", "iclr_2018_SkqV-XZRZ", "SkeZ-Btff", "H1BQNZqgf", "HJg6l0FxM", "Hy7uO8PlG", "H1RmckMeM", "S1-LoAGxM", "iclr_2018_SkqV-XZRZ", "rkK4-0bgG", "ByTDAogxG", "iclr_2018_SkqV-XZRZ" ]
iclr_2018_Bk6qQGWRb
Efficient Exploration through Bayesian Deep Q-Networks
We propose Bayesian Deep Q-Network (BDQN), a practical Thompson sampling based Reinforcement Learning (RL) Algorithm. Thompson sampling allows for targeted exploration in high dimensions through posterior sampling but is usually computationally expensive. We address this limitation by introducing uncertainty only at the output layer of the network through a Bayesian Linear Regression (BLR) model, which can be trained with fast closed-form updates and its samples can be drawn efficiently through the Gaussian distribution. We apply our method to a wide range of Atari Arcade Learning Environments. Since BDQN carries out more efficient exploration, it is able to reach higher rewards substantially faster than a key baseline, DDQN.
rejected-papers
This work develops a methodology for exploration in deep Q-learning through Thompson sampling to learn to play Atari games. The major innovation is to perform a Bayesian linear regression on the last layer of the deep neural network mapping from frames to Q-values. This Bayesian linear regression allows for efficiently drawing (approximate) samples from the network. A careful methodology is presented that achieves impressive results on a subset of Atari games. The initial reviews all indicated that the results were impressive but questioned the rigor of the empirical analysis and the implementation of the baselines. The authors have since improved the baselines and demonstrated impressive results across more games but questions over the empirical analysis remain (by AnonReviewer3 for instance) and the results still span only a small subset of the Atari suite. The reviewers took issue with the treatment of related work, placing the contributions of this paper in relation to previous literature. In general, this paper shows tremendous promise, but is just below borderline. It is very close to a strong and impressive paper, but requires more careful empirical work and a better treatment of related work. Hopefully the reviews and the discussion process will help make the paper much stronger for a future submission. Pros: - Very impressive results on a subset of Atari games - A simple and elegant solution to achieving approximate samples from the Q-network - The paper is well written and the methodology is clearly explained Cons: - Questions remain about the rigor of the empirical analysis (comparison to baselines) - Requires more thoughtful comparison in the manuscript to related literature - The theoretical justification for the proposed methods is not strong
val
[ "SkMlZJ9gG", "rJVfxQ2lf", "HkHX9bm-M", "SkCDCP6Qz", "r1fs6DpXf", "HkPHDDpXG", "S1Ux1vpmG", "Hy4eOLaXM", "HJ5TIU6Qz", "BkisdVq7f", "rk1A7xIGz", "B1Y_pIHfM", "ryfx6L7fM", "HkUmeK2WG", "B1MgfN_WM", "H1pN_dqlz", "BJpzYoqlz", "rJnxeAYeM", "ByuOaTteM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "public", "author", "author", "public", "public" ]
[ "The authors propose a new algorithm for exploration in Deep RL. They apply Bayesian linear regression, given the last layer of a DQN network as features, to estimate the Q function for each action. Posterior weights are sampled to select actions during execution (Thompson Sampling style). I generally liked the paper and the approach, here are some more detailed comments.\n\nUnlike traditional regression, here we are not observing noisy realisations of the true target, since the algorithm is bootstrapping on non-stationary targets. It’s not immediately clear what the semantics of this posterior are then. Take for example the case where a particular transition (s,a,r,s’) gets replayed multiple times in a row, the posterior about Q(s,a) might then become overly confident even though no new observation was introduced. \n\nPrevious applications of TS to MDPs (Strens, (A Bayesian framework for RL) 2000; Osband 2013) commit to a posterior sample for an episode. But the proposed algorithm samples every T_sample steps, did you find this beneficial to wait longer before resampling? It would be useful to comment on that aspect.\n\nThe method is evaluated on 6 Atari games (How were the games selected? Do they have exploration challenges?) against a single baseline (DDQN). DDQN wasn’t proposed as an exploration method so it would be good to justify why this is an appropriate baseline (versus other exploration methods). The authors argue they could not reproduce Osband’s bootstrapped DQN, which is also TS-based, but you could at least have reported their scores. \n\nOn these games versus (their implementation of) DDQN, the results seem encouraging. But it would be good to know whether the approach works well across games and is competitive against other stronger baselines. Alternatively, some evidence that interesting exploratory behavior is obtained (in Atari or even smaller domain) would help convince the reader that the approach does what it claims in practice.\n\nIn addition, your reported score on Atlantis of ~2M seems too big. Did you cap the max episode time to 30mins? As is done in the baselines usually.\n\n\nMinor things:\n-“TS finds the true Q-function very fast” But that contradicts the previous statements, I think you mean something different. If TS does not select certain actions, the Q-function would not be updated for these actions. It might find the optimal policy quickly though, even though it doesn’t resolve the entire value function completely.\n-Which epsilon did you use for evaluation of DDQN in the experiments? It’s a bit suspicious that it doesn’t achieve 20+ in Pong.\n-The history of how to go from a Bellman equation to a sample-based update seems a bit distorted. Sample-based RL did not originate in 2008. Also, DQN does not optimize the Bellman residual, it’s a TD update. \n", "\nThe authors describe how to use Bayesian neural networks with Thompson sampling\nfor efficient exploration in q-learning. The Bayesian neural networks are only\nBayesian in the last layer. That is, the authors learn all the previous layers\nby finding point estimates. The Bayesian learning of the last layer is then\ntractable since it consists of a linear Gaussian model. The resulting method is\ncalled BDQL. 
The experiments performed show that the proposed approach, after\nhyper-parameter tuning, significantly outperforms the epsilon-greedy\nexploration approaches such as DDQN.\n\nQuality:\n\nI am very concerned about the authors stating on page 1 \"we sample from the\nposterior on the set of Q-functions\". I believe this statement is not correct.\nThe Bayesian posterior distribution is obtained by combining an assumed\ngenerative model for the data, data sampled from that model and some prior\nassumptions. In this paper there is no generative model for the data and the\ndata obtained is not actually sampled from the model. The data are just targets\nobtained by the q-learning rule. This means that the authors are adapting\nQ-learning methods so that they look Bayesian, but in no way are they dealing\nwith a principled posterior distribution over Q-functions. At least this is my\nopinion. I would like to encourage the authors to be more precise and show in\nthe paper what is the exact posterior distribution over Q-functions and show\nhow they approximate that distribution, taking into account that a posterior\ndistribution is obtained as $p(\\theta|D) \\propto p(D|\\theta)p(\\theta)$. In the\ncase addressed in the paper, what is the likelihood $p(D|\\theta)$ and what are\nthe modeling assumptions that explain how $D$ is generated by sampling from a\nmodel parameterized by \\theta?\n\nI am also concerned about the hyper-parameter tuning for the baselines. In\nsection 5 (choice of hyper-parameters) the authors describe a quite exhaustive\nhyper-parameter tuning procedure for BDQL. However, they do not mention whether\nthey perform a similar hyper-parameter tuning for DDQN, in particular for the\nparameter epsilon which will determine the amount of exploration. This makes me\nwonder if the comparison in table 2 is fair. Especially because the authors\ntune the amount of data from the replay-buffer that is used to update their\nposterior distribution. This will have the effect of tuning the width of their\nposterior approximation which is directly related to the amount of exploration\nperformed by Thompson sampling. You can, therefore, conclude that the authors are\ntuning the amount of exploration that they perform on each specific problem.\nIs that also being done for the baseline DDQN, for example, by tuning epsilon in\neach problem?\n\nThe authors also report in table 2 the scores obtained for DDQN by Osband et\nal. 2016. What is the purpose of including two rows in table 2 with the same\nmethod? It feels a bit that the authors want to hide the fact that they only\ncompare with a single epsilon-greedy baseline (DDQN). Epsilon-greedy methods\nhave already been shown to be less efficient than Bayesian methods with\nThompson sampling for exploration in Q-learning (Lipton et al. 2016).\n\nThe authors do not compare with variational approaches to Bayesian learning\n(Lipton et al. 2016). They indicate that since Lipton et al. \"do not\ninvestigate the Atari games, we are not able to have their method as an\nadditional baseline\". This statement seems completely unjustified. The authors\nshould clearly include a description of why Lipton's approach cannot be applied\nto the Atari games or include it as a baseline. \n\nThe method proposed by the authors is very similar to Lipton's approach. The\nonly difference is that Lipton et al. use variational inference with a\nfactorized Gaussian distribution to approximate the posterior on all the\nnetwork weights. 
The authors by contrast, perform exact Bayesian inference, but\nonly on the last layer of their neural network. It would be very useful to know\nwhether the exact linear Gaussian model in the last layer proposed by the\nauthors has advantages with respect to a variational approximation on all the\nnetwork weights. If Lipton's method would be expensive to apply to large-scale\nsettings such as the Atari games, the authors could also compare with that\nmethod in smaller and simpler problems.\n\nThe plots in Figure 2 include performance in terms of episodes. However, it\nwould also be useful to know how much is the extra computational costs of\nthe proposed method. One could imagine that computing the posterior\napproximation in equation 6 has some additional cost. How do BDQN and DDQN\ncompare when one takes into account running time and not episode count into\naccount?\n\nClarity:\n\nThe paper is clearly written. However, I found a lack of motivation for the\nspecific design choices made to obtain equations 9 and 10. What is a_t in\nequation 9? The parameters \\theta are updated just after equation 10 by\nfollowing the gradient of the loss in which the weights of the last layer are\nfixed to a posterior sample, instead of the posterior mean. Is this update rule\nguaranteed to produce convergence of \\theta? I could imagine that at different\ntimes, different posterior samples of the weights will be used to compute the\ngradients. Does this create any instability in learning? \n\nI found the paragraph just above section 5 describing the maze-like\ndeterministic game confusing and not very useful. The authors should improve\nthis paragraph.\n\nOriginality:\n\nThe proposed approach in which the weights in the last layer of the neural\nnetwork are the only Bayesian ones is not new. The same method was proposed in\n\nSnoek, J., Rippel, O., Swersky, K., Kiros, R., Satish, N., Sundaram, N., ... &\nAdams, R. (2015, June). Scalable Bayesian optimization using deep neural\nnetworks. In International Conference on Machine Learning (pp. 2171-2180).\n\nwhich the authors fail to cite. The use of Thompson sampling for efficient\nexploration in deep Q learning is also not new since it has been proposed by\nLipton et al. 2016. The main contribution of the paper is to combine these two\nmethods (equations 6-10) and evaluate the results in the large-scale setting of\nATARI games, showing that it works in practice.\n\nSignificance:\n\nIt is hard to determine how significant the work is since the authors only\ncompare with a single baseline and leave aside previous work on efficient\nexploration with Thompson sampling based on variational approximations.\n\nAs far as the method is described, I believe it would be impossible to\nreproduce their results because of the complexity of the hyper-parameter tuning\nperformed by the authors. 
I would encourage the authors to release code that can\ndirectly generate Figure 2 and table 2.\n", "(Last minute reviewer brought in as a replacement).\n\nThis paper proposed \"Bayesian Deep Q-Network\" as an approach for exploration via Thompson sampling in deep RL.\nThis algorithm maintains a Bayesian posterior over the last layer of the neural network and uses that as an approximate measure of uncertainty.\nThe agent then samples from this posterior for approximate Thompson sampling.\nExperimental results show that this outperforms an epsilon-greedy baseline.\n\nThere are several things to like about this paper:\n- The problem of efficient exploration with deep RL is important and under-served by practical algorithms. This seems like a good algorithm in many ways.\n- The paper is mostly clear and well written.\n- The experimental results are impressive in their outperformance.\n\nHowever, there are also some issues, many of which have already been raised:\n- The poor performance of the DDQN baseline is concerning and does not seem to match the behavior of prior work (see Pong for example).\n- There are some loose and misleading descriptions of the algorithm computing \"the posterior\" when actually this is very much an approximation method... that's OK to have approximations but it shouldn't be hidden away.\n- The connection to RLSVI is definitely understated, since with a linear architecture this is precisely RLSVI. The sentiment that extending TS to larger spaces hasn't been fully explored is definitely valid... but this line of work should certainly be mentioned in the 4th paragraph. RLSVI is provably-efficient with a state-of-the-art regret bound for tabular learning - you would probably strengthen the case for this algorithm as an extension of RLSVI by building on this connection... otherwise it's a bit ad hoc to justify this approximation method.\n- This paper spends a lot of time re-deriving Bayesian linear regression in a really standard way... and without much discussion of how/why this method is an approximation (it is) especially when used with deep nets.\n\nOverall, I like this paper and the approach of extending TS-style algorithms to Deep RL by just taking the final layer of the neural network.\nHowever, it also feels like there are some issues with the baselines + being a bit more clear about the approximations / position relative to other algorithms for approximate TS would be a better approach.\nFor example, in linear networks this is the same as RLSVI, bootstrapped DQN is one way to extend this idea to deep nets, but this is another one and it is much better because XYZ. (this discussion could perhaps replace the rather mundane discussion of BLR, for example).\n\nIn its current state I'd say marginally above, but wouldn't be surprised if these changes turned it into an even better paper quite quickly.\n\n\n===============================================================\n\nRevising my review following the rebuttal period and also the (ongoing) revisions to the paper.\n\nI've been disappointed by how the authors have incorporated the feedback/reviews - I expected something a little more clear / honest. Given the ongoing review decisions/issues I'm putting my review slightly below accept.\n\n## Relation to literature on \"randomized value functions\"\nIt's really wrong to present BDQN as if it's the first attempt at large-scale approximations to Thompson sampling (and then slip in a citation to RLSVI as a BDQN-like algorithm). 
This algorithm is a form of RLSVI (2014) where you only consider uncertainty over the last (linear) layer - I think you should present it like this. Similarly *some* of the results for Bootstrapped DQN (2016) on Atari are presented without bootstrapping (pure ensemble) but this is very far from an essential part of the algorithm! If you say something like \"they did not estimate a true posterior\" then you should quantify this and (presumably) justify the implication that taking a gaussian approximation to the final layer is a *true* posterior. In a similar vein, you should be clear about the connections to Lipton et al 2016 as another method for approximate Bayesian posteriors in DQN.\n\n## Quality/science of experiments\nThe experimental results have been updated, and the performance of the baseline now seems much more reasonable. However, the procedure for \"selecting arbitrary number of frames\" to report performance seems really unnecessary... it would be clear that BDQN is outperforming DDQN... you should run them all for the same number of frames and then either compare (final score, cumulative score, #frames to human) or something else more fair/scientific. This type of stuff smells like overfitting!", "We thank AnonReviewer1 for a clear and constructive review. We are encouraged that you recognize the importance of the problem addressed and the novelty of the methods\n\nRegarding the posterior distribution, we apologize for being imprecise. As we noted to another reviewer, indeed this is not a true posterior but rather an approximation of the Q values that has an explicit smooth (approximate) representation of the uncertainty of the state-action values. We agree it can be off in the described situation but since BDQN runs BLR frequently and uses a moving window of replay buffer it cannot have a severe effect on BDQN performance. Indeed one interesting finding of our work is that this simple approach yields surprisingly large empirical benefits. \n\nThanks for raising this interesting point. We actually discuss the effect of sampling frequency in the appendix. In the episodic RL, it is enough to Thompson sample the model at the beginning of each episode (theoretically, more frequent sampling does not change the existing bounds). For Atari games we use T^{sample} equal to 1000 which is at the same order of game episode horizon. In the appendix, we added a further discussion about the effect of resampling frequency, the insight about how its design, and what T_sample may be best for the RL problems with shorter or longer horizon.\n\nWe would like to encourage the reviewer to look at the latest update of the draft where we added our results on more games and currently, we have 15 games. Unfortunately, due to the high cost of deep RL experiment, we were not able to run bdqn on all the Atari games. Regarding the baseline,\nwe choose DDQN as a simple baseline that is quite similar to BDQN except in the last layer and in the fit, and we have clarified this. \nWe ran Bootstrap DQN as another baseline for 5 games, but unfortunately, despite extensive experimentation and design choices, we were not able to reproduce their results. For some games, we received the return of less than the return of random policy. For honesty and clarity, since November, we put our Bootstrap DQN implementation public as well. 
\nCurrently, in the TS RL literature, it is one of the biggest challenges in the community to provide a significant improvement, as promised by TS, over DDQN. You can find almost all of the TS-based cited literature in our paper; they compare against DQN and DDQN. It is roughly known, also discussed and confirmed with the authors of some of these Thompson sampling works, that none of the proposed approaches produced much beyond the modest gains of DDQN on Atari games, except, as you correctly point out, the currently proposed BDQN approach, which provides a significant improvement over this baseline. \n\n\nRegarding the game Atlantis, the behavior of bdqn on this game was interesting for us as well. We elaborate more on the scores of this game in the appendix. To get the score of 3.2M after 20M samples, we enforced the mentioned limit. When we removed this limit, the score gets to 62M after 15M samples. \n\nRegarding finding the optimal Q function and policy, the reviewer is right and we would like to thank you also for your point on it. For the described grid world, in order to simplify the example, we assume that the game is deterministic, the game horizon H is also the episode length, and only the true Q is the most optimistic function in the set. In this case, any other Q, even those which take the agent to the destination, have a non-zero Bellman error (defined in https://arxiv.org/abs/1610.09512); therefore the agent wants to eliminate them from the set. We have revised the statement.\n\nWe already discussed the experiments in the response to the AC. We ran the mentioned experiment again, and as AnonRev3 confirmed, the current scores are similar to the baseline. Furthermore, in our comparison tables, we compare against the scores of DDQN from its original paper during its evaluation time as well.\n\nWe apologize and we have corrected accordingly. We chose the 2008 paper due to its theoretical analysis and have updated accordingly.\n\n", "Thanks for your thoughtful review of our paper. We appreciate it.\n\n“Sampling from posterior”: We apologize for being imprecise. As we noted to another reviewer, we have an approximate algorithm in which we use Bayesian linear regression to fit a function to the Q values in a way that allows some uncertainty over the resulting state-action values to be fit, and therefore sampled. We have updated our discussion accordingly. Other algorithms like Bootstrapped DQN also empirically compute an approximation over the posterior of Q values. We believe that our approach gains benefit from two features: (1) from computing an exact linear regression fit to the last layer, which can be more data efficient than single step updates (though in comparing to episodic replay it may vary); and (2) from an explicit (approximate) representation of uncertainty over the Q-values that can be easily sampled and used to inform exploration. \n\nRegarding the hyperparameters, as we also noted in the area chair review, the provided hyperparameter tuning has been used for the tuning of the extra parameters of BDQN and contains a few short runs of BDQN. We revised the corresponding section of the paper and made it clear that the performed HPO was simple, quick, and an indication of the robustness of BDQN. Regarding per-game parameters, we use a fixed size of replay sample for BLR among all the games.\n\nWe apologize if the tables have appeared misleading. 
We were merely trying to illustrate the performance of BDQN vs DDQN after the same number of samples, and in addition, the BDQN in its earlier stage vs DDQN after 200M samples during its evaluation time (reduced epsilon) which is reported at the original paper. We also have been asked and advised to add the comparison to human score, samples complexity of reaching the human scores and sample complexity of reaching DDQN^\\dagger score.\n\n\nRegarding BBQ work, as we mentioned above, we believe that our approach gains benefit from two features: (1) from computing an exact linear regression fit to the last layer and (2) from an explicit (approximate) representation of uncertainty over the Q-values. In the TS deep RL literature, mostly, two lines of works for Thompson sampling have been studied. One is variational inference based method,e.g.\nEfficient Dialogue Policy Learning with BBQ-Networks\nDropout as a Bayesian Approximation\n\nAnd the second one is the confidence based, e.g.\nDeep exploration via randomized value functions\nDeep exploration via bootstrapped dqn\nThe uncertainty bellman equation and exploration\nEtc.\nWhere the objective functions in these two lines of works, as the reviewer also mentioned, are different and BDQN is in the second category. In addition, these two following papers:\nDeep exploration via bootstrapped dqn\nDropout Inference in Bayesian Neural Networks with Alpha-divergences\nArgue that how the variational inference methods can severely underestimate model uncertainty.\n\nWe appreciate the reviewer for mentioning the typo in our figure2. The figure 2 is updated and the episode count was a typo, it is the step count. We compare the computation cost of BDQN and DDQN, but computing equation 6 just involves inverting a matrix of 512 by 512, every 100k time step, which is computationally negligible. We have a detailed discussion on the computation cost in the appendix.\n\na_\\tau is the action taken at time step \\tau, we restated it in the main text. \n\nRegarding the use of W in the update of theta, we are grateful to the reviewer for the careful review of our paper. The concern in use of W for feature representation update is really a keen observation and we elaborated it in the appendix. As the reviewer mentioned, we should not change this W too frequently, since it forces the network to spend the network capacity for providing a feature representation which is good for the variety of different draws of W. At the same time, it provides a noisy gradient since the cost surface changes fast and prevent the model from learning. On the other hand, the update of W, used for the update of theta, should not happen barely as well since every 100K the posterior gets updated and the feature representation should not overfit to a single W, and consequently overfit to a fixed set of skills. \n\nWe thank the reviewer for the feedback on the clarity of section 5, we apologize and we have updated this section.\n\nWe added Snoek et al to our paper. Thanks for mentioning it.\n\nWe agree that reproducibility is critical and we had released our code including hyperparameter settings in November. Also, the graphs, learned models (model parameters), returns per episode, any remaining output of the experiments and the required material for reproducing the plots are provided publicly.", "Thank you very much for your careful and constructive feedbacks, they helped us to carve the face of our paper in a strong way. 
We are glad that you appreciated both the method and the clarity of exposition. \n \nWe completely agree with your comments with respect to RLSVI and Bootstrapped DQN; we have revised the discussion accordingly and presented bdqn as a direct extension of RLSVI. \n\nYou are right that our approach is also an approximate method of a posterior. We apologize for being imprecise and we have updated our language to be more careful in our discussion and presentation of Bayesian linear regression.\n\nRegarding your point about the number of frames to run, for the game pong, we have run the experiments for a longer period of time, but in order to observe the difference between bdqn and ddqn performances, we had to plot just for the beginning of the game. For some games, we stopped when the plots reached the area around the plateau and for some games, we ran out of resources to continue to 100M. \nWe would like to share with the reviewer that all the returns per episode, frame counts, and clipped reward return arrays are shared publicly and one can easily reproduce these plots and compare the score in different time steps. We also added a few more columns for the sample complexity and comparison to human scores. \n\nRegarding “running for the same number of samples”, we should mention that in the reported tables, the scores in the first column (BDQN) and the second column (DDQN) are the scores after the same number of samples (provided in the last column). For each game, both bdqn and ddqn are run for the same number of samples. The DDQN^\\dagger is also the score of DDQN reported in the original DDQN paper during the evaluation phase.\n\nRegarding the baseline, as you mentioned, the current plots and scores are updated.\n\nRegarding the re-derivation of BLR, we tried to balance deriving the standard method with making the paper self-contained. We received feedback that it is helpful to have the BLR derivation, e.g. Rev2 suggested elaborating more on the BLR part.\n\n\nAgain, thanks to your feedback, we made another constructive change, with the help of other TS researchers, in the presentation of our paper and overview of related work. \nWe start off with PSRL, and how randomised value functions and RLSVI leveraged it. Then how bootstrap dqn extends the idea to deep learning, followed by the noisy net, bbq, shallow UBE and LS-DQN. Finally, we explain that bdqn is an extension to RLSVI and follows the same idea. \n\nStill, we would be happy to get more feedback from you to further improve our paper.\n\n\n", "Area Chair:\nThanks for the review! We really appreciate it. \n\nRegarding the score of DDQN, we ran the experiments, e.g. for the game Pong, again and reported the result in the current draft. As AnonReviewer1 also mentioned after visiting the current draft, the current results are similar to the original paper. We are grateful to all the reviewers for their constructive suggestions and we believe that they made the paper stronger.\n\nIt would be helpful to mention that, on the score tables in the main draft, we also compared against the score of DDQN^\\dagger, which is the reported score of DDQN from the original DDQN paper (copied) during the evaluation phase where evaluation epsilon is set to 0.001 after 200M samples. For BDQN, we did not design any evaluation phase. 
We already elaborated it more in the main text in order to make it more clear.\n\n\nFor the choice of hyper parameters, we used the hyper parameters used in DQN and DDQN, which are tuned through an exhaustive hyperparameter tuning procedure in the original papers for these algorithms.\n\nRegarding the hyperparameter tuning of extra parameters of BDQN, one of the main reason why we talked about it in the main text, aside from being honest about BDQN, was to deliver the point that it is not exhaustive and it is a simple tuning procedure as another proof of the superiority of Thompson sampling over epsilon-greedy. The whole process of hyperparameter tuning, including coding, contains a few runs of BDQN, each for a few hours. We stated that a further hyperparameter tuning can be done for BDQN to provide even more exciting results. We, already, discussed it in the main draft.\n\nThe BDQN code is available to the public and is online since November. To preserve double-blind status, we won't post the GitHub link here but it's not too hard to find.\n", "Thanks for the thoughtful review, follow-up of our paper and thank you for mentioning this interesting and related work. This work, which just came out recently and appeared at NIPS, is interestingly very similar to our last layer regression narrative and we already included a discussion about it in our paper. ", "Dear reviewers and area chair\n\nWe would like to thank the reviewers and area chair for their thoughtful responses to our paper. We are grateful to each of you for critical suggestions that helped us to significantly improve our paper. Please find individual replies to each of the reviews in the respective threads.\n\nFurthermore, since the Area Chair provided the abstract of the main concerns in the reviews, we would like to ask all the reviewers to consider looking at the area chair’s review and our reply.\n", "\"Shallow Updates for Deep Reinforcmeent Learning\"\nhttps://arxiv.org/pdf/1705.07461.pdf\n\nThis work on LS-DQN follows a (relatively) similar narrative of using a Least-Squares RL on the last layer of DQN.\nYou might consider that this paper on Bayesian DQN is similar to an RLSVI-version of LS-DQN.\n\nI was not aware of this until recently.", "Thank you for helping us in addressing this comment. We agree that adding a further discussion in a detailed comparison of UBE and BDQN makes the current draft stronger. \n\nAs the reviewer3 mentioned, UBE provides a Bellman-like equation for uncertainty and learns a shallow network in order to approximate the uncertainty. While in BDQN, the uncertainty is approximated using BLR. \n", "Both of these algorithms use a linear approximation to the final layer of a neural network and the covariance matrix (X^T X) to quantify confidence sets.\n\nThe question of whether you term this \"Bayesian\" or \"Frequentist\" uncertainty feels misleading and a little pretentious. In both cases they are being used to guide an agent's exploration, the epistemological roots of Bayes/Frequency don't seem like the pressing issue.\n\nThe relevant issue is how to propagate uncertainty over the value function over multiple time steps. One does this via a sampling procedure and the other by attempting to \"learn\" an approximating neural network from \"shallow\" = one step uncertainty. 
This would be good to discuss in the paper.", "Thanks for your interest in our paper!\nThe mentioned paper is an interesting line of work on uncertainty measure for exploration but the source of uncertainty in this paper is different from BDQN. As described in both papers, if one approximates Q(x , a), the mean of random return in state x after taking action a, as a linear function of features\n\nQ(x,a) = w*\\phi\n\nthen the random return is distributed as follows\n\nQ(x,a)+\\epsilon = w^\\top\\phi+epsilon\n\nFrom a frequentist perspective, given the data, one can minimize bellman residual (a mean square error in this setting), find a fixed point of it and estimate w^*. Due to the noise \\epsilon in the random return and the approximated generative model, the estimated w^* has a frequentist uncertainty and the authors use this uncertainty to randomize over the actions. As it is well known in the linear regression setting, if the noise gets big, the confidence interval gets big as well.\n\nOn the other hand, in our setting, the source of uncertainty comes from the agent posterior belief on the Q function and the Thompson sampling is applied over approximated posterior distribution. We approximate the generative model with the Bayesian approach where the parameter w is assigned a prior. The uncertainty on the Q function comes from the posterior belief of w (Eq6) where the randomness in the return is captured by Eq7. Our uncertainty comes from belief in w which is constructed from both randomnesses in return and the prior belief in w. \n\nTo be more abstract, the mentioned paper exploits the frequentist uncertainty while in our setting we exploit Bayesian belief to construct the uncertainty.\n\nCheers,\nAuthors\n\n", "All the reviews seem to question the empirical rigor of this work. AnonReviewer3 commented that the implemented baseline DDQN didn't seem to match prior work. AnonReviewer2 also had concerns that the hyperparameter tuning of the baseline DDQN was weaker than their method. Similarly AnonReviewer1 asked for stronger validation and stronger baselines and points out e.g. \"It’s a bit suspicious that it doesn’t achieve 20+ in Pong\". \n\nIn light of recent revelations in deep reinforcement learning (i.e. https://arxiv.org/pdf/1709.06560.pdf), this seems like a significant issue that is prevalent. Could the reviewers and authors comment about whether they feel that the empirical evidence presented in this work is strong enough to justify that this paper should not be subject to the criticism presented in the aforementioned paper?", "I think this is a very nice paper. I wanted to ask you about an interesting connection between this work and another recent work on RL exploring (https://arxiv.org/pdf/1709.05380.pdf), in particular the use of the last layer NN statistics to generate uncertainty combined with Thompson sampling. Can you comment on the differences and similarities between your work and this? Thanks.", "Thanks for the comment. For the baseline, we used the implementation described in DDQN paper ‘’Deep Reinforcement Learning with Double Q-learning’’, and the available code in \nhttps://github.com/kazizzad/Double-DQN-MxNet-Gluon.git\nSince there's always also a variance across runs, we ran the code again, where we got a score close to what mentioned in DDQN paper. We'll update the plot for the revision. Thanks for mentioning it.\n\nCheers,\nAuthors", "Hi Ian\nThank you for your comment. 
We are aware of your work on RLSVI, and we agree that if we make BDQN episodic and do not update the feature representation, it is exactly RLSVI. We will elaborate on this more in the main draft. \n\nRegarding using RLSVI or Bayesian regression at each layer, that is an interesting extension to BDQN and we have left it for future work. \n\nFor the revision, we are adding a further plot for the game Atlantis. In the DDQN paper ‘’Deep Reinforcement Learning with Double Q-learning’’, the score for this game is 65k, and you can see in Fig. 2 that the BDQN agent suddenly starts to learn a significantly better policy, which gives an average score of 3.2M, then stays there with no improvement. We investigated this halt in improvement by looking at the episode length. We realized that the agent reaches the maximum episode length limit of OpenAI Gym, which is 100k. After removing this limit, surprisingly it got a score of 62M after 15M samples, which is almost 1000x higher than the one reported in the DDQN paper.\n\nAbout resampling from the posterior of W: in an older version of the algorithm, there was another parameter \\tilde{W} which was sampled from the same distribution as W, but more frequently (every \\tilde{T} time steps). We used \\tilde{W} to make decisions and used W (sampled every T^{sample} steps) in order to update the feature representation in the Bellman residual equation (line 14 in the Alg). We tried \\tilde{T} of 1, 10, 100, and T^{sample} time steps for the game of Assault during the hyperparameter tuning period, but did not observe any significant difference. That’s why we, for simplicity, removed it from the setting and just kept W. It appears that for Atari games, sampling more frequently does not make much difference (T^{sample} is of the same (or smaller) order as the episode length (H) for many Atari games), but for RL problems with shorter horizons and especially deterministic transitions, we believe it makes a difference. We are adding a further discussion in the appendix about the sampling frequency of \\tilde{W} and addressing how crucial the choice of \\tilde{T} could be in different RL settings.\n\nCheers\nAuthors \n\n“To preserve double-blind status, we won't post the GitHub link here.”\n", "Cool work!\n\nI wanted to highlight a deeper connection between your work and the algorithm RLSVI that you already cite, but maybe didn't realize the deeper connection: https://arxiv.org/pdf/1402.0635.pdf.\n\nIf you run BDQN with a linear architecture and T^{sample} = H on a finite horizon problem, my understanding is that BDQN is exactly the same as RLSVI. Certainly RLSVI is presented in that paper for a linear architecture, but the general approach of Randomized Least Squares Value Iteration is not specific to that architecture https://searchworks.stanford.edu/view/11891201.\n\nIt is very interesting though that you get better performance using this \"last-layer\" approach to RLSVI, rather than something like Bootstrap/Ensemble. Maybe one way to present this is as an effective way to extend RLSVI to multi-layer architectures.\n\nBy the way, I find it surprising that resampling the noisy W in this way so infrequently is not simply learned away by the SGD... can you comment on this?", "Your baseline results of DDQN seem strange to me, particularly the result on Pong.\nIt seems like these results are quite different from (for example) https://github.com/openai/baselines\n\nCan you comment on this?" ]
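To make the last-layer Bayesian linear regression and Thompson sampling discussed in these exchanges concrete, here is a minimal sketch (illustrative only, not the authors' released implementation; the prior scale, noise variance, and per-action posterior bookkeeping are assumptions introduced for the example):

```python
import numpy as np

def blr_posterior(Phi, y, noise_var=1.0, prior_var=1.0):
    """Conjugate Bayesian linear regression over last-layer weights.

    Phi: (N, d) features from the network's last hidden layer (e.g. d = 512).
    y:   (N,) regression targets (e.g. Q-value targets for one action).
    Model: y = Phi w + eps, eps ~ N(0, noise_var), prior w ~ N(0, prior_var * I).
    """
    d = Phi.shape[1]
    precision = Phi.T @ Phi / noise_var + np.eye(d) / prior_var
    cov = np.linalg.inv(precision)            # d x d; inverting 512 x 512 is cheap
    mean = cov @ (Phi.T @ y) / noise_var
    return mean, cov

def thompson_action(phi_x, posteriors, rng):
    """Draw one weight sample per action from its posterior and act greedily."""
    q = [rng.multivariate_normal(m, C) @ phi_x for m, C in posteriors]
    return int(np.argmax(q))
```

In this toy version, `posteriors` would hold one `(mean, cov)` pair per action, refreshed periodically (e.g. every 100k steps) from the replay data, mirroring the infrequent posterior updates described above.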
[ 6, 5, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_Bk6qQGWRb", "iclr_2018_Bk6qQGWRb", "iclr_2018_Bk6qQGWRb", "SkMlZJ9gG", "rJVfxQ2lf", "HkHX9bm-M", "HkUmeK2WG", "BkisdVq7f", "iclr_2018_Bk6qQGWRb", "iclr_2018_Bk6qQGWRb", "B1Y_pIHfM", "ryfx6L7fM", "B1MgfN_WM", "iclr_2018_Bk6qQGWRb", "iclr_2018_Bk6qQGWRb", "ByuOaTteM", "rJnxeAYeM", "iclr_2018_Bk6qQGWRb", "iclr_2018_Bk6qQGWRb" ]
iclr_2018_By-IifZRW
Gaussian Process Neurons
We propose a method to learn stochastic activation functions for use in probabilistic neural networks. First, we develop a framework to embed stochastic activation functions based on Gaussian processes in probabilistic neural networks. Second, we analytically derive expressions for the propagation of means and covariances in such a network, thus allowing for an efficient implementation and training without the need for sampling. Third, we show how to apply variational Bayesian inference to regularize and efficiently train this model. The resulting model can deal with uncertain inputs and implicitly provides an estimate of the confidence of its predictions. Like a conventional neural network it can scale to datasets of arbitrary size and be extended with convolutional and recurrent connections, if desired.
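As background for the "propagation of means and covariances" mentioned in the abstract, the affine part of a layer propagates Gaussian moments exactly; the non-trivial step, pushing moments through the stochastic GP activation, is what the paper derives (a minimal sketch, not the paper's derivation):

```python
import numpy as np

def propagate_affine(mean_in, cov_in, W, b):
    """Exact output moments of y = W x + b when x ~ N(mean_in, cov_in)."""
    mean_out = W @ mean_in + b
    cov_out = W @ cov_in @ W.T
    return mean_out, cov_out
```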
rejected-papers
The authors propose the use of Gaussian processes as the prior over activation functions in deep neural networks. This is a purely mathematical paper in which the authors derive an efficient and scalable approach to their problem. The idea of having flexible distributions over activation functions is interesting and possibly impactful. One reviewer recommended acceptance with low confidence. The other two found the idea interesting and compelling but confidently recommended rejection. These reviewers are concerned that the paper is unnecessarily complex in terms of the mathematical exposition and that it repeats existing derivations without citation. It is very important that the authors acknowledge existing literature for mathematical derivations. Furthermore, the reviewers question the correctness of some of the statements (e.g. is the variational bound preserved?). These reviewers agreed that the paper is incomplete without any empirical validation. Pros: - A compelling and promising idea - The approach seems to be scalable and highly plausible Cons: - No experiments - Significant issues with citing of related work - Significant questions about the novelty of the mathematical work
val
[ "H1IrTpFxz", "Skf5I79gf", "Bkhq035gz", "B1WzCoQZz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "public" ]
[ "The paper addresses the problem of learning the form of the activation functions in neural networks. The authors propose to place Gaussian process (GP) priors on the functional form of each activation function (each associated with a hidden layer and unit) in the neural net. This somehow allows to non-parametrically infer from the data the \"shape\" of the activation functions needed for a specific problem. The paper then proposes an inference framework (to approximately marginalize out all GP functions) based on sparse GP methods that use inducing points and variational inference. The inducing point approximation used here is very efficient since all GP functions depend on a scalar input (as any activation function!) and therefore by just placing the inducing points in a dense grid gives a fast and accurate representation/compression of all GPs in terms of the inducing function values (denoted by U in the paper). Of course then inference involves approximating the finite posterior over inducing function values U and the paper make use of the standard Gaussian approximations. \n \nIn general I like the idea and I believe that it can lead to a very useful model. However, I have found the current paper quite preliminary and incomplete. The authors need to address the following: \n\nFirst (very important): You need to show experimentally how your method compares against regular neural nets (with specific fixed forms for their activation functions such relus etc). At the moment in the last section you mention \"We have validated networks of Gaussian Process Neurons in a set of experiments, the details of which we submit in a subsequent publication. In those experiments, our model shows to be significantly less prone to overfitting than a traditional feed-forward network of same size, despite having more parameters.\" ===> Well all this needs to be included in the same paper. \n\nSecondly: Discuss the connection with Deep GPs (Damianou and Lawrence 2013). Your method seems to be connected with Deep GPs although there appear to be important differences as well. E.g. you place GPs on the scalar activation functions in an otherwise heavily parametrized neural network (having interconnection weights between layers) while deep GPs model the full hidden layer mapping as a single GP (which does not require interconnection weights). \n\nThirdly: You need to better explain the propagation of uncertainly in section 3.2.2 and the central limit of distribution in section 3.2.1. This is the technical part of your paper which is a non-standard approximation. I will suggest to give a better intuition of the whole idea and move a lot of mathematical details to the appendix. \n", "In Bayesian neural networks, a deterministic or parametric activation is typically used. In this work, activation functions are considered random functions with a GP prior and are inferred from data.\n\n\n- Unnecessary complexity\n\nThe presentation of the paper is unnecessarily complex. It seems that authors spend extra space creating problems and then solving them. Although some of the derivations in Section 3.2.2 are a bit involved, most of the derivations up to that point (which is already in page 6) follow preexisting literature.\n\nFor instance, eq. (3) proposes one model for p(F|X). Eq. (8) proposes a different model for p(F|X), which is an approximation to the previous one. Instead, the second model could have been proposed directly, with the appropriate citation from the literature, since it isn't new. Eq. 
(13) is introduced as a \"solution\" to a non-existent problem, because the virtual observations are drawn from the same prior as the real ones, so it is not that we are \"coming up\" with a convenient GP prior that turns out to produce a computationally tractable solution, we are just using the prior on the observations consistently.\n\nIn general, the authors seem to use \"approximately equal\" and \"equal\" interchangeably, which is incorrect. There should be a single definition for p(F|X). And there should be a single definition for L_pred. The expression for L_pred given in eq. (20) (exact) and eq. (41) (approximate) do not match and yet both are connected with an equality (or proportionality), which they shouldn't.\n\nQ(A) is sometimes taken to mean the true posterior (i.e., eq. (31)), sometimes a Gaussian approximation (i.e., eq (32) inside the integral), and both are used interchangeably.\n\n\n- Incorrect references to the literature\n\nPage 3: \"using virtual observations (originally proposed by Quiñonero-Candela & Rasmussen (2005) for sparse approximations of GPs)\"\n\nThe authors are citing as the origin of virtual observations a survey paper on the topic. Of course, that survey paper correctly attributes the origin to [1].\n\nPage 4: \"we apply the technique of variational inference Wainwright et al. (2008)\".\n\nHow can variational inference be attributed to (again) a survey paper on the topic from 2008, when for instance [2] appeared in 2003?\n\n\n- Correctness of the approach\n\nCan the authors guarantee that the variational bound that they are introducing (as defined in eqs. (19) and (41)) is actually a variational bound? It seems to me that the approximations made to Q(A) to propagate the uncertainty are breaking the bounding guarantee. If it is no longer a lower bound, what is the rationale behind maximizing it?\n\nThe mathematical basis for this paper is actually introduced in [3] and a single-layer version of the current model is developed in [4]. However, in [4] the authors manage to avoid the additional Q(A) approximation that breaks the variational bound. The authors should contrast their approach with [4] and discuss if and why that additional central limit theorem application is necessary.\n\n\n- No experiments\n\nThe use of a non-parametric definition for the activation function should be contrasted with the use of a parametric one. With enough data, both might produce similar results. And the parameter sharing in the parametric one might actually be beneficial. With no experiments at all showing the benefit of this proposal, this paper cannot be considered complete.\n\n\n- Minor errors:\n\nEq. (4), for consistency, should use the identity matrix for the covariance matrix definition.\nEq. (10) uses subscript d where it should be using subscript n\nEq. (17) includes p(X^L|F^L) in the definition of Q(...), but it shouldn't. That was particularly misleading, since if we take eq. (17) to be correct (which I did at first), then p(X^L|F^L) cancels out and should not appear in eq. (20).\nEq. (23) uses Q(F|A) to mean the same as P(F|A) as far as I understand. Then why use Q?\n\n\n- References\n\n[1] Edward Snelson and Zoubin Ghahramani. Sparse Gaussian processes using pseudo-inputs.\n[2] Beal, M.J. Variational Algorithms for Approximate Bayesian Inference.\n[3] M.K. Titsias and N.D. Lawrence. Bayesian Gaussian process latent variable model. \n[4] M. Lázaro-Gredilla. 
Bayesian warped Gaussian processes.\n", "This paper investigates probabilistic activation functions that can be structured in a manner similar to traditional neural networks whilst deriving an efficient implementation and training regime that allows them to scale to arbitrarily sized datasets.\n\nThe extension of Gaussian Processes to Gaussian Process Neurons is reasonably straightforward, with the crux of the paper being the path taken to extend GPNs from intractable to tractable.\nThe first step, virtual observations, is used to provide stand-ins for the inputs and outputs of the GPN.\nThese are temporary and are later made redundant.\nTo avoid the intractable marginalization over latent variables, the paper applies variational inference to approximate the posterior within the context of given training data.\nOverall, the process by which GPNs are made tractable to train leverages many recent and not-so-recent techniques.\n\nThe resulting model is theoretically scalable to arbitrary datasets as the total number of model parameters is independent of the number of training samples.\nIt is unfortunate but understandable that the GPN model experiments are confined to another paper.", "I agree with this reviewer. Much of the mathematical derivation has been worked out before, even much of the uncertainty propagation part. I would add that [1] reviews many of the papers relying on these derivations.\n\nWhile the paper proposes an interesting model, I believe the paper can't really be accepted without any experimental verification.\n\n[1] http://jmlr.org/papers/volume17/damianou16a/damianou16a.pdf" ]
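The 1-D inducing-point construction praised in the first review is simple enough to sketch (a toy illustration assuming a squared-exponential kernel and a fixed grid of noise-free virtual observations; this is not the paper's variational treatment):

```python
import numpy as np

def se_kernel(a, b, lengthscale=1.0):
    """Squared-exponential kernel between two 1-D arrays of scalar inputs."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / lengthscale ** 2)

def gp_activation(x, u_locs, u_vals, lengthscale=1.0, jitter=1e-6):
    """Predictive mean of a GP activation function conditioned on virtual observations.

    x:      pre-activations at which to evaluate the activation (1-D array)
    u_locs: inducing-point locations on a dense 1-D grid
    u_vals: (learned) function values at the inducing points
    """
    Kuu = se_kernel(u_locs, u_locs, lengthscale) + jitter * np.eye(len(u_locs))
    Kxu = se_kernel(x, u_locs, lengthscale)
    return Kxu @ np.linalg.solve(Kuu, u_vals)

# example: virtual observations shaped like a softplus activation
grid = np.linspace(-3.0, 3.0, 13)
act = gp_activation(np.array([-1.0, 0.0, 2.0]), grid, np.log1p(np.exp(grid)))
```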
[ 5, 4, 7, -1 ]
[ 4, 5, 2, -1 ]
[ "iclr_2018_By-IifZRW", "iclr_2018_By-IifZRW", "iclr_2018_By-IifZRW", "Skf5I79gf" ]
iclr_2018_BJlrSmbAZ
Bayesian Uncertainty Estimation for Batch Normalized Deep Networks
Deep neural networks have led to a series of breakthroughs, dramatically improving the state-of-the-art in many domains. The techniques driving these advances, however, lack a formal method to account for model uncertainty. While the Bayesian approach to learning provides a solid theoretical framework to handle uncertainty, inference in Bayesian-inspired deep neural networks is difficult. In this paper, we provide a practical approach to Bayesian learning that relies on a regularization technique found in nearly every modern network, batch normalization. We show that training a deep network using batch normalization is equivalent to approximate inference in Bayesian models, and we demonstrate how this finding allows us to make useful estimates of the model uncertainty. Using our approach, it is possible to make meaningful uncertainty estimates using conventional architectures without modifying the network or the training procedure. Our approach is thoroughly validated in a series of empirical experiments on different tasks and using various measures, showing it to outperform baselines on a majority of datasets with strong statistical significance.
rejected-papers
This paper shows that batch normalization can be cast as approximate inference in deep neural networks. This is an appealing result, as batch normalization is used in practice in a wide variety of models. The reviewers found the paper well written and easy to understand and were motivated by the underlying idea. However, they found the empirical analysis lacking and felt that there was not enough detail in the main text to verify whether the claims were true. The authors empirically compared to a recent method showing that dropout can be cast as approximate inference, with the claim that by transitivity they were comparing to a variety of recent methods. AnonReviewer1 casts significant doubt on the results of that work. This is very unfortunate and not the fault of the authors of this paper. The authors have since gone to great lengths to compare to Louizos and Welling, 2017. Unfortunately, that comparison doesn't appear to be complete in the manuscript. The main text was also lacking specific detail relating to fundamental parts of the proposed method (noted by all reviewers). Overall, this paper seems to be tremendously promising and the underlying idea potentially very impactful. However, given the reviews, it doesn't seem that the paper in its current form would achieve that potential impact. The response from the authors is appreciated and goes a long way to improving the paper. Taking the reviews into account, adding specific detail about the methodology and model (e.g. the prior) and completing careful empirical analysis will make this a strong paper that should be much more impactful.
train
[ "Hk7HI4h1G", "Bk8cjwFgz", "Bkw2_15xz", "S12zvunmz", "Hk6h1Rt7G", "rJXxD0YXG", "rkPNCRYmM", "BknqaaYmM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author" ]
[ "This paper proposes an approximate method to construct Bayesian uncertainty estimates in networks trained with batch normalization.\n\nThere is a lot going on in this paper. Although the overall presentation is clean, there are few key shortfalls (see below). Overall, the reported functionality is nice, although the experimental results are difficult to intepret (despite laudable effort by the authors to make them intuitive).\n\nSome open questions that I find crucial:\n\n* How exactly is the “stochastic forward-pass” performed that gives rise to the moment estimates? This step is the real meat of the paper, yet I struggle to find a concrete definition in the text. Is this really just an average over a few recent weights during optimization? If so, how is this method specific to batch normalization? Maybe I’m showing my own lack of understanding here, but it’s worrying that the actual sampling technique is not explained anywhere. This relates to a larger point about the paper's main point: What, exactly, is the Bayesian interpretation of batch normalization proposed here? In Bayesian Dropout, there is an explicit variational objective. Here, this is replaced by an implicit regularizer. The argument in Section 3.3 seems rather weak to me. To paraphrase it: If the prior vanishes, so does the regularizer. Fine. But what's the regularizer that's vanishing? The sentence that \"the influence of the prior diminishes as the size of the training data increases\" is debatable for something as over-parametrized as a DNN. I wouldn't be surprised that there are many directions in the weight-space of a trained DNN along which the posterior is dominated by the prior.\n\n* I’m confused about the statements made about the “constant uncertainty” baseline. First off, how is this (constant) width of the predictive region chosen? Did I miss this, or is it not explained anywhere? Unless I misunderstand the definition of CRPS and PLL, that width should matter, no? Then, the paragraph at the end of page 8 is worrying: The authors essentially say that the constant baseline is quite close to the estimate constructed in their work because constant uncertainty is “quite a reasonable baseline”. That can hardly be true (if it is, then it puts the entire paper into question! If trivial uncertainty is almost as good as this method, isn't the method trivial, too?). \nOn a related point: What would Figure 2 look like for the constand uncertainty setting? Just a horizontal line in blue and red? But at which level?\n\nI like this paper. It is presented well (modulo the above problems), and it makes some strong points. But I’m worried about the empirical evaluation, and the omission of crucial algorithmic details. They may hide serious problems.", "*Summary*\n\nThe paper proposes using batch normalisation at test time to get the predictive uncertainty. The stochasticity of the prediction comes from different minibatches of training data that were used to normalise the activity/pre-activation values at each layer. This is justified by an argument that using batch norm is doing variational inference, so one should use the approximate posterior provided by batch norm at prediction time. Several experiments show Monte Carlo prediction at test time using batch norm is better than dropout.\n\n*Originality and significance*\n\nAs far as I understand, almost learning algorithms similar to equation 2 can be recast as variational inference under equation 1. 
However, the critical questions are what is the corresponding prior, what is the approximating density, what are the additional approximations to obtain 2, and whether the approximation is a good approximation for getting closer to the posterior/obtain better prediction. \n\nIt is not clear to me from the presentation what the q(w) density is -- whether this is explicit (as in vanilla Gaussian VI or MC dropout), or implicit (the stochasticity on the activity h due to batch norm induces an equivalence q on w).\n\nFrom a Bayesian perspective, it is also not satisfying to ignore the regularisation term by an empirical heuristic provided in the batch norm paper [small \\lambda] -- what is the rationale of this? Can this be explained by comparing the variational free-energy. \n\nThe experiments also do not compare to modern variational inference methods using the reparameterisation trick with Gaussian variational approximations (see Blundell et al 2016) or richer variational families (see e.g. Louizos and Welling, 2016, 2017). The VI method included in the PBP paper (Hernandez-Lobato and Adams, 2015) does not use the reparameterisation trick, which has been found to reduce variance and improve over Graves' VI method.\n\n*Clarity*\nThe paper is in general well written and easy to understand. \n\n*Additional comments*\n\nPage 2: Monte Carlo Droput --> Dropout\nPage 3 related work: (Adams, 2015) should be (Hernandez-Lobato and Adams, 2015)", "The authors show how the regularization procedure called batch normalization,\ncurrently being used by most deep learning systems, can be understood as\nperforming approximate Bayesian inference. The authors compare this approach to\nMonte Carlo dropout (another regularization technique which can also be\nconsidered to perform approximate Bayesian inference). The experiments\nperformed show that the Bayesian view of batch normalization performs similarly\nas MC dropout in terms of the estimates of uncertainty that it produces.\n\nQuality:\n\nI found the quality to be low in some aspects. First, the description of what\nis the prior used by batch normalization in section 3.3 is unsatisfactory. The\nauthors basically refer to Appendix 6.4 for the case in which the weight decay\npenalty is not zero. The details in that Appendix are almost none, they just\nsay \"it is thus possible to derive the prior...\".\n\nThe results in Table 2 are a bit confusing. The authors should highlight in\nbold face the results of the best performing method.\n\nThe authors indicate that they do not need to compare to variational methods\nbecause Gal and Ghahramani 2015 compare already to those methods. However, Gal\nand Ghahramani's code used Bayesian optimization methods to tune\nhyper-parameters and this code contains a bug that optimizes hyper-parameters\nby maximizing performance on the test data. 
In particular for hyperparameter\nselection, they average performance across (subsets of) 5 of the training sets\nfrom the 20x train/test split, and then using the tau which got the best\naverage performance for all of 20x train/test splits to evaluate performance:\n\nhttps://github.com/yaringal/DropoutUncertaintyExps/blob/master/bostonHousing/net/experiment_BO.py#L54\n\nTherefore, the claim that \n\n\"Since we have established that MCBN performs on par with MCDO, by proxy we\nmight conclude that MCBN outperforms those VI methods as well.\"\n\nis not valid.\n\nAt the beginning of section 4.3 the authors indicate that they follow in their\nexperiments the setup of Gal and Ghahramani (2015). However, Gal and Ghahramani\n(2015) actually follow Hernández-Lobato and Adams, 2015 so the correct\nreference should be the latter one.\n\nClarity:\n\nThe paper is clearly written and easy to follow and understand.\n\nI found confusing how to use the proposed method to obtain estimates of\nuncertainty for a particular test data point x_star. The paragraph just above\nsection 4 says that the authors sample a batch of training data for this, but\nassume that the test point x_star has to be included in this batch.\nHow is this actually done in practice?\n\nOriginality:\n\nThe proposed contribution is original. This is the first time that a Bayesian\ninterpretation has been given to the batch normalization regularization\nproposal.\n\nSignificance:\n\nThe paper's contributions are significant. Batch normalization is a very\npopular regularization technique and showing that it can be used to obtain\nestimates of uncertainty is relevant and significant. Many existing deep\nlearning systems can use this to produce estimates of uncertainty in their\npredictions.\n", "We have updated the paper with: \n- added normalized CRPS and PLL results from MNF (Louizos & Welling) on three additional datasets (Table 2 in Section 4.4). These results are in line with what we have observed for MCBN and MCDO.\n- Raw (non-normalized) PLL and CRPS results for MCBN and MCDO (Table 5 in Appendix 6.6)\n\nAs we have mentioned in our responses, please note that the evaluation of MNF is performed the same way as for MCBN and MCDO, with the exception of the initial grid search hyperparameter selection. For the camera-ready version of the paper we will make sure to apply proper hyperparameter selection to MNF as well.", "Thank you for your comments. We appreciate the feedback and address your concerns below.\n\nWe have clarified the description on how MCBN is used in section (3.4). Below we first describe how the model is used, then discuss the rationale.\n\nThe network is trained as a regular BN model. The difference is in using the model for prediction. We estimate the mean and variance of the predictive distribution at a new input x by MC sampling:\n\nFor a total of T times:\n- Sample a batch B from the training data D (with the same batch size M that was used during training).\n- Update the BN units’ means and variances with B. This corresponds to sampling from the approximate predictive distribution q_theta(omega).\n- Perform a forward pass to get the output y_t with this particular sample of omega.\n\nFrom the T output samples y_t we estimate:\n- The mean of the predictive distribution as the sample mean of y. 
\n- The variance of the predictive distribution (for regression) as the sum of the sample variance of y and the variance from constant observation noise, tau^-1*I.\n\nDerivations of these estimates are given in Appendix 6.4.\n\nNote that these moments do not disclose any information about the form of the approximate posterior distribution p*. It is likely multimodal, but we have added a proof in section (3.4) that it can be approximated by a Gaussian for each output dimension (similar to Wang & Manning’s motivation for a Gaussian approximation of Dropout in Fast dropout training). We may therefore fit a Gaussian distribution to the estimated moments as an estimate of the predictive distribution of the new input x.\n\nWhat is the implied prior?\nWe have added derivations of the implied prior for networks with L2-regularization (summarized in Section 3.3 and fully derived in Appendix 6.5). The derivations assume fully connected layers with ReLU activations as used in most modern batch-normalized networks. We use the modeled approximate posterior q_theta(omega) from Appendix 6.3. We assume a factorized Gaussian distribution over all stochastic variables, and that only parameters in the current layer affect the distribution of its stochastic variables.\n\nThe implied prior on the BN units’ std. dev. terms is Gaussian, with arbitrary moments.\nThe implied prior on all BN units’ means for each layer is:\n\np(mu) = N(0, (J * x_bar^2) / (2 * N * tau * lambda))\n\nJ: number of input units to the layer\nx_bar: average input from all input units, across training data D\nN: size of the training dataset\ntau: inverse variance from constant observation noise\nlambda: the layer’s L2 regularization coefficient\n\nThis prior is an approximation, and is only accurate if the average input for each input unit over D is identical (which is the case if the scale and shift transformation is identical for all units). In the absence of scale and shift transformations from the previous BN layer, it converges towards an exact prior for large training datasets and deep networks (under the assumptions of q_theta(omega) and the factorized Gaussian).\n\nResults comparison to other models\nWe have removed claims of proxy comparison. Instead, we have adapted Louizos & Welling’s implementation of Multiplicative Normalizing Flows for Variational Bayesian Neural Networks (MNF) for our evaluation. With this we are able to compare our results with a model highly capable of producing complex approximate posteriors. We have included results for three finished datasets in Table 2, and will continue to update the results as evaluations finish. So far, the normalized scores are in line with what we observe for MCBN and MCDO - less than 10% for Boston and Concrete, and inconsistent between the metrics for Yacht. The evaluation is performed the same way as for MCBN and MCDO, with the exception of the initial grid search hyperparameter selection - we will make sure to apply proper hyperparameter selection to MNF for the camera-ready version of the paper.\n\nOther comments\nRegarding the tables, we have marked in bold the model that performs best relative to its constant uncertainty baseline in Table 2, as well as in Appendix 6.6 Tables 3 and 4. In Table 5 (RMSE) we have marked in bold the best performing model overall. We have also corrected the reference regarding the experiment setup to Hernandez-Lobato & Adams (2015).\n\n\n\n", "Thank you for your comments. 
We hope to address your concerns below.\n\nWe have added derivations of the implied prior for networks with L2-regularization (summarized in Section 3.3 and fully derived in Appendix 6.5). The derivations assume fully connected layers with ReLU activations as used in most modern batch-normalized networks. We use the modeled approximate posterior q_theta(omega) from Appendix 6.3. We assume a factorized Gaussian distribution over all stochastic variables, and that only parameters in the current layer affect the distribution of its stochastic variables.\n\nThe implied prior on the BN units’ std. dev. terms is Gaussian, with arbitrary moments.\nThe implied prior on all BN units’ means for each layer is:\n\np(mu) = N(0, (J * x_bar^2) / (2 * N * tau * lambda))\n\nJ: number of input units to the layer\nx_bar: average input from all input units, across training data D\nN: size of the training dataset\ntau: inverse variance from constant observation noise\nlambda: the layer’s L2 regularization coefficient\n\nThis prior is an approximation, and is only accurate if the average input for each input unit over D is identical (which is the case if the scale and shift transformation is identical for all units). In the absence of scale and shift transformations from the previous BN layer, it converges towards an exact prior for large training datasets and deep networks (under the assumptions of q_theta(omega) and the factorized Gaussian).\n\nWith this implied prior, strong regularization corresponds to a prior over BN unit means with small variance. From a VA perspective, too strong a regularization for a given dataset size could be seen as constraining the prior distribution of the BN units’ means, effectively narrowing the approximate posterior.\n\nWhat exactly is the Bayesian interpretation of batch normalization proposed here (and what is the density q)?\nFrom a Bayesian perspective, sampling a batch and updating the stochastic parameters (all BN units’ mean and std dev. parameters) during training means that the trained network is equivalent to having minimized the KL divergence KL(approximate posterior || true posterior) wrt theta. Therefore q_theta(omega) (the joint distribution of the network’s stochastic parameters) is an approximation of the true posterior, restricted to lie within the domain of our parametric network and our source of randomness (sampling batches of size M from D). q_theta(omega) is an approximation of the true posterior under these restrictions, and by the limitations intrinsic to KL divergence minimization. The definition of q_theta(omega) has been clarified in section (3.2), and its equivalence to KL divergence minimization is discussed in section (3.4).\n\nIt is correct that q_theta(omega) is defined implicitly, by our network architecture but also by M and D. This means that the approximate posterior q_theta(omega) must be consistent during and after training. In particular, the mini-batch size M and the dataset from which B is sampled (i.e. the training data D) must be kept after training when taking omega samples for estimating the predictive distribution. Alternatively, one could use our modeled q_theta(omega) as factorized Gaussians - but we leave this as a suggestion for future research.\n\nWhat are the approximations to obtain the approximate posterior, and is our approximation close to the true posterior?\nThe modeling of q_theta(omega) from BN as a Gaussian over all the network’s stochastic parameters is an approximation that by the CLT relies on a large enough number of 
input units, as shown in Appendix (6.3). We additionally assume that this factorizes over all individual stochastic parameters, for the derivations of the implied prior in Appendix 6.5. How suitable this simplification of q_theta(omega) is for sampling in the predictive distribution is difficult to say without evaluating the quality of the predictive distribution empirically. However, the modeling allows us to study the implied prior, which would be difficult with the random variable as a selection of mini-batch members.\n\nResults comparison to other models\nWe have adapted Louizos & Welling’s implementation of Multiplicative Normalizing Flows for Variational Bayesian Neural Networks (MNF) for our evaluation. With this we are able to compare our results with a model highly capable of producing complex approximate posteriors. We have included results for three finished datasets in Table 2, and will continue to update the results as evaluations finish. So far, the normalized scores are in line with what we observe for MCBN and MCDO - less than 10% for Boston and Concrete, and inconsistent between the metrics for Yacht. The evaluation is performed the same way as for MCBN and MCDO, with the exception of the initial grid search hyperparameter selection - we will make sure to apply proper hyperparameter selection to MNF for the camera-ready version of the paper.\n\nWe have also corrected the typo on Dropout, and the erroneous reference.", "Thank you for your comments. We hope to answer your concerns below.\n\nWe have clarified the description of how MCBN is used in section (3.4). Below we first describe how the model is used, then discuss the rationale.\n\nThe network is trained as a regular BN model. The difference is in using the model for prediction. We estimate the mean and variance of the predictive distribution at a new input x by MC sampling:\n\nFor a total of T times:\n- Sample a batch B from the training data D (with the same batch size M that was used during training).\n- Update the BN units’ means and variances with B. This corresponds to sampling from the approximate predictive distribution q_theta(omega).\n- Perform a forward pass to get the output y_t with this particular sample of omega.\n\nFrom the T output samples y_t we estimate:\n- The mean of the predictive distribution as the sample mean of y. \n- The variance of the predictive distribution (for regression) as the sum of the sample variance of y and the variance from constant observation noise, tau^-1*I.\n\nDerivations of these estimates are given in Appendix 6.4.\n\nThese moments do not disclose the form of the approximate posterior distribution p*. It is likely multimodal, but we have added a proof in section (3.4) that it can be approximated by a Gaussian for each output dimension. We may therefore fit a Gaussian distribution to the estimated moments as an estimate of the predictive distribution of the new input x.\n\nWhat is the Bayesian interpretation of batch normalization?\nFrom a Bayesian perspective, sampling a batch and updating the stochastic parameters omega (all BN units’ mean and std dev. parameters) during training means that the trained network is equivalent to having minimized the KL divergence KL(approximate posterior || true posterior) wrt theta. Therefore q_theta(omega) (the joint distribution of the network’s stochastic parameters) is an approximation of the true posterior, restricted to lie within the domain of our parametric network and our source of randomness (sampling batches of size M from D). 
q_theta(omega) is an approximation of the true posterior under these restrictions, and by the limitations intrinsic to KL divergence minimization. The definition of q_theta(omega) has been clarified in section (3.2), and its equivalence to KL divergence minimization is discussed in section (3.4).\n\nIt is correct that q_theta(omega) is defined implicitly, by our network architecture but also by M and D. Note that the approximate posterior q_theta(omega) must be consistent during and after training. This means that the mini-batch size M and the dataset from which B is sampled (i.e. the training data D) must be kept after training when taking omega samples for estimating the predictive distribution. (We have not evaluated our modeled Gaussian approximation from Appendix 6.3.)\n\nWe agree that dropping the regularizer/prior is hard to motivate from a Bayesian perspective. We have removed this discussion. We now model an approximate prior in Appendix 6.5. Our implied prior over batch means is p(mu) = N(0, (J * x_bar^2) / (2 * N * tau * lambda)). From a VA perspective, too strong a regularization for a given dataset size could be seen as constraining the prior distribution of the BN units’ means, effectively narrowing the approximate posterior.\n\nEvaluation baselines\nWe evaluate MCBN and MCDO using two standard metrics of predictive distribution quality: PLL and CRPS. It is difficult, though, to directly compare different models based on these metrics alone unless the models produce the same means at every test point (which does not happen in practice). If we were to compare MCBN to MCDO and find that e.g. PLL was in MCBN’s favor, we would not be able to say whether the predictive distribution of MCBN makes sense or not – the outperformance could simply be a result of BN fitting the model better to the data.\n\nWe normalize the measures with an upper and a lower bound. CUBN and CUDO represent the lower bound. These models produce the same means as MCBN and MCDO respectively, but always estimate a constant (validation-optimized) variance. This is the best we can do if we always assume the same predictive variance. Any improvement indicates that the MC models estimate uncertainty in a sensible way. This has been clarified in section 4.2.\n\nThe upper bound also produces the same target estimates, but the predicted variance optimizes CRPS and PLL respectively, for each test data point. This is the best-case scenario - any change for a single test data point would yield a lower score. By normalizing the scores achieved by MCBN and MCDO between these bounds, we not only verify that the models are better than the constant uncertainty baselines (i.e. model input-dependent variance sensibly), but also obtain an estimate of how close the modeled variance is to the absolute best case.\n\nIn Figure 2, we have included the CU- models’ constant uncertainty as one standard deviation, given by the dashed line. ", "We would like to thank the reviewers and area chair for their detailed comments and clear questions. Their feedback helped us improve the quality of the paper. 
\n\nBefore addressing individual comments, we would like to reiterate the contributions of this work which we feel are significant and of broad interest to the ML community:\n1) Treat batch normalization as a stochastic regularization and thereby consider a batch-normalized network training procedure as approximate Bayesian modeling.\n2) Extensive empirical evidence for the efficacy of obtained predictive uncertainty from such a perspective on batch-normalized networks.\n3) Analytical study of the induced prior of the stochastic variables.\n4) Novel quantitative and qualitative evaluation of the predictive uncertainties.\n\nConsidering the fact that nearly all modern networks use batch normalization, our proposed method is of broad interest as it opens the door to uncertainty estimation in existing conventional networks without modifying the network or the training procedure.\n\nOur response to the reviewer comments appear below. We have thoroughly attended to **all** the raised issues. Also, the manuscript has been revised to include additional studies and explanations as requested by the reviewers; most notably an analytical study of the prior and additional experiments. \n\nWe had originally addressed all questions in one response, but this far exceeded the character limitation. We will address individual reviewers below. In the interest of swift response now that we have uploaded a revised paper there will be some repetition, we hope you don’t mind this." ]
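The prediction procedure and evaluation metrics described in these responses can be sketched compactly (an illustrative PyTorch/SciPy sketch under stated assumptions: BN momentum is set to 1.0 so the running statistics equal the last sampled batch's statistics, the model contains no dropout, and tau denotes the inverse observation-noise variance; this is not the authors' released code):

```python
import numpy as np
import torch
from scipy.stats import norm

@torch.no_grad()
def mcbn_predict(model, x_star, train_loader, T=50, tau=1.0):
    """Monte Carlo batch-normalization prediction for regression."""
    for m in model.modules():
        if isinstance(m, (torch.nn.BatchNorm1d, torch.nn.BatchNorm2d, torch.nn.BatchNorm3d)):
            m.momentum = 1.0           # running stats become the last batch's stats
    samples, it = [], iter(train_loader)
    for _ in range(T):
        try:
            xb, _ = next(it)
        except StopIteration:
            it = iter(train_loader)
            xb, _ = next(it)
        model.train()
        model(xb)                      # refresh BN statistics from a sampled training batch
        model.eval()
        samples.append(model(x_star))  # predict x_star under those statistics
    ys = torch.stack(samples)
    mean = ys.mean(dim=0)
    var = ys.var(dim=0) + 1.0 / tau    # sample variance + constant observation noise
    return mean, var

def gaussian_pll(y, mu, sigma):
    """Predictive log-likelihood under N(mu, sigma^2); higher is better."""
    return norm.logpdf(y, loc=mu, scale=sigma)

def gaussian_crps(y, mu, sigma):
    """Closed-form CRPS of a Gaussian predictive distribution; lower is better."""
    z = (y - mu) / sigma
    return sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z) - 1 / np.sqrt(np.pi))
```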
[ 5, 5, 6, -1, -1, -1, -1, -1 ]
[ 3, 4, 4, -1, -1, -1, -1, -1 ]
[ "iclr_2018_BJlrSmbAZ", "iclr_2018_BJlrSmbAZ", "iclr_2018_BJlrSmbAZ", "BknqaaYmM", "Bkw2_15xz", "Bk8cjwFgz", "Hk7HI4h1G", "iclr_2018_BJlrSmbAZ" ]
iclr_2018_SknC0bW0-
Continuous-fidelity Bayesian Optimization with Knowledge Gradient
While Bayesian optimization (BO) has achieved great success in optimizing expensive-to-evaluate black-box functions, especially tuning hyperparameters of neural networks, methods such as random search (Li et al., 2016) and multi-fidelity BO (e.g. Klein et al. (2017)) that exploit cheap approximations, e.g. training on a smaller training data or with fewer iterations, can outperform standard BO approaches that use only full-fidelity observations. In this paper, we propose a novel Bayesian optimization algorithm, the continuous-fidelity knowledge gradient (cfKG) method, that can be used when fidelity is controlled by one or more continuous settings such as training data size and the number of training iterations. cfKG characterizes the value of the information gained by sampling a point at a given fidelity, choosing to sample at the point and fidelity with the largest value per unit cost. Furthermore, cfKG can be generalized, following Wu et al. (2017), to settings where derivatives are available in the optimization process, e.g. large-scale kernel learning, and where more than one point can be evaluated simultaneously. Numerical experiments show that cfKG outperforms state-of-art algorithms when optimizing synthetic functions, tuning convolutional neural networks (CNNs) on CIFAR-10 and SVHN, and in large-scale kernel learning.
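A toy illustration of the "value of information per unit cost" idea on a discrete candidate set (a Monte Carlo sketch only: the actual cfKG works over continuous fidelities with stochastic-gradient optimization of the acquisition, and the kernel, candidate grid, and cost model here are assumptions introduced for the example):

```python
import numpy as np

def rbf(A, B, ls=(0.5, 0.5)):
    """RBF kernel over rows of A, B; each row is (design point x, fidelity s)."""
    D = (A[:, None, :] - B[None, :, :]) / np.asarray(ls)
    return np.exp(-0.5 * (D ** 2).sum(-1))

def cfkg_choose(X_obs, y_obs, candidates, cost, noise=1e-4, n_fantasy=64, seed=0):
    """Pick the (x, s) pair maximizing a Monte Carlo knowledge-gradient estimate per unit cost."""
    rng = np.random.default_rng(seed)
    payoff = candidates.copy()
    payoff[:, -1] = 1.0                                    # value is measured at full fidelity s = 1
    K_inv = np.linalg.inv(rbf(X_obs, X_obs) + noise * np.eye(len(X_obs)))
    mu_payoff = rbf(payoff, X_obs) @ K_inv @ y_obs         # current posterior mean at full fidelity
    best_now = mu_payoff.max()
    scores = []
    for z in candidates:
        kz = rbf(z[None, :], X_obs)[0]
        mu_z = kz @ K_inv @ y_obs
        var_z = float(rbf(z[None, :], z[None, :])[0, 0] - kz @ K_inv @ kz) + noise
        # posterior covariance between the payoff points and a new observation at z
        cov_pz = rbf(payoff, z[None, :])[:, 0] - rbf(payoff, X_obs) @ K_inv @ kz
        kg = 0.0
        for _ in range(n_fantasy):
            y_f = mu_z + np.sqrt(var_z) * rng.standard_normal()
            mu_new = mu_payoff + cov_pz * (y_f - mu_z) / var_z   # rank-one posterior-mean update
            kg += mu_new.max() - best_now
        scores.append((kg / n_fantasy) / cost(z))
    return candidates[int(np.argmax(scores))]
```

In the paper the maximization is over continuous (x, s) and uses gradient estimates; the grid version above only conveys the structure of the acquisition.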
rejected-papers
This paper combines multiple existing ideas in Bayesian optimization (continuous-fidelity, use of gradient information and knowledge gradient) to develop their proposed cfKG method. While the methodology seems neat and effective, the reviewers (and AC) found that the presented approach was not quite novel enough in light of existing work to justify acceptance to ICLR. Continuous fidelity Bayesian optimization is well studied and knowledge gradient + derivative information was presented at NIPS. The combination of these things seems quite sensible but not sufficiently novel (unless the empirical results were *really* compelling). Pros: - The paper is clear and writing is of high quality - Bayesian optimization is interesting to the community and compelling methods are potentially practically impactful - Outperforms existing methods on the chosen benchmarks Cons: - Is an incremental combination of existing methods - The paper claims too much
test
[ "H1Dw9y51z", "Sy4mWsOeG", "ryZpu-qlM", "BJBvNdaXz", "ryq3aP37M", "Bkq8aP27M", "HkXHnv2XM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "This paper studies hyperparameter-optimization by Bayesian optimization, using the Knowledge Gradient framework and allowing the Bayesian optimizer to tune fideltiy against cost.\n\nThere’s nothing majorly wrong with this paper, but there’s also not much that is exciting about it. As the authors point out very clearly in Table 1, this setting has been addressed by several previous groups of authors. This paper does tick a previously unoccupied box in the problem-type-vs-algorithm matrix, but all the necessary steps are relatively straightforward.\n\nThe empirical results look good in comparison to the competing methods, but I suspsect an author of those competitors could find a way to make their own method look better in those plots, too.\n\nIn short: This is a neat paper, but it’s novelty is low. I don't think it would be a problem if this paper were accepted, but there are probably other, more groundbreaking papers in the batch.\n\nMinor question: Why are there no results for 8-cfKG and Hyperband in Figure 2 for SVHN?", "\nMany black-box optimization problems are \"multi-fidelity\", in which it\nis possible to acquire data with different levels of cost and\nassociated uncertainty. The training of machine learning models is a\ncommon example, in which more data and/or more training may lead to\nmore precise measurements of the quality of a hyperparameter\nconfiguration. This has previously been referred to as a special case\nof \"multi-task\" Bayesian optimization, in which the tasks can be\nconstructed to reflect different fidelities. The present paper\nexamines this construction with three twists: using the knowledge\ngradient acquisition function, using batched function evaluations, and\nincorporating derivative observations. Broadly speaking, the idea is\nto allow fidelity to be represented as a point in a hypercube and then\ninclude this hypercube as a covariate in the Gaussian process. The\nknowledge gradient acquisition function then becomes \"knowledge\ngradient per unit cost\" the KG equivalent to the \"expected improvement\nper unit cost\" discussed in Snoek et al (2012), although that paper\ndid not consider treating fidelity separately.\n\nI don't understand the claim that this is \"the first multi-fidelity\nalgorithm that can leverage gradients\". Can't any Gaussian process\nmodel use gradient observations trivially, as discussed in the\nRasmussen and Williams book? Why can't any EI or entropy search\nmethod also use gradient observations? This doesn't usually come up\nin hyperparameter optimization, but it seems like a grandiose claim.\nSimilarly, although I don't know of a paper that explicitly does \"A +\nB\" for multi-fidelity BO and parallel BO, it is an incremental\ncontribution to combine them, not least because no other parallel BO\nmethods get evaluated as baselines.\n\nFigure 1 does not make sense to me. How can the batched algorithm\noutperform the sequential algorithm on total cost? The sequential\ncfKG algorithm should always be able to make better decisions with its\nremaining budget than 8-cfKG. Is the answer that \"cost\" here means\n\"wall-clock time when parallelism is available\"? If that's the case,\nthen it is necessary to include plots of parallelized EI, entropy\nsearch, and KG. The same is true for Figure 2; other parallel BO\nalgorithms need to appear.", "Minor comments:\n- page 7. 
“Then, observe that the same reasoning we used to develop the cfKG acquistion function\nin (3.2) can be used when when we observe gradients to motivate the acquisition function…” - some misprints, e.g. double “when”\n- The paper lacks a theoretical analysis of the convergence of the proposed modification of the knowledge gradient criterion.\n\nMajor comments:\n\nCurrent approaches to optimisation of expensive functions are mainly based on Gaussian process models. Such approaches are important for AutoML algorithms.\n\nThere are many cases in which, for an expensive function, we can obtain measurements of its values at continuous fidelities, trading off evaluation cost against the fidelity of the obtained values. E.g., as fidelity we can consider the size of the training set used to train a deep neural network.\n\nThe paper contains a new algorithm to perform Bayesian optimisation of a function with continuous fidelity. Using a modification of the knowledge gradient acquisition function, the authors obtain a black-box optimisation method that takes continuous fidelity into account. \n\nFor some reason the authors forgot to take the cost function into account when formulating Algorithm 1 in 3.3.2 and the corresponding formula (3.7).\n\nSo, the logic of the definition of q-cfKG is understandable, but the issue with the missing denominator, containing the cost function, remains.\n\nThe approach proposed in section 3.3.2 looks as follows:\n- the authors used formulas from [Wu et al (2017) - https://arxiv.org/abs/1703.04389] \n- and include an additional argument in the mean function of the Gaussian process.\nHowever, in Wu et al (2017) they consider the usual knowledge gradient, but in this paper they divide by the value of max(cost(z)), which is not differentiable.\n\nOther sections of the paper are sufficiently well written, except \n- section 3.3.2, \n- the section with experimental results: I was not able to understand how the authors defined the cost function in sections 4.2 and 4.3 for their neural network and large-scale kernel learning.\n\nIn principle, the paper contains some new results, but it should be improved before publishing.", "In addition, in response to these questions about EI and ES, we have added a brief discussion of EI and ES in comparison with KG, and their level of appropriateness for multi-fidelity optimization, in the introduction.", "As in our reply to AnonReviewer3, we would like to emphasize the practical value of a method that effectively leverages continuous fidelities of multiple dimensions (training data size and training iterations) in a batch setting, especially in light of the difficulty of parallelizing other competitive methods in the sequential setting.\n \nIn reference to your minor question, we did not add 8-cfKG because (sequential) cfKG already finds an extremely good solution within a single complete training run, and adding parallelism could not improve this. One can view the performance of 8-cfKG in this example as the same as sequential cfKG.", "Regarding our claim that this is the first multi-fidelity algorithm to leverage gradients, we searched the literature, and believe that this is indeed true: we were unable to find a paper that uses gradients in a multi-fidelity setting. At the same time, we do agree that it should be possible to add gradient observations into the inference used by an existing multi-fidelity method, although this is not discussed elsewhere. For this reason, we have removed this sentence.\n\nWe also note that Wu et al. 
2017 cited in our paper shows that simply adding gradient observation into GP inference with a standard acquisition function such as EI in the single-fidelity setting is not sufficient to provide a substantial performance improvement over the setting without gradient observations. It is important to additionally modify the acquisition function to sample at points where gradient observations are particularly helpful. For this reason we suspect that our method would outperform an existing multi-fidelity method whose inference but not acquisition function was modified to use gradients. Since we do not do numerical experiments to confirm this fact, we do not discuss it in the paper.\n\nRegarding our claim that this is the first parallel multi-fidelity method, our response is similar: we believe this is true, as we searched the literature and did not find an existing paper that does this, but we do agree that this point does not need to be emphasized, and so we removed the sentence that claimed it. Regarding the absence of parallel BO baselines, we have corrected this and now have two parallel BO baselines in our synthetic experiments as discussed below.\n\nAt the same time, when thinking about parallelizing an existing multi-fidelity method based on expected improvement or entropy search, ES tends to outperform EI in multi-fidelity settings, and ES is challenging to parallelize. As far as we know, [1] is the only paper to do so, and its method incurs significant computational cost with no code publicly available. It is perhaps for this reason that we were unable to find any previous papers or any publicly available software that supported multi-fidelity BO with batch evaluations, despite the apparent practical importance of this problem class. Moreover, Hyperband is also difficult to parallelize due to its communication overhead, as discussed by the authors when leaving parallelization to future work. Thus, we view developing an effective parallel multi-fidelity method, as we have done in this paper, as an important contribution.\n\nRegarding Figure 1: \nYes, cost here is wall-clock time. Thus, batched algorithms tend to have smaller wall-clock times than sequential algorithms. As per your suggestion we have added two additional parallel BO benchmarks: parallel EI and parallel KG. We include them in Figure 1 in the revised version of the paper. Batch cfKG outperforms both of these batch benchmarks.\n\n[1] Shah, Amar, and Zoubin Ghahramani. \"Parallel predictive entropy search for batch global optimization of expensive objective functions.\" Advances in Neural Information Processing Systems. 2015.", "Thanks for pointing out the mistake in formula (3.7). We have not made the error in the implementation and the experiments are valid. We have corrected it in the revised version of the paper. This should make section 3.3.2 more clear.\n\nThe cost function max_{1 <= i <= q} (cost(z_i)) is differentiable almost everywhere if cost(z) is differentiable everywhere. The points of non-differentiability are those points where there are ties in the maximum. Because they have measure 0, and our stochastic gradient estimator has continuous support (when the predictive distribution at the proposed points to sample is not degenerate), our stochastic gradient ascent algorithm encounters these points with probability 0 as long as it does not start at such a point. 
We now discuss this in the revised version of the paper.\n\nWe take the cost function for tuning a neural network in sections 4.2 and 4.3 to be the number of training examples used during the training process divided by the original number of training examples. For example, if we subsample 10,000 training points (out of 50,000) per epoch, and train with 20 epochs, then the cost is 10,000*20/50,000 = 4. This definition is analogous to the resource R in the Hyperband paper. We take the cost function for kernel learning in the synchronous setting to be the wall-clock time. In batch settings, we take the cost to be max_{1 <= i <= q} (cost(z_i)), modeling the wall-clock time that one would have when running jobs in parallel." ]
[ 5, 4, 6, -1, -1, -1, -1 ]
[ 4, 5, 5, -1, -1, -1, -1 ]
[ "iclr_2018_SknC0bW0-", "iclr_2018_SknC0bW0-", "iclr_2018_SknC0bW0-", "Bkq8aP27M", "H1Dw9y51z", "Sy4mWsOeG", "ryZpu-qlM" ]
iclr_2018_rk8R_JWRW
Gating out sensory noise in a spike-based Long Short-Term Memory network
Spiking neural networks are being investigated both as biologically plausible models of neural computation and as a potentially more efficient type of neural network. While convolutional spiking neural networks have been demonstrated to achieve near state-of-the-art performance, only one solution has so far been proposed for converting gated recurrent neural networks. Recurrent neural networks in the form of networks of gating memory cells have been central in state-of-the-art solutions in problem domains that involve sequence recognition or generation. Here, we design an analog gated LSTM cell whose neurons can be substituted with efficient stochastic spiking neurons. These adaptive spiking neurons implement an adaptive form of sigma-delta coding to convert internally computed analog activation values to spike-trains. For such neurons, we approximate the effective activation function, which resembles a sigmoid. We show how analog neurons with such activation functions can be used to create an analog LSTM cell; networks of these cells can then be trained with standard backpropagation. We train these LSTM networks on a noisy and noiseless version of the original sequence prediction task from Hochreiter & Schmidhuber (1997), and also on a noisy and noiseless version of a classical working memory reinforcement learning task, the T-Maze. Substituting the analog neurons with corresponding adaptive spiking neurons, we then show that almost all of the resulting spiking neural network equivalents correctly compute the original tasks.
rejected-papers
The reviewers agreed that the paper was somewhat preliminary in terms of the exposition and empirical work. They all find the underlying problem quite interesting and challenging (i.e. spiking recurrent networks). However, the manuscript failed to motivate the approach. In particular, everyone agrees that spiking networks are very interesting, but it's unclear what problem the presented work is solving. The authors need to be more clear about their motivation and then close the loop with empirical validation that their approach is solving the motivating problem (i.e. do we learn something about biological plausibility, are spiking networks better than traditional LSTMs at modeling a particular kind of data, or are they more efficiently implemented on hardware?). Motivating the work with one of these followed by convincing experiments would make this a much stronger paper. Pros: - Tackles an interesting and challenging problem at the intersection of neuroscience and ML - A novel method for creating a spiking LSTM Cons: - The motivation is not entirely clear - The empirical analysis is too simple and does not demonstrate the advantages of this approach - The paper seems unfocused and could use rewriting
train
[ "BkeiHSFxz", "SyWwzQceM", "BkQ6S3QZz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "First the authors suggest an adaptive analog neuron (AAN) model which can be trained by back-propagation and then mapped to an Adaptive Spiking Neuron (ASN). Second, the authors suggest a network module called Adaptive Analog LSTM Cell (AA-LSTM) which contains input cells, input gates, constant error carousels (CEC) and output cells. Jointly with the AA-LSTM, the authors describe a spiking model (AS-LSTM) that is meant to reproduce its transfer function. It is shown quantitatively that the transfer functions of isolated AAN and AA-LSTM units are well approximated by their spiking counterparts. Two sets of experiments are reported, a sequence prediction task taken from the original LSTM paper and a T-maze task solved with reward based learning.\n\nIn general, the paper presents an interesting idea. However, it seems that the main claims of the introduction are not sufficiently well proven later. Also, I believe that the tasks are rather simple and therefore it is not demonstrated that the approach performs well on practically relevant tasks.\n\nOn general level, it should be clarified whether the model is meant to reproduce features of biology or whether the model is meant to be efficient. If the model is meant to reproduce biology, some features of the model are problematic. In particular, that the CEC is modeled with an infinitely long integration time constant of the input current. This would produce infinitely long EPSPs. However, I think there is a chance that minor changes of the model could still work while being more realistic. For example, I would find it more convincing to put the CEC into the adaptation time constants by using a large tau_gamma or tau_eta.\n\nIf the model is meant to provide efficient spiking neural networks, I find the tasks too simple and too artificial. This is particularly true in comparison to the speech recognition tasks VAD and TIMIT which were already solved in Esser et al. with spiking and efficient feedforward networks. \n\nThe authors say in the introduction that they target to model recurrent neural networks. This is an important open question. The usage of the CEC is an interesting idea toward this goal.\nHowever, beside the presence of CEC I do not see any recurrence in the used networks. This seems in contradiction with what is implicitly claimed in the introduction, title and abstract. There are only input-output neuron connections in the sequence prediction task, and a single hidden layer for the T-maze (which does not seem to be recurrently connected). This is problematic as the authors mention that their goal is to reproduce the functionality of LSTMs with spiking neurons for which the network recurrence is an important feature. \n\n\nRegarding more low-level comments:\n\n- The authors used a truncated version of RTRL to train LSTMs and standard back-propagation for single neurons. I wonder why two different algorithms were used, as, in principle, they compute the same gradient\neither forward or backward.\nIs there a reason for this? Did the truncated RTRL bring any\nadditional benefit compared to the exact backpropagation already\nimplemented in automatic differentiation software?\n\n- The sigma-delta neuron model seems quite ad-hoc and incompatible\nwith most simulators and dedicated hardware. I wonder whether the\nAS-LSTM model would still be valid if the ASN model is replaced with a\nstandard SRM model for instance.\n\n- The authors claim in the introduction that they made an analytical conversion from discrete to continuous time. 
I did not find this in the main text.\n\n- The axes in Figure 1 are not defined (what is Delta S?) and the\ncaption does not match. \"Average output signal [...] as a function of its incoming PSC I\" output signal is not defined, and S is presented in the graph, but not I.", "The authors propose a first implementation of spiking LSTMs. This is an interesting and open problem. However, the present work somewhat incomplete, and requires further experiments and clarifications.\n\nPros:\n1. To my best knowledge, this is the first mapping of LSTMs to spiking networks\n2. The authors tackle an interesting and challenging problem.\n\nCons:\n1. In the abstract the authors mention that another approach has been taken, but is never stated what’s the problem that this new one is trying to address. Also, H&S 1997 tested several tasks, which is the one that the authors are referring to?\n2. Figure 1 is not very easy to read. The authors can spell out the labels of the axis (e.g. S could be input, S)\n3. Why are output and forget gates not considered here?\n4. A major point in mapping LSTMs to spiking networks is its biological plausibility. However, the authors do not seem to explore this. Of particular interest is its relationship to a recent proposal of a cortical implementation of LSTMs (Cortical microcircuits as gated-RNNs, NIPS 2017).\n5. The text should be improved, for example in the abstract: “that almost all resulting spiking neural network equivalents correctly..”, please rephrase.\n6. Current LSTMs are applied in much more challenging problems than the original ones. It would be important to test one of this, perhaps the relatively simple pixel-by-pixel MNIST task. If this is not feasible, please comment.\n\nMinor comments:\n1. Change in the abstract “can be substituted for” > “can be substituted by”\n2. A new body of research aims at using backprop in spiking RNNs (e.g. Friedemann and Ganguli 2017). The present work gets around this by training the analog version instead. It would be of interesting to discuss how to train spiking-LSTMs as this is an important topic for future research. \n3. As the main promise of using spiking nets (instead of rate) is their potential efficiency in neuromorphic systems, it would be interesting to contrast in the text the two options for LSTMs, and give some more quantitative analyses on the gain of spiking-LSTM versus rate-LSTMs in terms of efficiency.", "Here the authors propose a variant of an analog LSTM and then further propose a mechanism by which to convert it to a spiking network, in what a computational neuroscientist would call a 'mean-field' approach. The result is a network that communicates using only spikes. In general I think that the problem of training or even creating spiking networks from analog networks is interesting and worthy of attention from the ML community. However, this manuscript feels very early and I believe needs further focus and work before it will have impact in the community. \n\nI can see three directions in which this work could be improved to provide wider interest:\n1. Neurophysiological realism - It appears the authors are not interested in this direction given the focus of the manuscript ( other than mentioning the brain as motivation).\n\n2. ML interest - From a pure ML point of view some interesting questions relate to training / computations / representations / performance. However, in the manuscript the tasks trained are exceedingly simple and unconvincing from either a representations or performance perspective. 
Since the main novelty of the manuscript is the 'spikification' algorithm, little is learned about how spiking networks function, or how spiking networks might represent data or implement computations. \n\n3. Hardware considerations - There is no analysis of what has been made more efficient, more sped-up, how to meaningfully implement the algorithm, etc., etc. A focus in this direction could find an applied audience.\n\nAs a minor comment, the paper could stand to be improved in terms of exposition. In particular, the paper relies on ideas from other papers and the assumption is largely made that the reader is familiar with them, although the paper is self-contained." ]
[ 5, 5, 4 ]
[ 4, 3, 4 ]
[ "iclr_2018_rk8R_JWRW", "iclr_2018_rk8R_JWRW", "iclr_2018_rk8R_JWRW" ]
iclr_2018_SyxCqGbRZ
Learning to Treat Sepsis with Multi-Output Gaussian Process Deep Recurrent Q-Networks
Sepsis is a life-threatening complication from infection and a leading cause of mortality in hospitals. While early detection of sepsis improves patient outcomes, there is little consensus on exact treatment guidelines, and treating septic patients remains an open problem. In this work we present a new deep reinforcement learning method that we use to learn optimal personalized treatment policies for septic patients. We model patient continuous-valued physiological time series using multi-output Gaussian processes, a probabilistic model that easily handles missing values and irregularly spaced observation times while maintaining estimates of uncertainty. The Gaussian process is directly tied to a deep recurrent Q-network that learns clinically interpretable treatment policies, and both models are learned together end-to-end. We evaluate our approach on a heterogeneous dataset of septic patients spanning 15 months from our university health system, and find that our learned policy could reduce patient mortality by as much as 8.2\% from an overall baseline mortality rate of 13.3\%. Our algorithm could be used to make treatment recommendations to physicians as part of a decision support tool, and the framework readily applies to other reinforcement learning problems that rely on sparsely sampled and frequently missing multivariate time series data.
rejected-papers
This paper brings recent innovations in reinforcement learning to bear on a tremendously important application, treating sepsis. The reviewers were all compelled by the application domain but thought that the technical innovation in the work was low. While ICLR welcomes application papers, in this instance the reviewers felt that the technical contribution was not justified well enough. Two of the reviewers asked for a clearer discussion of the underlying assumptions of the approach (i.e. offline policy evaluation and not missing at random). Unfortunately, the lack of significant revisions to the manuscript over the discussion period seems to have precluded changes to the reviewer scores. Overall, this could be a strong submission to a conference that is more closely tied to the application domain. Pros: - Very compelling application that is well motivated - Impressive (possibly impactful) results - Thorough empirical comparison Cons: - Lack of technical innovation - Questions about the underlying assumptions and choice of methodology
train
[ "BJlGBMKVz", "rJlaKw9lG", "rkM5HcFxf", "Hycqpx9lM", "HJ4SAuFXz", "SyQ5tOFmM", "rJ8O5vYQG", "SJLV9PKmG", "rkeR64qfM", "H1ghDZfMz", "SJO3UDJGM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "public", "public", "public" ]
[ "I appreciate the authors' in-depth and very thoughtful responses to all of the reviews. I really REALLY like this work, and contrary to the other (IMO, overly negative) reviews, I feel that it fits at ICLR, which has recent history of accepting very solid clinical application work, even without significant methods novelty.\n\nThe reason I could not justify raising my score is that the response, while thoughtful, did little to help me understand the paper in a new light. Most of my critiques (and those in the other reviews) focused on exposition and discussion, but the authors did not provide a revised manuscript, so it's difficult to imagine how much the paper would be improved with some of the content written in the responses. I realize that making substantial revisions over the holidays is a drag (and that revising a manuscript during reviews is not normal practice in machine learning academia), but I typically do not feel comfortable assigning a higher score without seeing the revised manuscript.\n\nFor what it's worth, I can imagine one circumstance where I'd make an exception: if the authors provided a new batch of experimental results on an open data seat like MIMIC.", "This paper presents an important application of modern deep reinforcement learning (RL) methods to learning optimal treatments for sepsis from past patient encounters. From a methods standpoint, it offers nothing new but does synthesize best practice deep RL methods with a differentiable multi-task Gaussian Process (GP) input layer. This means that the proposed architecture can directly handle irregular sampling and missing values without a separate resampling step and can be trained end-to-end to optimize reward -- patient survival -- without a separate ad hoc preprocessing step. The experiments are thorough and the results promising. Overall, strong application work, which I appreciate, but with several flaws that I'd like the authors to address, if possible, during the review period. I'm perfectly willing to raise my score at least one point if my major concerns are addressed.\n\nQUALITY\n\nAlthough the core idea is derivative, the work is executed pretty well. Pros (+) and cons (-) are listed below:\n\n+ discussion of the sepsis application is very strong. I especially appreciated the qualitative analysis of the individual case shown in Figure 4. While only a single anecdote, it provides insight into how the model might yield clinical insights at the bedside.\n+ thorough comparison of competing baselines and clear variants -- though it would be cool to apply offline policy evaluation (OPE) to some of the standard clinical approaches, e.g., EGDT, discussed in the introduction.\n\n- \"uncertainty\" is one of the supposed benefits of the MTGP layer, but it was not at all clear how it was used in practice, other than -- perhaps -- as a regularizer during training, similar to data augmentation.\n- uses offline policy evaluation \"off-the-shelf\" and does not address or speculate the potential pitfalls or dangers of doing so. See \"Note on Offline Policy Evaluation\" below.\n- although I like the anecdote, it tells us very little about the overall policy. The authors might consider some coarse statistical analyses, similar to Figure 3 in Raghu, et al. (though I'm sure you can come up with more and better analyses!). 
\n- there are some interesting patterns in Table 1 that the authors do not discuss, such as the fact that adding the MGP layer appears to reduce expected mortality more (on average) than adding recurrences. Why might this be (my guess is data augmentation)?\n\nCLARITY\n\nPaper is well-written, for the most part. I have some nitpicks about the writing, but in general, it's not a burden to read.\n\n+ core ideas and challenges of the application are communicated clearly\n\n- the authors did not detail how they chose their hyperparameters (number of layers, size of layers, whether to use dropout, etc.). This is critical for fully assessing the import of the empirical results.\n- the text in the figures is virtually impossible to read (too small)\n- the image quality in the figures is pretty bad (and some appear to be weirdly stretched or distorted)\n- I prefer the X-axis labels that Raghu uses in their Figure 4 (with clinically interpretable increments) over the generic +1, +2, etc., labels used in Figure 3 here\n\nSome nitpicks on the writing\n\n* too much passive voice. Example: third paragraph in introduction (\"Despite the promising results of EGDT, concerns arose.\"). Avoid passive voice whenever possible.\n* page 3, sec. 2.2 doesn't flow well. You bounce back and forth between discussion of the Markov assumption and full vs. partial observability. Try to focus on one concept at a time (and the solution offered by a proposed approach). Note that RNNs do NOT relax the Markov assumption -- they simply do an end run around it by using distributed latent representations.\n\nORIGINALITY\n\nThis work scores relatively low in originality. It really just combines ideas from two MLHC 2017 papers [1][2]. One could read those two papers and immediately conclude this paper's findings (the GP helps; RL helps; GP + RL is the best). This paper adds few (if any) new insights.\n\nOne way to address this would be to discuss in greater detail some potential explanations for why their results are stronger than those in Raghu and why the MTGP models outperform their simpler counterparts. Perhaps they could run some experiments to measure performance as a function of the number of MC samples (if performance grows with the number of samples, then it suggests that maybe it's largely a data augmentation effect).\n\nSIGNIFICANCE\n\nThis paper's primary significance is that it provides further evidence that RL could be applied successfully to clinical data and problems, in particular sepsis treatment. However, this gets undersold (unsurprising, given the ML community's disdain for replication studies). It is also noteworthy that the MTGP gives such a large boost in performance for a relatively modest data set -- this property is worth exploring further, since clinical data are often small. However, again, this gets undersold.\n\nOne recommendation I would make is that the authors directly compare the results in this paper with those in Raghu and point out, in particular, the confirmatory results. Interestingly, the shapes of the action vs. mortality rate plots (Figure 4 in Raghu, Figure 3 here) are quite similar -- that's not precisely replication, but it's comforting.\n\nNOTE ON OFFLINE POLICY EVALUATION\n\nThis work has the same flaw that Raghu, et al., has -- neither justifies the use of offline policy evaluation. 
Both simply apply Jiang, et al.'s doubly robust approach [3] \"off the shelf\" without commenting on its accuracy in practice or discussing potential pitfalls (neither even considers [4] which seems to be superior in practice, especially with limited data). As far as I can tell (I'm not an RL expert), the DR approach carries stronger consistency guarantees and reduced variance but is still only as good the data it is trained on, and clinical data is known to have significant bias, particularly with respect to treatment, where clinicians are often following formulaic guidelines. Can we trust the mortality estimates in Table 1? Why or why not? Why shouldn't I think that RL is basically guaranteed to outperform non-RL approaches under an evaluation that is itself an RL model learned from the same training data!\n\nWhile I'm willing to accept that this is the best we can do in this setting (we can't just try the learned policy on new patients!), I think this paper (and similar works, like Raghu, et al.) *must* provide a sober and critical discussion of its results, rather than simply applaud itself for getting the best score among competing approaches.\n\nREFERENCES\n\n[1] Raghu, et al. \"Continuous State-Space Models for Optimal Sepsis Treatment - a Deep Reinforcement Learning Approach.\" MLHC 2017.\n[2] Futoma, et al. \"An Improved Multi-Output Gaussian Process RNN with Real-Time Validation for Early Sepsis Detection.\" MLHC 2017.\n[3] Jiang, et al. \"Doubly robust off-policy value evaluation for reinforcement learning.\" ICML 2016.\n[4] Thompson and Brunskill. \"Data-Efficient Off-Policy Policy Evaluation for Reinforcement Learning.\" ICML 2016.", "The paper presents an application of deep learning to predict optimal treatment of sepsis, using data routinely collected in a hospital. The paper is very clear and well written, with a thorough review of related work. However, the approach is mainly an application of existing methods and the technical novelty is low. Further, the methods are applied to only a single dataset and there is no comparison against the state of the art, only between components of the method. This makes it difficult to assess how much of an improvement this collection of methods provides and how much it would generalize to data from other hospitals or applications. As written, the paper may be more appropriate for an application-focused venue.", "The paper presents a reinforcement learning method that uses Q-learning with deep neural networks and a multi-output Gaussian process for imputation, all for retrospective analysis of treatment decisions for preventing mortality among patient with sepsis.\n\nWhile the work represents a combination of leading methods in the machine learning literature, key details are missing: most importantly, that the reinforcement learning is based on observational data and in a setting where the unconfoundedness assumption is very unlikely to hold. For example, an MGP imputation implicitly assumes MAR (missing at random) unless otherwise specified, e.g. through informative priors. The data is almost certainly MNAR (missing not at random). These concerns ought to be discussed at length.\n\nThe clarity of the work would be improved with figures describing the model (e.g. plate/architecture diagram) and pseudocode. E.g. as it stands, it is not clear how the doubly-robust estimation is being used and if it is appropriate given the above concerns. 
Similar questions for Dueling Double-Deep Q-network, Prioritized Experience Replay.\n\nThe medical motivation does frame the clinical problem well. The paper does serve as a way to generate hypotheses, e.g. greater use of abx and vasopressors but less IVF.\n\nThe results in Table 1 suggest that the algorithmic policy would prevent the death of ~1 in 12 individuals (ARR 8.2%) that a physician takes care of in your population. The text says \"might reduce mortality by as much as 8%\". The authors might consider expanding on this. What can/should be done convince the reader this number is real.\n\nAdditional questions: what is the sensitivity of the analysis to time interval and granularity of the action space (here, 4 hours; 3x3x5 treatments)? How would this work for whole order sets? In the example, abx 1 and abx 2 are recommended in the next 4 hours even after they were already administered. How does this relate to pharmacologic practice, where abx are often dosed at specific, wider intervals, e.g. vancomycin q12h? How could the model be updated for a clinician who acknowledges the action suggestion but dismisses it as incorrect?", "Thank you for the comments and feedback. \n\nHowever, we strongly disagree about the novelty of our work and the overall contribution. Although the constituent methods we rely on in this work are not novel, their combination together in this setting is novel. As noted by R1, an important takeaway from our work is confirmation and replication of existing work, as we show that we can use RL to improve upon current clinical practice in treatment of sepsis. \n\nIt is a valid criticism that we only applied our methods to a single dataset. In the future we plan to apply our methods to the more readily available MIMIC data as well, as a second dataset. However, it takes an enormous amount of manual effort to clean and prepare raw clinical data from an electronic health record for analysis. Even the MIMIC data requires a lot of preprocessing before modeling. In our particular application of interest at an academic university hospital, MIMIC data is not that useful because it only contains data from an ICU setting, and we are interesting in treating sepsis in other clinical settings as well (in our data, only about 20% of sepsis cases first present in the ICU).\n\nThe DQN baseline method we compare against is extremely similar to Raghu et al, which to our knowledge is state of the art. We will make this distinction more clear. There are not many papers published that apply modern reinforcement learning methods to observational clinical data, to our knowledge. If the reviewer has a specific state of the art method that we should compare against, we would be happy to compare against it and add it to our results. We believe the ablation study we performed examining the utility of the MGP and the recurrent architecture is convincing, as (see R1 comment) all our baseline methods shared the same architecture setup and hyperparameters. \n\nWe strongly disagree that it is difficult to assess what improvement these methods provide; our results show clear improvements to using an MGP for interpolation and data augmentation, and recurrence to learn the latent state. It is true that the particular learned policy may not generalize well if it were to be applied to very different patient populations, but the overall method would still apply and a new policy could be learned from that population instead. 
In future work, we could compare how the policy learned on our institutional data might perform on MIMIC (but there is no reason to suspect it would be great, as MIMIC is only ICU patients, which is an extremely different population). As R1 notes, it is an important finding in and of itself that we can use RL and it seems to work well in solving a real clinical problem, confirming the previous findings of Raghu et al.\n\nWe believe that this work is appropriate for ICLR, as one of the relevant topics on the conference website is applications, which our work would clearly fall under.\n", "Thank you for the insightful comments and constructive feedback. We are revising the paper to address your concerns, and the revision will be posted shortly. \n\n- These concerns about observational data and unconfoundedness are fair, and we will make these assumptions more explicit. Although the MGP makes a MAR assumption, in our models we explicitly model the missingness structure of the time series data by using indicator variables representing whether or not a particular variable was recently sampled. Empirically in past work we found modeling this missing data structure to be very helpful, and we will make it more clear that we are doing this. We do not feel that the underlying MAR assumption that comes with the MGP is overly restrictive, as the MGP primarily functions as a preprocessing step to do better interpolation and function as a form of data augmentation to reduce overfitting. As for the unconfoundedness assumption, this is an extremely common assumption in causal inference and off-policy reinforcement learning; we will make this more explicit.\n- We will add a schematic diagram detailing the model architecture and how the MGP feeds into the downstream DRQN. As discussed by R1, we have updated our discussion on off-policy evaluation. In all baseline methods we used dueling double-deep Q-networks and prioritized experience replay, as in Raghu et al. We will make these modifications more clear.\n- As noted by R1, there is some discussion warranted regarding the use of off-policy evaluation, so these concerns are valid. Thus our final mortality reduction estimates may be somewhat optimistic. Rather than dwelling on exact quantities, the take-home message is that the proposed architectures seem to offer improvements, and it seems that simply using an RL approach can improve over current physician policy.\n- We did not do an extensive sensitivity analysis of the time interval size and granularity of the action space, but in our experience changing these did not seem to greatly impact performance. We chose a fairly long (4 hour) time interval, in order to reduce the number of times the \"no treatment\" action would be taken, as of course this gets more common with a finer time window. We chose the action space somewhat heuristically, aiming to make it as fine as possible, while checking to make sure that almost all of the 45 different actions occur at least a reasonable number of times. It might be worth checking to see what the coarsest possible action space would be that does not compromise performance. \n- For whole order sets, we might want to change the action space to have each action be a different (commonly ordered) order set, rather than the way we've broken down the actions. This would certainly be a more actionable and more directly applicable problem setup. 
This is an interesting idea, but for now we leave it to future work.\n- Abx 1 and Abx 2 simply refers to number of antibiotics given, not specific classes. Eg Abx 1 is simply \"1 abx given in this 4 hour window\", and Abx 2 is \"2 or more abx given in this 4 hour window\". As with the previous comment, in future work we will aim to build a more directly actionable action space. Probably in this particular example, since WBC continued to rise, the RL model continued to recommend more abx be given, even after they were already administered. This might be reasonable, since in practice there were 5/7 4 hour windows where at least one abx was given. For dosing specific drugs in wider intervals, we'd want to increase the action space to more directly take into account timing and dosing of different drugs, eg if we want a 6h or a 12h version of a drug.\n- This is a great suggestion, and a very interesting avenue for future work. We would want a clinical decision support tool to log what clinicians actually ended up doing after viewing the RL recommended action, and also log whether or not a clinician views an action suggestion as wrong. Given a dataset of clinician actions and responses to the RL suggestions, we could retrain the model and explicitly penalize cases where the RL model made an incorrect suggestion. There is definitely room for future work here - a caveat is that we don't want to entirely discredit what the model recommends, as there is still room for physician error in judging the model suggestions, and there may be cases where the suggestion actually would have been good.", "\"Originality\" comments:\n- We feel that this combination of GP + RL is interesting and useful. While constituent pieces are not themselves novel, we emphasize that the loss function that we optimize is in fact novel. The use of the GP acts as a form of data augmentation and extra regularization that helped empirically.\n- This is a good idea, and we will try to expand upon this more with experiments comparing MC sample sizes.\n\n\"Significance\":\n- These are both excellent points, and we have updated our discussion to better emphasize them.\n- This is also a good point. It is not feasible in the short term to make a direct comparison against Raghu, et al on the same dataset. However, our DQN baseline method is roughly equivalent to their methodology, and we will make this comparison more explicit. \n\nIn future work, we will also run our method on MIMIC data, so we can have a more direct comparison. It is worth reiterating that our dataset, unlike MIMIC, is not constrained to only ICU patients, but includes patients in every area of the hospital, including Emergency Department, general wards, and ICU.\n\n\"Note on Offline Policy Evaluation\" comments:\n- This is an important point. We will implement and use both [4] and Jiang et al for estimating off-policy values, and show the results of both. We will also update the discussion to be more frank about limitations here.\n- Fair point, and we will revise our discussion and conclusions to be more critical about the limitations of RL in clinical settings, and how difficult it is to evaluate. The best way to see if the learned policies are actually useful is to try using them in practice, but barring that, we can conduct extensive clinical chart reviews to see if they recommend sensible treatments and if they are over-treating.\n", "Thank you for the insightful comments and constructive feedback. 
We are revising the paper to address your concerns, and the revision will be posted shortly. \n\n\"Quality\" comments:\n+ We believe an important practical use for our method is in identifying treatments earlier than they were actually given, as evidenced in our single example. \n+ Practically these comparisons would be difficult to directly make, since in our observational data there is no guarantee about how often standard clinical approaches such as EGDT are actually followed. Although possible in principle to define a computable treatment strategy to try to mimic, eg EGDT (if A then give X, if B then give Y, ...), in practice this would be pretty hard to define. This is a cool idea though that we'll leave to future work.\n- The uncertainty mostly acts as a regularizer, yes. It is also a form of data augmentation, since from a single set of patient clinical time series we get multiple draws from the MGP. Empirically and in past work we've found it reduces overfitting compared to using the MGP mean. It is possible to utilize the associated uncertainty as well in learning the policy, although we did not explore this much. The uncertainty in time series inputs captured by the MGP can be propagated forwards through the DRQN to the learned Q-values, giving some notion of uncertainty in Q-values due to uncertain inputs. Combining this with other Bayesian deep learning methods might give improved uncertainty quantification, which could be useful in learning an optimal (potentially stochastic) policy.\n- Will address OPE below.\n- It is hard to concisely summarize a policy - much room for future work here! Since we have 3 types of actions instead of 2 in Raghu et al it is hard to reproduce this figure, since our policy would need to be visualized as a 3d tensor not a matrix. We explored using a histogram that enumerates all 45 possible actions, but it was very cluttered. An analysis that we have added is checking how often the learned RL policy makes recommendations for treatments before they were actually given by a physician. This gives some notion of how timely a learned policy is, if in many cases it is recommending the same treatments that were eventually given, only sooner.\n- Yes, probably due to the reduction in overfitting associated with the data augmentation effect of the MGP, as during training the MGP provides many inputs by drawing samples from a single set of patient data.\n\n\"Clarity\" comments:\n+ Thank you!\n- We address this more explicitly in the revision. We did not do much widespread experimentation with hyperparameters. We used the same neural network architecture across all methods, in terms of number of layers, layer size, learning rates, etc so it is unlikely our observed results are due to hyperparameters, though improved performance for some methods may be possible by more careful tuning.\n- Will correct text size.\n- Will fix image resolution (we initially tried to fit everything in 8 pages)\n- Will edit these axis labels for IV. For antibiotics and vasopressors, however, our action space depends on quantity and not actual dosing: +1, +2 refers to number of times a drug in a class was given within the time window, and not dosing. 
Our clinical collaborators advised that this makes more sense, especially since for vasopressors the dosing is very unclear and would be hard to quantify numerically due to differences in drugs; we're not sure how Raghu et al assessed vasopressor dosing.\n* Have made some edits to the writing\n* Have revised sec 2.2 to flow better, making more explicit your correct point that we are not relaxing the Markov assumption, but instead use a latent representation that depends on the full history so far.", "Dear Authors,\n\nI am part of a team at University of Technology Sydney participating in the ICLR 2018 Reproducibility Challenge. We have chosen to reproduce your study and are wondering if you would like to share some or all of the code you used please!!\n\nemail: lu.liu-10@student.uts.edu.au\n\nThank you!", "The idea is novel and the experiment results are the best among the papers in treating sepsis.\nHowever, some details of the model design and experiment implementations need to be clarified with help of source code and the author mentioned source code will be released via Github. \nSo would it be okay that the author of this paper release the source code please?\n", "Dear author,\nThank you for you great work!!!\nYou almost include all the method I encountered in this area and I was so impressed by the mortality rate you have reduced via the policy generated by your frame work. And I am very interested in your data preprocessing method and model.\nWould you please release you source code for me to have a glimpse of you model?\nAnd would you please give me more information of the specific data used in your article if that is possible?\nLastly, could you please tell me what features did you choose in your POMDP?\nEmail: zhuowei.wang.cs.uts@gmail.com" ]
[ -1, 6, 3, 4, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, 3, 4, 4, -1, -1, -1, -1, -1, -1, -1 ]
[ "rJlaKw9lG", "iclr_2018_SyxCqGbRZ", "iclr_2018_SyxCqGbRZ", "iclr_2018_SyxCqGbRZ", "rkM5HcFxf", "Hycqpx9lM", "SJLV9PKmG", "rJlaKw9lG", "iclr_2018_SyxCqGbRZ", "iclr_2018_SyxCqGbRZ", "iclr_2018_SyxCqGbRZ" ]
iclr_2018_rkTBjG-AZ
DeepArchitect: Automatically Designing and Training Deep Architectures
In deep learning, performance is strongly affected by the choice of architecture and hyperparameters. While there has been extensive work on automatic hyperparameter optimization for simple spaces, complex spaces such as the space of deep architectures remain largely unexplored. As a result, the choice of architecture is done manually by the human expert through a slow trial and error process guided mainly by intuition. In this paper we describe a framework for automatically designing and training deep models. We propose an extensible and modular language that allows the human expert to compactly represent complex search spaces over architectures and their hyperparameters. The resulting search spaces are tree-structured and therefore easy to traverse. Models can be automatically compiled to computational graphs once values for all hyperparameters have been chosen. We can leverage the structure of the search space to introduce different model search algorithms, such as random search, Monte Carlo tree search (MCTS), and sequential model-based optimization (SMBO). We present experiments comparing the different algorithms on CIFAR-10 and show that MCTS and SMBO outperform random search. We also present experiments on MNIST, showing that the same search space achieves near state-of-the-art performance with a few samples. These experiments show that our framework can be used effectively for model discovery, as it is possible to describe expressive search spaces and discover competitive models without much effort from the human expert. Code for our framework and experiments has been made publicly available.
rejected-papers
This paper introduces a framework for specifying the model search space for exploring over the space of architectures and hyperparameters in deep learning models (often referred to as architecture search). Optimizing over complex architectures is a challenging problem that has received significant attention as deep learning models become more exotic and complex. This work helps to develop a methodology for describing and exploring the complex space of architectures, which is a challenging problem. The authors demonstrate that their method helps to structure the search over hyperparameters using sequential model based optimization and Monte Carlo tree search. The paper is well written and easy to follow. However, the level of technical innovation is low and the experiments don't really demonstrate the merits of the method over existing strategies. One reviewer took issue with the treatment of related work. The underlying idea is compelling and addresses an open question that is of great interest currently. However, without experiments demonstrating that this works better than, e.g., the specification in the hyperopt package, it is difficult to assess the contribution. The authors must do a better job of placing this contributing in the context of existing literature and empirically demonstrate its advantages. The presented experiments show that the method works in a limited setting and don't explore optimization over complex spaces (i.e. over architectures - e.g. number of layers, regularization for each layer, type of each layer, etc.). There's nothing presented empirically that hasn't been possible with standard Bayesian optimization techniques. This is a great start, but it needs more justification empirically (or theoretically). Pros: - Addresses an important and pertinent problem - architecture search for deep learning - Provides an intuitive and interesting solution to specifying the architecture search problem - Well written and clear Cons: - The empirical analysis does not demonstrate the advantages of this approach over existing literature - Needs to place itself better in the context of existing literature
train
[ "Skp1e-zgM", "H1ZsTiUEf", "B1BvqGB4f", "Sy4q4vBgf", "ByV24asxM", "r1RljwaQz", "rk6l68T7G", "BJS0iIpQG", "rJHFv8pXG" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "The author present a language for expressing hyperparameters (HP) of a network. This language allows to define a tree structure search space to cover the case where some HP variable exists only if some previous HP variable took some specific value. Using this tool, they explore the depth of the network, when to apply batch-normalization, when to apply dropout and some optimization variables. They compare the search performance of random search, monte carlo tree search and a basic implementation of a Sequential Model Based Search. \n\nThe novelty in this paper is below what is expected for a publication at ICLR. I recommend rejection.", "Saying that a DSL is not interesting for this community is your subjective opinion. Work on code, frameworks, and APIs has been published at top ML conferences and/or been hugely influential in ML; for example: \n- http://download.tensorflow.org/paper/whitepaper2015.pdf\n- https://papers.nips.cc/paper/5872-efficient-and-robust-automated-machine-learning.pdf\n- https://papers.nips.cc/paper/6986-on-the-fly-operation-batching-in-dynamic-computation-graphs.pdf,\n- https://openreview.net/forum?id=ryrGawqex\n- https://arxiv.org/pdf/1701.03980.pdf\n- https://papers.nips.cc/paper/6256-a-credit-assignment-compiler-for-joint-prediction.pdf).\n\nYou can always take the position that something is not interesting because it does not let you do anything new. Using that line of thought, high-level programming languages are not interesting because you already can accomplish the same in low-level programming languages; Tensorflow, PyTorch, and similar framework are not interesting because you can write your own deep learning code from scratch in C++ or Python. \n\nMuch of the recent progress in ML was greatly facilitated by the existence of high-level tools such Tensorflow and Pytorch. One can only wonder how much more far behind as a community we would be if everyone was still writing their own backpropagation implementations. While Tensorflow or Pytorch do not allow you to do things that could not be done in C++ or Python, they make expressing interesting ML programs drastically easier, and as a result, researchers are able to think about and approach problems differently than they would if there were no such deep learning frameworks. \n\nOur work allows us to think about architecture search differently. Namely, splitting the problem of architecture search into three parts (model search space specification language, model search algorithm, model evaluation algorithm) is a useful perspective as the different parts can be changed and/or improved independently (e.g., a more expressive model search space specification language, a more sample efficient model search algorithm, a different model evaluation algorithm). Additionally, our model search space specification language has interesting ideas on how to induce these complex compositional architecture search spaces (which end up being in complex implicit hyperparameter spaces) by writing expressions in a DSL. The fact that all this could be expressed differently is of limited relevance. The truth is that our DSL allows researchers to think differently about the problem and gives them interesting tools to write expressive search spaces over architectures. 
That being said, we believe that our paper introduces a sufficient number of new and interesting ideas to be of value to this community.\n\nCan you clarify (e.g., give an example) of what you mean by \"directed acyclic graph\" in this context (i.e., in the context of hyperparameters spaces). \n", "A domain specific language would be more appreciated by a different audience, where writing in Python is seen as a drawback and where the DSL is more exciting. As a reviewer for ICLR I am looking for something new that it lets me do. I don't see it yet, and random search wouldn't be compelling.\n\nAlso, there has been some mis-characterization of Hyperopt here: Hyperopt has always supported graph-structured (rather than just tree-structured) compositions in the computational graph / search space input to fmin. The \"Tree of Parzen Windows\" is something of a misnomer in this regard, as the implementation of that algorithm in Hyperopt works on directed acyclic graphs.", "This paper introduces a DeepArchitect framework to build and train deep models automatically. Specifically, the authors proposes three components, i.e., model search specification language, model search algorithm, model evaluation algorithm. The paper is written well, and the proposed framework provides us with a systematical way to design deep models.\n\nHowever, my concern is mainly about its importance in practice. The experiments and computational modules are basic and small-scale, i.e., it may be restricted for large-scale computer vision problems. \n", "Monte-Carlo Tree Search is a reasonable and promising approach to hyperparameter optimization or algorithm configuration in search spaces that involve conditional structure.\n\nThis paper must acknowledge more explicitly that it is not the first to take a graph-search approach. The cited work related to SMAC and Hyperopt / TPE addresses this problem similarly. The technique of separating a description language from the optimization algorithm is also used in both of these projects / lines of research. The [mis-cited] paper titled “Making a science of model search …” is about using TPE to configure 1, 2, and 3 layer convnets for several datasets, including CIFAR-10. SMAC and Hyperopt have been used to search large search spaces involving pre-processing and classification algorithms (e.g. auto-sklearn, autoweka, hyperopt-sklearn). There have been near-annual workshops on AutoML and Bayesian optimization at NIPS and ICML (see e.g. automl.org).\n\nThere is a benchmark suite of hyperparameter optimization problems that would be a better way to evaluate MCTS as a hyperparameter optimization algorithm: http://www.ml4aad.org/automl/hpolib/", "Our work is novel for the reasons elaborated in our other replies. \n\nSince this work has been made available on ArXiv, its ideas have been used or suggested future work on architecture search. For example, the ideas of compositionality for the representation of a search space are used in Hierarchical Representations for Efficient Architecture Search (https://openreview.net/forum?id=BJQRKzbA-); using MCTS to do model search is used in \"Finding Competitive Network Architectures Within a Day Using UCT\"; using SMBO to do model search is used in \"Progressive Neural Architecture Search\", among others.\n\nThe framework set forth by this paper provides a foundation for thinking about architecture search. 
Progress in architecture search can be made by developing a better model search space representation language, giving more expressive tools to a deep learning expert to represent search spaces over models. Progress can also be made by developing better model search algorithms that search spaces of models more efficiently. The fact that this framework is modular is a big advantage, as research can be focused on each of the components rather than having to develop a monolithic system from scratch each time. \n", "The reviewer presents the following criticism:\n1. The decomposition of the framework into model search space specification language, model search algorithm, and model evaluation algorithm is interesting, but there is concern about its importance in practice.\n2. There is concern that the framework may be restrictive for large-scale problems.\n\nResponse: \n1. The decomposition into these three components allows us to think clearly about each of them rather than dealing with all aspects simultaneously. For example, future contributions may focus on extending the model search space specification language or on developing better model search algorithms. These components interact only through a very minimal interface. This decomposition of the problem will be useful for future research on architecture search.\n\n2. There is no fundamental reason why this framework should be restrictive in the way the reviewer is concerned. We discuss in Section 4 the properties required by a module. These are quite general, and therefore we can easily introduce useful new basic and composite modules. See also Appendix B for examples of modules that we defined. As the different components in our framework are highly extensible and modular, our work will be very useful in approaching future problems in architecture search. Namely, the model search space specification language is expressive enough to capture many relevant high-performance search spaces, as discussed in paragraph 4 of section 4.1 and in appendix D.\n\nWe do not have the resources available to conduct experiments on the scale of some other recent papers (e.g., those coming out of major corporate research labs such as Google). Nonetheless, our smaller-scale experiments are enough to support our claims: expressive search spaces over architectures can be represented easily by writing expressions in our model search space specification language (see Appendix A, Figure 4, and Figure 5); the search spaces induced can be effectively searched by random search; using model search algorithms attuned to the structure of the search space results in improved search performance.\n\nOne of the main focuses of this work is the representation power of the model search space specification language. We also note that due to the flexibility of the search space specification language, architecture search can be easily integrated into an ML workflow, as the expert only has to design a reasonable search space and provide a way of evaluating models.\n", "Final remarks: \nThe introduction of the model search space specification language along with just random search experiments would by itself be interesting enough to warrant publication. This domain-specific language is extensible and compositional, allowing the user to easily represent search spaces over architectures and compile them to computational graphs. 
The model search algorithms proposed, while simple, are well-suited to the resulting search spaces and are a good start to design more complex and performant ones for this setting. This is, to the best of our knowledge, the first work to propose such a DSL for architecture search and explore its benefits; as such, it provides an extensible platform for future research on architecture search. To support this benefit, we have made all code available and will continue to extend it.\n", "The reviewer presents the following criticism:\n1. TPE/SMAC also allows us to describe search spaces with conditional structure. \n2. “Making a science of model search …” uses TPE to search over simple convolutional architectures.\n3. The performance of MCTS for hyperparameter optimization would be better evaluated in the benchmarks pointed out by the reviewer.\n\nResponse:\n1. We do not claim to be the first to present a method that works with conditional structure. We clearly state in our paper that there are general-purpose hyperparameter optimization algorithms such as TPE, however these are harder to use for architecture search because they require the user to write more code and single out what are the hyperparameters to search over. In contrast, in our, writing an expression in our DSL (domain-specific language) automatically induces the search space. Furthermore, this language allows us to directly compile the resulting model to the corresponding computational graph. Note that our work in focused on architecture search for deep learning and not general hyperparameter optimization. We are in the same line of work as Zoph and Le (ICLR 2017). \n\nExpressions written in our model search space specification language encode trees; paths through the encoding correspond to fully specified models that can be compiled to computational graphs. Note that this tree is implicit; we only require functionality to traverse the tree, and not a full explicit representation. This is important when there are exponentially many paths from the root to leaves. This contrasts with TPE: e.g., \"Making a science of model search\" uses a simple representation, and therefore is constrained to simple trees. For example, the following (toy) search space is hard to represent in Hyperopt, but poses no problem in our language: \n\n(Repeat \n (Optional \n (Repeat \n (Affine [32, 64]) \n [1, 2, 4])\n )\n[1, 2, 4]) \n\nThe problem arises from the interaction between Optional and Repeat. These problems are exacerbated by deeper nesting. Representing this search space in Hyperopt would require writing a cumbersome cases expression for each of the different combinations of hyperparameter values for Repeat and Optional modules. See https://github.com/jaberg/hyperopt/wiki/FMin for information on the cases construct in Hyperopt: hp.choice. By contrast, our language imposes no such burden on the user.\n\nDue to these significant differences, it is incorrect to say that our DSL for specifying search spaces over architectures is not novel when compared to something such as Hyperopt. \n\nWe explore how the introduction of the search space specification language allows us to construct a integrated framework for architecture search. The main focus of this paper is not to propose new hyperparameter optimization algorithms in current general settings. The model search algorithms that we propose are adjusted to the structures that arise from our model search space specification language. 
The experiments study the potential of different model search algorithms with structures of this type.\n\n2. The paper mentioned does indeed search over simple convolutional architectures on CIFAR-10, nonetheless, the search space is hard-coded and does not make use of the compositionality resulting from the model search space specification language. This point is addressed in the related work section (e.g., paragraph 6 of section 2). One important aspect of our framework is that it allows the user to easily write search spaces over architectures, functioning as a tool to support model discovery. Much of the recent progress in deep learning was supported by the existence of tools that allow experts rapid experimentation and exploration (e.g., Tensorflow, Pytorch). We need to build these tools for architecture search in deep learning, i.e., specific tools for architecture search rather than existing general-purpose hyperparameter optimization tools.\n\n3. Our work focus on the development of a framework for architecture search in deep learning. Currently, there are no standard benchmarks for architecture search. In particular, due to the focus on architecture search, the suggested generic datasets for pipeline tuning (e.g., auto-sklearn, autoweka) would fit well the message of this paper (e.g., from paragraph 3 of section 1 to the end of that section). Our goal is not to propose MCTS as an algorithm for general purpose hyperparameter optimization; rather we propose random, MCTS, and SMBO as simple baseline algorithms for model search that are well-suited to the structures arising from the search spaces induced by our model search space specification language." ]
[ 4, -1, -1, 5, 4, -1, -1, -1, -1 ]
[ 5, -1, -1, 3, 5, -1, -1, -1, -1 ]
[ "iclr_2018_rkTBjG-AZ", "B1BvqGB4f", "BJS0iIpQG", "iclr_2018_rkTBjG-AZ", "iclr_2018_rkTBjG-AZ", "Skp1e-zgM", "Sy4q4vBgf", "rJHFv8pXG", "ByV24asxM" ]
iclr_2018_HyBbjW-RW
Open Loop Hyperparameter Optimization and Determinantal Point Processes
Driven by the need for parallelizable hyperparameter optimization methods, this paper studies \emph{open loop} search methods: sequences that are predetermined and can be generated before a single configuration is evaluated. Examples include grid search, uniform random search, low discrepancy sequences, and other sampling distributions. In particular, we propose the use of k-determinantal point processes in hyperparameter optimization via random search. Compared to conventional uniform random search where hyperparameter settings are sampled independently, a k-DPP promotes diversity. We describe an approach that transforms hyperparameter search spaces for efficient use with a k-DPP. In addition, we introduce a novel Metropolis-Hastings algorithm which can sample from k-DPPs defined over spaces with a mixture of discrete and continuous dimensions. Our experiments show significant benefits over uniform random search in realistic scenarios with a limited budget for training supervised learners, whether in serial or parallel.
rejected-papers
The idea of using the determinant of the covariance matrix over inputs to select experiments to run is a foundational concept of experimental design. Thus it is natural to think about extending such a strategy to sequential model based optimization for the hyperparameters of machine learning models, using recent advances in determinantal point processes. The idea of sampling from k-DPPs to do parallel hyperparameter search, balancing quality and diversity of expected outcomes, seems neat. While the reviewers found the idea interesting, they saw weaknesses in the approach and most importantly were not convinced by the empirical results. All reviewers thought that the baselines were inappropriate given recent work in hyperparameter optimization (and classic work in statistics). Pros: - Useful to a large portion of the community (if it works) - An interesting idea that seems timely Cons: - Only slightly outperforms baselines that are too weak - Not empirically compared to recent literature - Some of the design and methodology require more justification - Experiments are limited to small scale problems
train
[ "ryH3y2Oxf", "SyRrSKFef", "ry7VxCKlM", "rkIuDU27G", "r18Y8I2mM", "SyUfLUhmM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "\nThis paper considers hyperparameter searches in which all of the\ncandidate points are selected in advance. The most common approaches\nare uniform random search and grid search, but more recently\nlow-discrepancy sequences have sometimes been used to try to achieve\nbetter coverage of the space. This paper proposes using a variant of\nthe determinantal point process, the k-DPP to select these points.\nThe idea is that the DPP provides an alternative form of diversity to\nlow-discrepancy sequences.\n\nSome issues I have with this paper:\n\n1. Why a DPP? It's pretty heavyweight. Why not use any of the other\n(potentially cheaper) repulsive point processes that also achieve\ndiversity? Is there anything special about it that justifies this\nwork?\n\n2. What about all of the literature on space-filling designs, e.g.,\nlatin hypercube designs? Statisticians have thought about this for a\nlong time.\n\n3. The motivation for not using low-discrepancy sequences was discrete\nhyperparameters. In practice, people just chop up the space or round.\nIs a simple kernel with one length scale on a one-hot coding adding\nvalue? In this setup, each parameter can only contribute \"same or\ndifferent\" to the diversity assessment. In any case, the evaluations\ndidn't have any discrete parameters. Given that the discrete setting\nwas the motivation for the DPP over LDS, it seems strange not to even\nlook at that case.\n\n4. How do you propose handling ordinal variables? They're a common\ncase of discrete variables but it wouldn't be sensible to use a\none-hot coding.\n\n5. Why no low discrepancy sequence in the experimental evaluation of\nsection 5? Since there's no discrete parameters, I don't see what the\nlimitation is.\n\n6. Why not evaluate any other low discrepancy sequences than Sobol?\n\n7. I didn't understand the novelty of the MCMC method relative to\nvanilla M-H updates. It seems out of place.\n\n8. The figures really need error bars --- Figure 3 in particular. Are\nthese differences statistically significant?\n", "The authors propose k-DPP as an open loop (oblivious to the evaluation of configurations) method for hyperparameter optimization and provide its empirical study and comparison with other methods such as grid search, uniform random search, low-discrepancy Sobol sequences, BO-TPE (Bayesian optimization using tree-structured Parzen estimator) by Bergstra et al. (2011). The k-DPP sampling algorithm and the concept of k-DPP-RBF over hyperparameters are not new, so the main contribution here is the empirical study. \n\nThe first experiment by the authors shows that k-DPP-RBF gives better star discrepancy than uniform random search while being comparable to low-discrepancy Sobol sequences in other metrics such as distance from the center or an arbitrary corner (Fig. 1).\n\nThe second experiment shows surprisingly that for the hard learning rate range, k-DPP-RBF performs better than uniform random search, and moreover, both of these outperform BO-TPE (Fig. 2, column 1).\n\nThe third experiment shows that on good or stable ranges, k-DPP-RBF and its discrete analog slightly outperform uniform random search and its discrete analog, respectively.\n\nI have a few reservations. First, I do not find these outcomes very surprising or informative, except for the second experiment (Fig. 2, column 1). Second, their study only applies to a small number like 3-6 hyperparameters with a small k=20. The real challenge lies in scaling up to many hyperparameters or even k-DPP sampling for larger k. 
Third, the authors do not compare against some relevant, recent work, e.g., Springenberg et al. (http://aad.informatik.uni-freiburg.de/papers/16-NIPS-BOHamiANN.pdf) and Snoek et al. (https://arxiv.org/pdf/1502.05700.pdf) that is essential for this kind of empirical study. \n\n ", "In this paper, the authors consider non-sequential (in the sense that many hyperparameter evaluations are done simultaneously) and uninformed (in the sense that the hyperparameter evaluations are chosen independent of the validation errors observed) hyperparameter search using determinantal point processes (DPPs). DPPs are probability distributions over subsets of a ground set with the property that subsets with more \"diverse\" elements have higher probability. Diverse here is defined using some similarity metric, often a kernel. Under the RBF kernel, the more diverse a set of vectors is, the closer the kernel matrix becomes to the identity matrix, and thus the larger the determinant (and therefore probability under the DPP) grows. The authors propose to do hyperparameter tuning by sampling a set of hyperparameter evaluations from a DPP with the RBF kernel.\n\nOverall, I have a couple of concerns about novelty as well as the experimental evaluation for the authors to address. As the authors rightly point out, sampling hyperparameter values from a DPP is equivalent to sampling proportional to the posterior uncertainy of a Gaussian process, effectively leading to a pure exploration algorithm. As the authors additionally point out, such methods have been considered before, including methods that directly propose to batch Bayesian optimization by choosing a single exploitative point and sampling the remainder of the batch from a DPP (e.g., [Kathuria et al., 2016]). The default procedure for parallel BayesOpt used by SMAC [R2] is (I believe) also to choose a purely explorative batch. I am unconvinced by the argument that \"while this can lead to easy parallelization within one iteration of Bayesian optimization, the overall algorithms are still sequential.\" These methods can typically be expanded to arbitrarily large batches and fully utilize all parallel hardware. Most implementations of batch Bayesian optimization in practice (SMAC and Spearmint as examples) will even start new jobs immediately as jobs finish -- these implementations do not wait for the entire batch to finish typically.\n\nAdditionally, while there has been some work extending GP-based BayesOpt to tree-based parameters [R3], at a minimum SMAC in particular is known well suited to the tree-based parameter search considered by the authors. I am not sure that I agree that TPE is state-of-the-art on these problems: SMAC typically does much better in my experience. \n\nUltimately, my concern is that--considering these tools are open source and relatively stable software at this point--if DPP-only based hyperparameter optimization is truly better than the parallelization approach of SMAC, it should be straightforward enough to download SMAC and demonstrate this. If the argument that BayesOpt is somehow \"still sequential\" is true, then k-DPP-RBF should outperform these tools in terms of wall clock time to perform optimization, correct?\n\n[R1] Kathuria, Tarun and Deshpande, Amit and Kohli, Pushmeet. Batched Gaussian Process Bandit Optimization via Determinantal Point Processes, 2016.\n\n[R2] Several papers, see: http://www.cs.ubc.ca/labs/beta/Projects/SMAC/\n\n[R3] Jenatton, R., Archambeau, C., González, J. and Seeger, M., 2017, July. 
Bayesian Optimization with Tree-structured Dependencies. In International Conference on Machine Learning (pp. 1655-1664).", "We thank AnonReviewer3 for their response. We address the points in order. \n\n1: We show how to efficiently draw samples from arbitrary tree-structured hyperparameter spaces which include continuous and discrete dimensions, which is not clear for other point processes. We also highlight connections between our approach and GP-based BO, which is the most common approach used currently.\n\n2: Bergstra and Bengio, 2012 (http://www.jmlr.org/papers/volume13/bergstra12a/bergstra12a.pdf) found that the Sobol sequence outperformed latin hypercube sampling, and that the Niederreiter and Halton sequences were similar to Sobol. We will add this point to our paper. \n\n3a: The primary motivation for not using a low-discrepancy sequence is actually the tree-structured hyperparameters. It isn't immediately obvious how to map from a sequence in [0,1]^d to arbitrary tree structures. \n\n3b: The experiments in section 5.1 include a categorical hyperparameter (whether or not to regularize), and the experiments in section 5.2 include a number of discrete hyperparameters (e.g. the number of kernels). We will make this more clear in the paper. \n\n4: Great question. As our approach doesn't get any information about the function values (i.e. we're not learning the kernel), we propose to use a unary encoding which evenly spaces the values for ordinal variables. A three-value ordinal variable would then be [1,0,0], [1,1,0], [1,1,1]. If we have any a priori knowledge about how far apart the ordinal variables are, it's simple to include that in this encoding.\n\n5: see response 3c.\n\n6: see response 2.\n\n7: The novelty in Algorithm 2 is in the proposal distribution: instead of sampling uniformly from a discrete set of items (as in Algorithm 1), we draw samples directly from the space over which the DPP is defined (which potentially has continuous and discrete dimensions). It is correct that it uses M-H updates. We can make the novelty more explicit. \n\n8: All experimental results are statistically significant (when k > ~15). For example, in figure 3, k-DPP-RBF has a 99% confidence interval of [82.342,82.344], while uniform has a 99% confidence interval of [82.266,82.268]. We excluded confidence intervals only for readability, but will add this to the text of the paper. \n", "We thank AnonReviewer1 for their response. We will address their points in order. \n\n- We chose our experiments to represent common machine learning setups, but sampling from a k-DPP extends well into higher dimensions and larger k. This can be seen through the connection to GPs (section 4.5): one can generate a draw from a k-DPP by sequentially sampling from (and updating) the posterior variance of a GP. This scales with O(d*k^3), where d is the dimension. A quick experiment shows drawing samples up to k=500 takes less than twenty minutes in small dimensions on a c4.8xlarge AWS EC2 instance (which has 36 cores). As noted in the paper, as k increases, all methods approach the true max, so differences between methods are primarily found with smaller k. \n\n- We thank the reviewer for the suggestions for further comparisons. Both given citations use the same parallelization approach (from Snoek et al., 2012, equation 7). 
In the open loop case, their approach (which approximates the posterior using MCMC then uses their acquisition function to choose the next point) reduces to a variant of k-DPP-RBF that has less repulsive properties. We will add this to our experiment section. \n", "We thank AnonReviewer2 for their review. We will address their points in order.\n\n- The work of Kathuria et al., 2016 is quite related to our experiment section, but has a few differences. While Kathuria et al., 2016 do sample from a DPP within each batch, they only evaluate their approach as part of a sequential BO algorithm, and it is unclear how much of their improvement is from the DPP and how much is from the acquisition function. Their approach is to first maximize their acquisition function (EST), then draw a batch sample using a k-DPP defined only in a relevance region around that point. They also discretize the space, which our results show leads to worse results. We will describe their work in further detail in our paper, so the differences are more clear. \n\n- Parallel optimization in SMAC is described in Hutter et al., 2012. First, they sample points using latin hypercube sampling (which Bergstra and Bengio 2012 [http://www.jmlr.org/papers/volume13/bergstra12a/bergstra12a.pdf] found was outperformed by the Sobol sequence). Then, at each iteration, they choose k points by maximizing the \"optimistic confidence bound\" (-mu + lambda * sigma), for a set of k values for lambda independently drawn from an exponential distribution, where mu and sigma are the posterior mean and variance predicted by their decision tree. In the fully parallel setup our work addresses, mu and sigma are not updated, so fully parallel SMAC is equivalent to uniform sampling.\n\n- Our work addresses the fully parallel case (e.g. running experiments on Amazon EC2 instances, where running one instance for ten hours costs the same as running ten instances for one hour). Thus, starting new jobs immediately as other jobs finish is actually inefficient and not desirable -- if we have the budget to run more jobs, we start them all at the same time (before any jobs finish). \n\n- We will run experiments using sequential SMAC in addition to our experiments using TPE, but we remind the reviewer that both of these approaches are using information unavailable to our open-loop methods. We focus on the fully parallel case because we have practically unlimited parallelization hardware (AWS EC2 instances). For a fixed budget of k evaluations, TPE and sequential SMAC take k times longer than open loop methods (using the simplifying assumption that all evaluations take the same amount of time). Even batch methods are at least twice as slow (with the fastest batch method running two iterations of batch size k/2). \n" ]
[ 4, 4, 4, -1, -1, -1 ]
[ 5, 5, 5, -1, -1, -1 ]
[ "iclr_2018_HyBbjW-RW", "iclr_2018_HyBbjW-RW", "iclr_2018_HyBbjW-RW", "ryH3y2Oxf", "SyRrSKFef", "ry7VxCKlM" ]
iclr_2018_H1Nyf7W0Z
Alpha-divergence bridges maximum likelihood and reinforcement learning in neural sequence generation
Neural sequence generation is commonly approached by using maximum-likelihood (ML) estimation or reinforcement learning (RL). However, it is known that they have their own shortcomings; ML presents training/testing discrepancy, whereas RL suffers from sample inefficiency. We point out that it is difficult to resolve all of the shortcomings simultaneously because of a tradeoff between ML and RL. In order to counteract these problems, we propose an objective function for sequence generation using α-divergence, which leads to an ML-RL integrated method that exploits better parts of ML and RL. We demonstrate that the proposed objective function generalizes ML and RL objective functions because it includes both as its special cases (ML corresponds to α → 0 and RL to α → 1). We provide a proposition stating that the difference between the RL objective function and the proposed one monotonically decreases with increasing α. Experimental results on machine translation tasks show that minimizing the proposed objective function achieves better sequence generation performance than ML-based methods.
rejected-papers
The reviewers agreed that this paper is not quite ready for publication at ICLR. One of the reviewers thought the paper was well written and easy to follow while the two others said the opposite. One of the main criticisms was issues with the composition. The paper seems to lack a clear formal explanation of the problem and the proposed methodology. The reviewers in general weren't convinced by the experiments, complaining about the lack of a required baseline and that the proposed method doesn't seem to significantly help in the experiment presented. Pros: - The proposed idea is interesting - The problem is timely and of interest to the community - Addresses multiple important problems at the intersection of ML and RL in sequence generation Cons: - Novel but somewhat incremental - The experiments are not compelling (i.e. the results are not strong) - A necessary baseline is missing - Significant issues with the writing - both in terms of clarity and correctness.
train
[ "HyTf0MygM", "BkwRmW1Wz", "HyiwD8N-M" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper proposes another training objective for training neural sequence-to-sequence models. The objective is based on alpha-divergence between the true input-output distribution q and the model distribution p. The new objective generalizes Reward-Augmented Maximum Likelihood (RAML) and entropy-regularized Reinforcement Learning (RL), to which it presumably degenerates when alpha goes to 1 or to 0 respectively.\n\nThe paper has significant writing issues. In Paragraph “Maximum Likelihood”, page 2, the formalization of the studied problem is unclear. Do X and Y denote the complete input/output spaces, or do they stand for the training set examples only? In the former case, the statement “x is uniformly sampled from X” does not make sense because X is practically infinite. Same applies to the dirac distribution q(y|x), the true conditional distribution of outputs given inputs is multimodal even for machine translation. If X and Y were meant to refer to the training set, it would be worth mentioning the existence of the test set. Furthermore, in the same Section 2 the paper fails to mention that reinforcement learning training also does not completely correspond to the evaluation approach, at which stage greedy search or beam search is used.\n\nThe proposed method is evaluated on just one dataset. Crucially, there is no comparison to a trivial linear combination of ML and RL, which in one way or another was used in almost all prior work, including GNMT, Bahdanau et al, Ranzato et al. The paper does not argue why alpha divergence is better that the aforementioned combination method and also does not include it in the comparison.\n\nTo sum up, I can not recommend the paper to acceptance, because (a) an important baseline is missing (b) there are serious writing issues.\n", "This paper considers a dichitomy between ML and RL based methods for sequence generation. It is argued that the ML approach has some \"discrepancy\" between the optimization objective and the learning objective, and the RL approach suffers from bad sample complexity. An alpha-divergence formulation is considered to combine both methods.\n\nUnfortunately, I do not understand main points made in this paper and am thus not able to give an accurate evaluation of the technical content of this paper. I therefore have no option but to vote for reject of this paper, based on my educated guess. \n\nBelow are the points that I'm particularly confused about:\n\n1. For the ML formulation, the paper made several particularly confusing remarks. Some of them are blatantly wrong to me. For example, \n\n1.1 The q(.|.) distribution in Eq. (1) *cannot* really be the true distribution, because the true distribution is unknown and therefore cannot be used to construct estimators. From the context, I guess the authors mean \"empirical training distribution\"?\n\n1.2 I understand that the ML objective is different from what the users really care about (e.g., blue score), but this does not seem a \"discrepancy\" to me. The ML estimator simply finds a parameter that is the most consistent to the observed sequences; and if it fails to perform well in some other evaluation criterion such as blue score, it simply means the model is inadequate to describe the data given, or the model class is so large that the give number of samples is insufficient, and as a result one should change his/her modeling to make it more apt to describe the data at hand. 
In summary, I'm not convinced that the fact that ML optimizes a different objective than the blue score is a problem with the ML estimator.\n\nIn addition, I don't see at all why this discrepancy is a discrepancy between training and testing data. As long as both of them are identically distributed, then no discrepancy exists.\n\n1.3 In point (ii) under the maximum likelihood section, I don't understand it at all and I think both sentences are wrong. First, the model is *not* trained on the true distribution which is unknown. The model is trained on an empirical distribution whose points are sampled from the true distribution. I also don't understand why it is evaluated using p_theta; if I understand correctly, the model is evaluated on a held-out test data, which is also generated from the underlying true distribution.\n\n2. For the RL approach, I think it is very unclear as a formulation of an estimator. For example, in Eq. (2), what is r and what is y*? It is mentioned that r is a \"reward\" function, but I don't know what it means and the authors should perhaps explain further. I just don't see how one obtains an estimated parameter theta from the formulation in Eq. (2), using training examples.", "Summary of the paper: \n\nThis paper presents a method, called \\alpha-DM (the authors used this name because they are using \\alpha-Divergence to measure the distance between two distributions), that addresses three important problems simultaneously: \n(a) Objective score discrepancy: i.e., in ML we minimize a cost function but we measure performance using something else, e.g., minimizing cross entropy and then measuring performance using BLEU score in Machine Translation (MT). \n(b) Sampling distribution discrepancy: The model is trained using samples from true distribution but evaluated using samples from the learned distribution\n(c) Sample inefficiency: The RL model might rarely draw samples with high rewards which makes it difficult to compute gradients accurately for objective function’s optimization \n\nThen the authors present the results for machine translation task and also analysis of their proposed method.\n\nMy comments / feedback: \n\nThe paper is well written and the problem addressed by the paper is an important one. My main concerns about this work are have two aspects: \n(a)\tNovelty\n1.\tThe idea is a good one and is great incremental research building on the top of previous ideas. I do not agree with statements like “We demonstrate that the proposed objective function generalizes ML and RL objective functions …” that authors have made in the abstract. There is not enough evidence in the paper to validate this statement.\n(b)\tExperimental Results\n2.\tThe performance of the proposed method is not significantly better than other models in MT task. I am also wondering why authors have not tried their method on at least one more task? E.g., in CNN+LSTM based image captioning, the perplexity is minimized as cost function but the performance is measured by BLEU etc. \n\nSome minor comments: \n\n1.\tIn page 2, 6th line after eq (1), “… these two problems” --> “… these three problems” \n2.\tIn page 2, the line before the last line, “… resolbing problem” --> “… resolving problem”\n" ]
[ 4, 4, 4 ]
[ 5, 1, 3 ]
[ "iclr_2018_H1Nyf7W0Z", "iclr_2018_H1Nyf7W0Z", "iclr_2018_H1Nyf7W0Z" ]
iclr_2018_ryk77mbRZ
Noise-Based Regularizers for Recurrent Neural Networks
Recurrent neural networks (RNNs) are powerful models for sequential data. They can approximate arbitrary computations, and have been used successfully in domains such as text and speech. However, the flexibility of RNNs makes them susceptible to overfitting and regularization is important. We develop a noise-based regularization method for RNNs. The idea is simple and easy to implement: we inject noise in the hidden units of the RNN and then maximize the original RNN's likelihood averaged over the injected noise. On a language modeling benchmark, our method achieves better performance than the deterministic RNN and the variational dropout.
rejected-papers
This paper proposes a regularizer for recurrent neural networks, based on injecting random noise into the hidden unit activations. In general the reviewers thought that the paper was well written and easy to understand. However, the major concern among the reviewers was a lack of empirical evidence that the method works consistently. Essentially, the reviewers were not compelled by the presented experiments and demanded more rigorous empirical validation of the approach. Pros: - Well written and easy to follow - An interesting idea - Regularizing RNNs is an interesting and active area of research in the community Cons: - The experiments are not compelling and are questioned by all the reviewers - The writing does not cite relevant related work - The work seems underexplored (empirically and methodologically)
train
[ "Syr4moYxG", "Sk6_QZcgM", "ry22qzclM", "H17Hs5e-z", "ByHyFBcef", "ry5oc0bxM", "SJuWpvGAZ" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "public", "public", "author", "public" ]
[ "The authors of the paper advocate injecting noise into the activations of recurrent networks for regularisation. This is done by replacing the deterministic units with stochastic ones.\n\nThe paper has several issues with respect to the method and related work. \n\n- The paper needs to mention [Graves 2011], which is one of the first works to inject noise into the dynamics of an RNN. It is also important to know how these two approaches differ. E.g.: Under what conditions are the two approaches equivalent? How do they compare experimentally?\n- While [Bayer & Osendorfer, 2014] and [Chung et al, 2015] appear in the list of references, these works are not discussed in the main text. I personally think these are extremely related, pioneering the use of stochastic units in a recurrent context. In the end, the original paper can be cast in these frameworks approximately by removing the KL term of the ELBO. This might be ok by itself, but that the authors are apparently aware of the work (as it is in the list of references) and not discussing them in the main text makes me highly skeptical.\n- The method is introduced for general exponential families, but a) not empirically evaluated for more than the Gaussian case and b) not a complete algorithm for e.g. the Bernoulli case. More specifically, the reader is left alone with the problem of estimating the gradients in the Bernoulli case, which is an active area of research by itself.\n- The paper makes use of the reparameterisation trick, but does not cite the relevant literature, e.g. [Kingma 2013, Rezende 2014, and another one I currently struggle to find].\n- The desiderate for noise seem completely arbitrary to me and are not justified. I don’t see why violation of any of them would lead to an inferior regularisation method.\n\n### References\n[Graves 2011] Graves, Alex. \"Practical variational inference for neural networks.\" Advances in Neural Information Processing Systems. 2011.\n[Kingma 2013] Kingma, Diederik P., and Max Welling. \"Auto-encoding variational bayes.\" arXiv preprint arXiv:1312.6114 (2013).\n[Rezende 2014] Rezende, Danilo Jimenez, Shakir Mohamed, and Daan Wierstra. \"Stochastic backpropagation and approximate inference in deep generative models.\" arXiv preprint arXiv:1401.4082 (2014).\n", "The RNN transition function is: h_t+1 = f(h_t,x_t)\nThis paper proposes using a stochastic transition function instead of a deterministic one.\ni.e h_{t+1} \\sim expfam(mean = f(h_t,x_t), gamma) where expfam denotes a distribution from the exponential family.\n\nThe experimental results consider text modeling (evaluating on perplexity) on Penn Treebank and Wikitext-2. The method of regularization is compared to a reimplementation of Variational Dropout and no regularization.\n\nThe work is written clearly and easy to follow.\n\nOverall, the core idea in this work is interesting but underexplored. \n\n* As of when I read this paper, all results on this work used 200 hidden units realizing results that were well off from the state of the art results on Penn Tree Bank (as pointed out by the external reader).\nThe authors responded by stating that this was done to achieve a relative comparison. A more interesting comparison, in addition to the ones presented, would be to see how well each method performs while not controlling for hidden layer size. 
Then, it might be that restricting the number of hidden dimensions is required for the RNN without any regularization but for both Variational Dropout and Noisin, one obtains better results with a larger the hidden dimension.\n\n* The current experimental setup makes it difficult to assess when the proposed regularization is useful. Table 2 suggests the answer is sometimes and Table 3 suggests its marginally useful when the RNN size is restricted.\n\n* How does the proposed method's peformance compare to Zoneout https://arxiv.org/pdf/1606.01305.pdf?\n\n* Clarifying the role of variational inference: I could be missing something but I don't see a good reason why the prior (even if learned) should be close to the true posterior under the model. I fear the bound in Section (3) [please include equation numbers in the paper] could be quite loose.\n\n* What is the rationale for not comparing to the model proposed in [Chung et. al] where there is a stochastic and deterministic component to the transition function? In what situations do we expect the fully stochastic transition here to work better than a model that has both? Presumably, some aspect of the latent variable + RNN model could be expressed by having a small variance for a subset of the dimensions and large one for the others\nbut since gamma is the same across all dimensions of the model, I'm not sure this feature can be incorporated into the current approach. Such a comparison would also empirically verify what happens when learning with the prior versus doing inference with an approximate posterior helps.\n\n* The regularization is motivated from the point of view of sampling the hidden states to be from the exponential family, but all the experiments provided seem to use a Gaussian distribution. This paper would be strengthened by a discussion and experimentation with other kinds of distributions in the exponential family.", "In order to regularize RNNs, the paper suggests to inject noise into hidden units. More specifically, the suggested technique resembles optimizing the expected log likelihood under the hidden states prior, a lower bound to the data log-likelihood.\n\nThe described approach seems to be simple. Yet, several details are unclear, or only available implicitly. For example, on page 5, the Monte Carlo estimation of Lt is given (please use equation number on every equation). What is missing here are some details on how to compute the gradient for U and Wl. A least zt is sampled from zt-1, so some form of e.g. reparameterization has to happen for gradient computation? Are all distributions from the exponential family amendable to this type of reparamterization? With respect to the Exp. Fam.: During all experiments, only Gaussians are used? why cover this whole class of distributions? Experiments seem to be too small: After all the paper is about regularization, why are there no truely large models, e.g. like state-of-the-art instances? What is the procedure at test time?", "Greetings to the authors of this paper,\n\nYour paper is very interesting and insightful. As part of a reproducibility challenge (http://www.cs.mcgill.ca/~jpineau/ICLR2018-ReproducibilityChallenge.html) , our team of students would like to attempt at reproducing the results of your paper. 
We are not affiliated with the official reviewers.\n\nIf it would be possible, it would be incredibly helpful if you are interested in providing parts of the code used in your implementations.\n\nIf you are interested, please comment below, and we can arrange to contact each other in private.\n\nThank you\n", "This paper proposed to inject noise in the hidden units of the RNN and then maximize the original RNN’s likelihood averaged over the injected noise.\n\nIt seem closely related to [*], which injects noise on the weight of the RNN during training (enhancing exploration of the model-parameter space) and model averaging when testing. [*] also performs regularization, as it yields a principled Bayesian learning algorithm. It is curious to see the performance comparison.\n\n[*] Scalable Bayesian Learning of Recurrent Neural Networks for Language Modeling, ACL 2017\n\n\n", "Hi Aaron,\n\nThank you for your questions and remarks. This paper proposes a method for reducing overfitting in RNNs. This consists in injecting noise judiciously into the hidden units of the RNN. We perform training by integrating over the injected noise. This has an ensembling effect to it just like dropout. Our purpose was to compare to the non-regularized RNN and to variational dropout as implemented in Gal et al. 2016 (https://arxiv.org/abs/1512.05287). \n\nIn table 2 we used 200 hidden units so it is expected that we don't match the state-of-the-art numbers which correspond to way bigger networks. Again, the purpose was to compare performance of the three types of networks: deterministic, regularized with dropout, regularized with noisin. The benefits we are seeing with our method will be even better with bigger networks where regularization is beneficial. We are adding those results in the revision.", "I suspect that something may be wrong with the way that you are training your language model. Is it possible that you are overfitting? You don't say the dimensionality of the RNN you used. Knowing that would help interpret your numbers better.\n\nThe perplexities in Table 2 seem to get worse when you add more layers. Also, in general these numbers are way higher than expected. The perplexity from your RNN baseline is higher than a 5-gram LM. Your NOSIN model is much worse than Mikolov's 2011 RNN and the state-of-the-art has improved a lot since then." ]
[ 2, 5, 3, -1, -1, -1, -1 ]
[ 5, 4, 3, -1, -1, -1, -1 ]
[ "iclr_2018_ryk77mbRZ", "iclr_2018_ryk77mbRZ", "iclr_2018_ryk77mbRZ", "iclr_2018_ryk77mbRZ", "iclr_2018_ryk77mbRZ", "SJuWpvGAZ", "iclr_2018_ryk77mbRZ" ]
iclr_2018_r1vccClCb
Neighbor-encoder
We propose a novel unsupervised representation learning framework called neighbor-encoder in which domain knowledge can be trivially incorporated into the learning process without modifying the general encoder-decoder architecture. In contrast to autoencoder, which reconstructs the input data, neighbor-encoder reconstructs the input data's neighbors. The proposed neighbor-encoder can be considered as a generalization of autoencoder as the input data can be treated as the nearest neighbor of itself with zero distance. By reformulating the representation learning problem as a neighbor reconstruction problem, domain knowledge can be easily incorporated with appropriate definition of similarity or distance between objects. As such, any existing similarity search algorithms can be easily integrated into our framework. Applications of other algorithms (e.g., association rule mining) in our framework are also possible since the concept of "neighbor" is an abstraction which can be appropriately defined differently in different contexts. We have demonstrated the effectiveness of our framework in various domains, including images, time series, music, etc., with various neighbor definitions. Experimental results show that neighbor-encoder outperforms autoencoder in most scenarios we considered.
rejected-papers
The paper proposes a form of autoencoder that learns to predict the neighbors of a given input vector rather than the input itself. The idea is nice but there are some reviewer concerns about insufficient evaluation and the effect of the curse of dimensionality. The revised paper does address some questions and includes additional helpful experiments with different types of autoencoders. However, the work is still a bit preliminary. The area of auto-encoder variants, and corresponding experiments on CIFAR-10 and the like, is crowded. In order to convince the reader that a new approach makes a real contribution, it should have very thorough experiments. Suggestions: try to improve the CIFAR-10 numbers (they need not be state-of-the-art but should be more credible), adding more data sets (especially high-dimensional ones), and analyzing the effects of factors that are likely to be important (e.g. dimensionality, choice of distance function for neighbor search).
train
[ "Hk4qYw7eG", "HJDy-RKef", "S1JBxOqlz", "SJzuXLaXz", "r1nqmLp7M", "rJ-BQUaXM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "This paper describes a generalization of autoencoders that are trained to reconstruct a close neighbor of its input, instead of merely the input itself. Experiments on 3 datasets show that this yields better representations in terms of post hoc classification with a linear classifier or clustering, compared to a regular autoencoder.\n\nAs the authors recognize, there is a long history of research on variants of autoencoders. Unfortunately this paper compares with none of them. While the authors suggest that, since these variations can be combined with the proposed neighbor reconstruction variant, it's not necessary to compare with these other variations, I disagree. It could very well be that this neighbor trick makes other methods worse for instance. \n\nAt the very least, I would expect a comparison with denoising autoencoders, since they are similar if one thinks of the use of neighbors as a structured form of noise added to the input. It could very well be in fact that simply adding noise to the input is sufficient to force the autoencoder to learn a valuable representation, and that the neighbor reconstruction approach is simply an overly complicated approach of achieving the same results. This is an open question right now that I'd expect this paper to answer.\n\nFinally, I think results would be more impressive and likely to have impact if the authors used datasets that are more commonly used for representation learning, so that a direct performance comparison can be made with previously published results. CIFAR 10 and SVHN would be good alternatives.\n\nOverall, I'm afraid I must recommend that this paper be rejected.\n", "A representation learning framework from unsupervised data, based not on auto-encoding (x in, x out), but on neighbor-encoding (x in, N(x) out, where N(.) denotes the neighbor(s) of x) is introduced. \n\nThe underlying idea is interesting, as such, each and every degree of freedom do not synthesize itself similar to the auto-encoder setting, but rather synthesize a neighbor, or k-neighbors. The authors argue that this form of unsupervised learning is more powerful compared to the standard auto-encoder setting, and some preliminary experimental proof is also provided. \n\nHowever, I would argue that this is not a completely abstract - unsupervised representation learning setting since defining what is \"a neighbor\" and what is \"not a neighbor\" requires quite a bit of domain knowledge. As we all know, the euclidian distance, or any other comparable norm, suffers from the \"Curse of Dimensionality\" as the #-of-Dimensions increase. \n\nFor instance, in section 4.3, the 40-dimensional feature vector space is used to define neighbors in. It would be great how the neighborhood topology in that space looks like.\n\nAll in all, I do like the idea as a concept but I am wary about its applicability to real data where defining a good neighborhood metric might be a major challenge of its own. ", "This paper presents a variant of auto-encoder that relaxes the decoder targets to be neighbors of a data point. Different from original auto-encoder, where data point x and the decoder output \\hat{x} are forced to be close, the neighbor-encoder encourage the decoder output to be similar to the neighbors of the input data point. By considering the neighbor information, the decoder targets would have smaller intra-class distances, thus larger inter-class distances, which helps to learn better separated latent representation of data in terms of data clusters. 
The authors conduct experiments on several real but relative small-scale data sets, and demonstrate the improvements of learned latent representations by using neighbors as targets. \n\nThe method of neighbor prediction is a simple and small modification of the original auto-encoder, but seems to provide a way to augment the targets such that intra-class distance of decoder targets can be tightened. Improvements in the conducted experiments seem significant compared to the most basic auto-encoder.\n\nMajor issues:\n\nThere are some unaddressed theoretical questions. The optimal solution to predict the set of neighbor points in mean-squared metric is to predict the average of those points, which is not well justified as the averaged image can easily fall off the data manifold. This may lead to a more blurry reconstruction when k increases, despite the intra-class targets are tight. It can also in turn harm the latent representation when euclidean neighbors are not actually similar (e.g. images in cifar10/imagenet that are not as simple as 10 digits). This seems to be a defect of the neighbor-encoder method and is not discussed in the paper.\n\nThe data sets used in the experiments are relatively small and simple, larger-scale experiments should be conducted. The fluctuations in Figure 9 and 10 suggest the significant variances in the results. Also, more complicated data/images can decrease the actual similarities of euclidean neighbors, thus affecting the results.\n\nThe baselines are weak. Only the most basic auto-encoder is compared, no additional variants or other data augmentation techniques are compared. It is possible other variants improve the basic auto-encoder in similar ways. \n\nSome results are not very well explained. It seems the performance increases monotonically as the number of neighbors increases (Figure 5, 9, 10). Will this continue or when will the performance decrease? I would expect it to decrease as the far away neighbors will be dissimilar. The authors can either attach the nearest neighbors figures or their statistics, and provide explanations on when and why the performance decrease is expected.\n\nSome notations are confusing and need to be improved. For example, X and Y are actually the same set of images, the separation is a bit confusing; y_i \\in y in last paragraph of page 4 is incorrect, should use something like y_i in N(y).", "Dear Reviewer,\n\nThank you for your helpful review and kind words! We are glad that you like the idea.\n\nIn the review, you have argued that the neighbor-encoder method is not a completely abstract-unsupervised representation learning method as it requires domain knowledge to define the neighbor relationship. This statement is certainly valid, as we do need some domain knowledge. However, the amount of domain knowledge required by neighbor-encoder is minimal in comparison to what is required by a typical supervised representation learning method: we only need a \"neighbor\" to be defined, the \"non-neighbor\" information is not needed. In other words, we only need to know what is \"similar\" (and this information can be very sparse), but not what is \"not similar\" (the key information needed to divide objects into different classes/clusters). \nFurthermore, note that the domain knowledge provided do not need to be precise. Our MINST example in Section 4.1 simply use Euclidean distance in raw pixel space as the similarity measure to find the neighbors. 
For the newly added CIFAR10 data set Section 4.2, we use Euclidean distance in a common computer vision feature space as the similarity measure; the feature selected does not have much discriminative power for this data set and only 22% of the object-neighbor pairs are from the same class. Nevertheless, the results (Figure 9 and Table 2) show that all three variants of neighbor-encoder outperform their autoencoder counterparts in both semi-supervised classification (when number of labeled data is small) and clustering tasks.\n\nTo clarify, our claim is not that neighbor-encoder is a purely unsupervised representation learning method. Instead, our claim is that even a tiny amount of domain knowledge can greatly improve unsupervised representation learning, and neighbor-encoder is an effective way to incorporate such domain knowledge into the unsupervised representation learning framework.\n\nFor any comparable norm based neighbor definition, \"curse of dimensionality\" indeed would be a problem. To quantify the severity of such problem, we measured the percentage of object-neighbor pairs being in the same class. For example, in Section 4.4 (originally Section 4.3), about 49% of the object-neighbor pairs in the 40-dimensional feature vector space are in the same class (note that this is relatively high, as the default rate for randomly assigned neighbor is just ~9% for this data set). Another way we envision that can further increase this percentage is to use side information to define a neighbor (as introduced in Section 3.4). For instance, images/document on the same webpage or reviews of the same paper/movie/music could be declared being neighbor of each other. Such side information would much less sensitive to the curse of dimensionality.\n\nThanks,\nAuthors", "Dear Reviewer,\n\nWe really appreciate your valuable review! We have modified our paper based on your feedback by:\n \n1) adding denoising and variational autoencoder (and their neighbor-encoder counterparts) to all experiments, and …\n2) adding a new set of experiment on CIFAR 10 in Section 4.2. In all the experiments, we observed that neighbor-encoder and its variants outperform their autoencoder counterparts when applied in semi-supervised classification (when the number of labeled data available is small) and clustering tasks.\n\nThanks,\nAuthors", "Dear Reviewer,\n\nThank you very much for your valuable review. Addressing your concerns has made our paper much stronger. Our responses to the major issues are listed below:\n\nIssue 1: There are some unaddressed theoretical questions. The optimal solution to predict the set of neighbor points in mean-squared metric is to predict the average of those points, which is not well justified as the averaged image can easily fall off the data manifold. This may lead to a more blurry reconstruction when k increases, despite the intra-class targets are tight. It can also in turn harm the latent representation when euclidean neighbors are not actually similar (e.g. images in cifar10/imagenet that are not as simple as 10 digits). This seems to be a defect of the neighbor-encoder method and is not discussed in the paper.\nResponse to Issue 1: Thank you for raising this concern. The issue is addressed by removing the original configuration in question (in which we randomly selected one of the k nearest neighbors as the target to predict) as it does have this \"averaging\" problem. 
All the experiments are rerun with the most basic neighbor-encoder setting, in which we predict only the nearest neighbor of each object. As the target to predict is fixed, we no longer suffer from the \"averaging neighbors\" problem.\n\nIssue 2: The data sets used in the experiments are relatively small and simple, larger-scale experiments should be conducted. The fluctuations in Figure 9 and 10 suggest the significant variances in the results. Also, more complicated data/images can decrease the actual similarities of euclidean neighbors, thus affecting the results.\nResponse to Issue 2: After we rerun all the experiments described in response to Issue 1, we no longer see significant variance in the results. A new set of experiment on CIFAR 10 is performed and reported in Section 4.2. We also included experiments comparing three variants of neighbor-encoder (vanilla, denoising, variational) with their autoencoder counterparts.\n\nIssue 3: The baselines are weak. Only the most basic auto-encoder is compared, no additional variants or other data augmentation techniques are compared. It is possible other variants improve the basic auto-encoder in similar ways.\nResponse to Issue 3: We added comparison to two more popular variants of autoencoder, the denoising and variational autoencoder, in all of our experiments.\n\nIssue 4: Some results are not very well explained. It seems the performance increases monotonically as the number of neighbors increases (Figure 5, 9, 10). Will this continue or when will the performance decrease? I would expect it to decrease as the far away neighbors will be dissimilar. The authors can either attach the nearest neighbors figures or their statistics, and provide explanations on when and why the performance decrease is expected.\nResponse to Issue 4: We believe that Figure 6 addresses this issue. A new set of experiments is performed by using neighbors that are further away (i.e., changing 1st neighbor to the ith nearest neighbor). The performance decreases as expected when i is larger than 16 because the performance is crippled by lower quality neighbors. Figure 15 shows example neighbor pairs under different proximity settings.\n\nIssue 5: Some notations are confusing and need to be improved. For example, X and Y are actually the same set of images, the separation is a bit confusing; y_i \\in y in last paragraph of page 4 is incorrect, should use something like y_i in N(y).\nResponse to Issue 5: The notation is improved as suggested.\n\nThanks,\nAuthors" ]
[ 5, 6, 4, -1, -1, -1 ]
[ 5, 4, 4, -1, -1, -1 ]
[ "iclr_2018_r1vccClCb", "iclr_2018_r1vccClCb", "iclr_2018_r1vccClCb", "HJDy-RKef", "Hk4qYw7eG", "S1JBxOqlz" ]
iclr_2018_SkYMnLxRW
Weighted Transformer Network for Machine Translation
State-of-the-art results on neural machine translation often use attentional sequence-to-sequence models with some form of convolution or recursion. Vaswani et al. (2017) propose a new architecture that avoids recurrence and convolution completely. Instead, it uses only self-attention and feed-forward layers. While the proposed architecture achieves state-of-the-art results on several machine translation tasks, it requires a large number of parameters and training iterations to converge. We propose Weighted Transformer, a Transformer with modified attention layers, that not only outperforms the baseline network in BLEU score but also converges 15-40% faster. Specifically, we replace the multi-head attention by multiple self-attention branches that the model learns to combine during the training process. Our model improves the state-of-the-art performance by 0.5 BLEU points on the WMT 2014 English-to-German translation task and by 0.4 on the English-to-French translation task.
rejected-papers
The paper proposes a modification to the Transformer network, which mostly consists in changing how the attention heads are combined. The contribution is incremental, and its novelty is limited. The results demonstrate an improvement over the baseline at the cost of a more complicated training procedure with more hyper-parameters, and it is possible that with similar tuning the baseline performance could be improved in a similar way.
train
[ "SJQVdQ5lG", "Hy_tscFgf", "SyIxIgcxf", "r18Flq4mz", "HJZvfPMzf", "BybJGDfGz", "B16UZDGMM", "HkFhFG1-f", "ByZykz-ez", "HJ0Fy8egG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "public", "author", "public" ]
[ "This paper describes an extension to the recently introduced Transformer networks which shows better convergence properties and also improves results on standard machine translation benchmarks. \n\nThis is a great paper -- it introduces a relatively simple extension of Transformer networks which only adds very few parameters and speeds up convergence and achieves better results. It would have been good to also add a motivation for doing this (for example, this idea can be interpreted as having a variable number of attention heads which can be blended in and out with a single learned parameter, hence making it easier to use the parameters where they are needed). Also, it would be interesting to see how important the concatenation weight and the addition weight are relative to each other -- do you possibly get the same results even without the concatenation weight? \n\nA suggested improvement: Please check the references in the introduction and see if you can find earlier ones -- for example, language modeling with RNNs has been done for a very long time, not just since 2017 which are the ones you list; similar for speech recognition etc. (which probably has been done since 1993!).\n\nAddition to the original review: Your added additional results table clarifies a lot, thank you. As for general references for RNNs, I am not sure Hochreiter & Schmidhuber 1997 is a good reference as this only points to a particular type of RNN that is used today a lot. For speech recognition there are many better citations as well, check the conference proceedings from ICASSP for papers from Microsoft, Google, IBM, which are the leaders in speech recognition technology. However, I know citations can be difficult to get right for everybody, just try to do your best. ", "The paper presentes a small extension to the Neural Transformer model of Vaswani et al 2017:\nthe multi-head attention computation (eq. 2,3):\nhead_i = Attention_i(Q,K,W)\nMultiHead = Concat_i(head_i) * W = \\sum_i head_i * W_i\n\nis replaced with the so-called BranchedAttention (eq. 5,6,7,4):\nhead_i = Attention_i(Q,K,W) // same as in the base model\nBranchedAttention = \\sum_i \\alpha_i max(0, head_i * W_i * kappa_i * W^1 + b^1) W^2 + b^2\n\nThe main difference is that the results of application of each attention head is post-processed with a 2-layer ReLU network before being summed into the aggregated attention vector.\n\nMy main problem with the paper is understanding what really is implemented: the paper states that with alpha_i=1 and kappa_i=1 the two attention mechanism are equivalent. The equations, however, tell a different story: the original MultiHead attention quickly aggregates all attention heads, while the proposed BranchedAttention adds another processing step, effectively adding depth to the model.\n\nSince the BranchedAttention is the key novelty of the paper, I am confused by this contradiction and treat it as a fatal flaw of this paper (I am willing to revise my score if the authors explain the equations) - the proposed attention either adds a small amount of parameters (the alphas and kappas) that can be absorbed by the other weights of the network, and the added alphas and kappas are easier/faster to optimize, as the authors state in the text, or the BranchedAttention works as shown in the equations, and effectively adds depth to the network by processing each attention's result with a small MLP before combining multiple attention heads. 
This has to be clarified before the paper is published.\n\nThe experiments show that the proposed change speeds convergence and improves the results by about 1 BLEU point. However, this requires a different learning rate schedule for the introduced parameters and some non-standard tricks, such as freezing the alphas and kappas toward the end of training.\n\nI also have a few questions about the presented results:\n1) The numbers for the original transformer match the ones in Vaswani et al 2017, am I correct to assume that the authors did not rerun the tensor2tensor code and simply copied them from the paper?\n2) Is all of the experimental setup the same as in Vaswani et al 2017? Are the results obtained using their tensor2tensor implementation, or are some hyperparameters different?\n\nDetailed review:\nQuality:\nThe equations and text in the paper contradict each other.\n\nClarity:\nThe language is clear, but the main contribution could be better explained.\n\nOriginality:\nThe proposed change is a small extension to the Neural Transformer model.\n\nSignificance:\nRather small, the proposed addition adds little modeling power to the network and its advantage may vanish with more data/different learning rate schedule.\n\nPros and cons:\n+ the proposed approach is a simple way to improve the performance of multihead attentional models.\n- it is not clear from the paper how the proposed extension works: does it regularize the model or does it increase its capacity?", "TL;DR of paper: they modify the Transformer architecture of Vaswani et al. (2017) to use branched attention with learned weights instead of concatenated attention, and achieve improved results on machine translation.\n\nUsing branches instead of a single path has become a hot architecture choice recently, and this paper applies the branching concept to multi-head attention. Weirdly, they propose using two different sets of weights for each branch: (a) kappa, which premultiplies the head before fully connected layers, and (b) alpha, which are the weights of the sum of the heads after the fully connected layers. Both weights have simplex constraints. A couple of questions about this:\n\n* What is the performance of only using kappa? Only alpha? Neither? What happens if I train only one of them?\n* What happens if you remove the simplex constraints (i.e., don't have to sum to one, or can be negative)?\n* Why learn a global set of weights for the branch combiners? What happens if the weights are predicted for each input example? This is the MoE experiment, but where k = M (i.e., no discrete choices made).\n* Are the FFN layer parameters shared across the different heads?\n* At the top of page 4, it is said \"all bounds are respected during each training step by projection\". What does this mean? Is projected gradient descent used, or is a softmax used? If the former, why not use a softmax?\n* In Figure 3, it looks like the kappa and alpha values are still changing significantly before they are frozen. What happens if you let them train longer? On the same note, the claim is that Transformer takes longer to train. What is the performance of Transformer if using the same number of steps as the weighted Transformer?\n* What are the Transformer variants A, B, and C?\n\nWhile the results are an improvement over the baseline Transformer, my main concern with this paper is that the improved results are because of extensive hyperparameter tuning. 
Design choices like having a separate learning rate schedule for the alpha and kappa parameters, and needing to freeze them at the end of training stoke this concern. I'm happy to change my score if the authors can provide empirical evidence for each design choice", "Focusing on the FFN layers, while they are separately applied and added up, each of them is of commensurately smaller size and hence does not add to the number of parameters. Hence, the memory and compute requirements are comparable to the baseline Transformer network. ", "We thank you for your insightful review. We clarify the questions below. \n\n“the paper states that with alpha_i=1 and kappa_i=1 the two attention mechanism are equivalent. The equations, however, tell a different story: the original MultiHead attention quickly aggregates all attention heads, while the proposed BranchedAttention adds another processing step, effectively adding depth to the model.”\nWe found that the equivalence statement is indeed untrue given the nonlinearity of the FFN layer and have removed it from our updated manuscript.. However, we emphasize that our modification does not add depth to the network. Each layer of the original Transformer network consists of three operations (ignoring the residual connections and layer normalization): computation of the attention head, concatenation of the heads and finally the FFN layer. Our proposed architecture is a replacement for the entire layer (consisting of all three operations) and not just the multi-head attention. In doing so, we also change the order of operations; unlike the Transformer which concatenates the heads and then projects them through a single FFN layer, we first use $M$ FFN layers, one for each of the heads, and then combine them. The FFN layers, in this case, will be of commensurately reduced sizes (the original dimension divided by the number of the branches). Owing to this, the original and proposed Transformer networks have the same depth and, barring the scalar (alpha, kappa) values, exactly the same number of trainable parameters. We apologize for the lack of clarity and have fixed our notation and added an explanation in our updated manuscript. Specifically, we have expressed and contrasted the original Transformer network in the notation of (5)--(7).\n\n“The numbers for the original transformer match the ones in Vaswani et al 2017, am I correct to assume that the authors did not rerun the tensor2tensor code and simply copied them from the paper?”\nThat is true. Having said that, we have experimented with the baseline on our (home-grown) implementation and found very similar metrics for the original Transformer as those reported by the authors. \n\n“Is all of the experimental setup the same as in Vaswani et al 2017? Are the results obtained using their tensor2tensor implementation, or are some hyperparameters different?”\nIt is the same as Vaswani et. al. We used the tensor2tensor code as foundation for our modified architecture and, other than the proposed branching mechanism, all else stayed the same. \n\n", "We thank you for your insightful review. We answer your questions below.\n\n“What is the performance of only using kappa? Only alpha? Neither? What happens if I train only of them? What happens if you remove the simplex constraints (i.e., don't have to sum to one, or can be negative)?”\nWe experimented with these changes and found that they lead to inferior performance. 
Here is our summary:\n+-----------------------------------------+---------------------------------------+\n| Model | Performance on Config (C) |\n+------------------------------------------+--------------------------------------+\n| Weighted Transformer | 24.8 |\n+------------------------------------------+--------------------------------------+\n| Only kappa, alpha=1 | 24.5 |\n+------------------------------------------+--------------------------------------+\n| Only alpha, kappa=1 | 23.9 |\n+------------------------------------------+--------------------------------------+\n| alpha=1, kappa=1 | 23.6 |\n+------------------------------------------+--------------------------------------+\n| No simplex constraints | 24.5 |\n+------------------------------------------+--------------------------------------+\n| Without freezing of weights | 24.7 |\n+------------------------------------------+--------------------------------------+\nWe expect similar results on other configurations given our previous experiments and have added this Table to our paper. \n\n“Why learn a global set of weights for the branch combiners...”\nWe learn the (alpha, kappa) for each layer individually and not for the entire network. \n\n“Are the FFN layer parameters shared across the different heads?”\nNo, each head has a separate set of parameters as in the case of the original Transformer network. However, our FFN weight matrices are commensurately smaller in size. \n\n“At the top of page 4, it is said \"all bounds are respected during each training step by projection...”\nWe employ projected gradient descent; we experimented with using a softmax weighting but found it be slower, more noisy in its updates, performed worse by a significant margin over a projected score. \n\n“In Figure 3, it looks like the kappa and alpha values are still changing significantly before they are frozen...”\nTraining the original Transformer network for fewer iterations led to inferior performance in all cases. On the other hand, training our Weighted Transformer beyond the described threshold did not help our BLEU scores. Both observations were also buttressed by our training curves. \n\n“What are the Transformer variants A, B, and C?”\nThey are Weighted Transformers of three different configurations (which we describe in Table 2). We have expanded upon this notation in the text in our updated manuscript.\n", "We thank you for your review and your assessment. We answer your questions below. \n\n“...it would be interesting to see how important the concatenation weight and the addition weight ...”\nWe experimented with these and other changes and found that they lead to inferior performance. 
Here is our summary:\n\n+-----------------------------------------+---------------------------------------+\n| Model | Performance on Config (C) |\n+------------------------------------------+--------------------------------------+\n| Weighted Transformer | 24.8 |\n+------------------------------------------+--------------------------------------+\n| Only kappa, alpha=1 | 24.5 |\n+------------------------------------------+--------------------------------------+\n| Only alpha, kappa=1 | 23.9 |\n+------------------------------------------+--------------------------------------+\n| alpha=1, kappa=1 | 23.6 |\n+------------------------------------------+--------------------------------------+\n| No simplex constraints | 24.5 |\n+------------------------------------------+--------------------------------------+\n| Without freezing of weights | 24.7 |\n+------------------------------------------+--------------------------------------+\nWe expect similar results on other configurations given our previous experiments and have added this Table to our paper. \n\n“...language modeling with RNNs has been done for a very long time, not just since 2017 which are the ones you list; similar for speech recognition etc...”\nWe agree and have fixed said references in our updated manuscript.", "Based on our understanding of Weighted Transformer, the FFN is applied separately to each of the heads before they are summed up. As a result, our implementation of Weighted Transformer requires M=8 times more memory and time to compute this part. (In practice our implementation of the Weighted Transformer performs 1.5 steps/s vs the standard Transformer’s 2.5 steps/s on a P100).\n\nIs our understanding correct? And if so, should one interpret the claim that the Weighted Transformer “converges 15-40% faster” in terms of *training steps* and not wall clock time?\n\nFurthermore, because the FFN in Branched Attention is applied separately to each branch and the outputs are then summed (eqn 7), i.e. sum_i alpha_i (FFN(\\bar{head}_i)) as opposed to FFN(sum_i (head_i)) for the standard transformer, we don’t see how setting kappas and alphas to 1 reduces to the multi-head attention of the Transformer (due to the non-linearity in the FFN). Can you please clarify?", "The Masked Attention and the Linear modules are implemented as mentioned in the original Transformer paper \"Attention Is All You Need\" by Vaswani et al. (2017). For the original implementation by Vaswani et al., please visit this github link: https://github.com/tensorflow/tensor2tensor/blob/75b75f2e2281101b9b3637e14ef519afd6a11b68/tensor2tensor/layers/common_attention.py", "It is not clear to me how the multi-branch decoder layer is constructed. Specific equations for this layer are not present in the paper. It is clear from figure 1 how equations 5-7 are implemented in the encoder and the decoder side (Dot-Product Attention + Linear * \\kappa), but it is unclear how exactly the Masked Attention + Linear modules are implemented (compared to the original Transformer model; is it the same or different, and if so how?). Can you please provide the equations for these modules as well?" ]
[ 9, 4, 6, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, 5, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_SkYMnLxRW", "iclr_2018_SkYMnLxRW", "iclr_2018_SkYMnLxRW", "HkFhFG1-f", "Hy_tscFgf", "SyIxIgcxf", "SJQVdQ5lG", "iclr_2018_SkYMnLxRW", "HJ0Fy8egG", "iclr_2018_SkYMnLxRW" ]
iclr_2018_rJBiunlAW
Training RNNs as Fast as CNNs
Common recurrent neural network architectures scale poorly due to the intrinsic difficulty in parallelizing their state computations. In this work, we propose the Simple Recurrent Unit (SRU) architecture, a recurrent unit that simplifies the computation and exposes more parallelism. In SRU, the majority of computation for each step is independent of the recurrence and can be easily parallelized. SRU is as fast as a convolutional layer and 5-10x faster than an optimized LSTM implementation. We study SRUs on a wide range of applications, including classification, question answering, language modeling, translation and speech recognition. Our experiments demonstrate the effectiveness of SRU and the trade-off it enables between speed and performance.
rejected-papers
The paper presents Simple Recurrent Unit, which is characterised by the lack of state-to-gates connections as used in conventional recurrent architectures. This allows for efficient implementation, and leads to results competitive with the recurrent baselines, as shown on several benchmarks. The submission lacks novelty, as the proposed method is essentially a special case of Quasi-RNN [Bradbury et al.], published at ICLR 2017. The comparison in Appendix A confirms that, as well as similar results of SRU and Quasi-RNN in Figures 4 and 5. Quasi-RNN has already been demonstrated to be amenable to efficient implementation and perform on a par with the recurrent baselines, so this submission doesn’t add much to that.
train
[ "BJsMKkGgf", "SyjjOZ5gM", "HyMadv_bz", "SyHeKNHmG", "B10wVlpGM", "SyT4TchzG", "Bkavi5nGM", "rycUihjGz", "HJ99HQzMf", "ryHvm3qZG", "BJ_3Gnc-M", "BkTOG25bf", "Hya4QnxWz", "SyAqQBqyz", "H10L7B91G", "BJABIzckG", "r1XmNCKkz", "HkeDMwt1f", "r1Yz8IYkG", "BJR39j4yM", "B1HgFDfJf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "public", "author", "author", "author", "public", "author", "author", "public", "official_reviewer", "author", "public", "author", "public" ]
[ "This work presents the Simple Recurrent Unit architecture which allows more parallelism than the LSTM architecture while maintaining high performance.\n\nSignificance, Quality and clarity:\nThe idea is well motivated: Faster training is important for rapid experimentation, and altering the RNN cell so it can be paralleled makes sense. \nThe idea is well explained and the experiments convince that the new architecture is indeed much faster yet performs very well.\n\nA few constructive comments:\n- The experiment’s tables alternate between “time” and “speed”, It will be good to just have one of them.\n- Table 4 has time/epoch yet only time is stated", "The authors introduce SRU, the Simple Recurrent Unit that can be used as a substitute for LSTM or GRU cells in RNNs. SRU is much more parallel than the standard LSTM or GRU, so it trains much faster: almost as fast as a convolutional layer with properly optimized CUDA code. Authors perform experiments on numerous tasks showing that SRU performs on par with LSTMs, but the baselines for these tasks are a little problematic (see below).\n\nOn the positive side, the paper is very clear and well-written, the SRU is a superbly elegant architecture with a fair bit of originality in its structure, and the results show that it could be a significant contribution to the field as it can probably replace LSTMs in most cases but yield fast training. On the negative side, the authors present the results without fully referencing and acknowledging state-of-the-art. Some of this has been pointed out in the comments below already. As another example: Table 5 that presents results for English-German WMT translation only compares to OpenNMT setups with maximum BLEU about 21. But already a long time ago Wu et. al. presented LSTMs reaching 25 BLEU and current SOTA is above 28 with training time much faster than those early models (https://arxiv.org/abs/1706.03762). While the latest are non-RNN architectures, a table like Table 5 should include them too, for a fair presentation. In conclusion: the authors seem to avoid discussing the problem that current non-RNN architectures could be both faster and yield better results on some of the studied problems. That's bad presentation of related work and should be improved in the next versions (at which point this reviewer is willing to revise the score). But in all cases, this is a significant contribution to deep learning and deserves acceptance.\n\nUpdate: the revised version of the paper addresses all my concerns and the comments show new evidence of potential applications, so I'm increasing my score.", "The authors propose to drop the recurrent state-to-gates connections from RNNs to speed up the model. The recurrent connections however are core to an RNN. Without them, the RNN defaults simply to a CNN with gated incremental pooling. This results in a somewhat unfortunate naming (simple *recurrent* unit), but most importantly makes a comparison with autoregressive sequence CNNs [ Bytenet (Kalchbrenner et al 2016), Conv Seq2Seq (Dauphin et al, 2017) ] crucial in order to show that gated incremental pooling is beneficial over a simple CNN architecture baseline. \n\nIn essence, the paper shows that autoregressive CNNs with gated incremental pooling perform comparably to RNNs on a number of tasks while being faster to compute. Since it is already extensively known that autoregressive CNNs and attentional models can achieve this, the *CNN* part of the paper cannot be counted as a novel contribution. 
What is left is the gated incremental pooling operation; but to show that this operation is beneficial when added to autoregressive CNNs, a thorough comparison with an autoregressive CNN baseline is necessary.\n\nPros:\n- Fairly well presented\n- Wide range of experiments, despite underwhelming absolute results\n\nCons:\n- Quasi-RNNs are almost identical and already have results on small-scale tasks.\n- Slightly unfortunate naming that does not account for autoregressive CNNs\n- Lack of comparison with autoregressive CNN baselines, which signals a major conceptual error in the paper.\n- I would suggest to focus on a small set of tasks and show that the model achieves very good or SOTA performance on them, instead of focussing on many tasks with just relative improvements over the RNN baseline.\n\nI recommend showing exhaustively and experimentally that gated incremental pooling can be helpful for autoregressive CNNs on sequence tasks (MT, LM and ASR). I will adjust my score accordingly if the experiments are presented.\n\n", "The latest revision contains fixes to the tables and unifies the measurements used. Thanks for the suggestion", "Thank you for the new version of the paper. It looks much better, and I misunderstood the comments about Transformer. Indeed, combining it with SRUs could bring the best of both worlds and improve results even more. I have no more objections to accepting this work and I see its big potential, adjusting my review.", "We updated the paper to include recent state-of-the-art results for the QA and translation tasks to avoid confusion about how the results should be interpreted. We thank AnonReviewer3 for suggesting this. ", "Hi,\n\nSorry for the delayed revision. The state-of-the-art results have been included in the tables for both machine translation and reading comprehension tasks. We hope the results are now better presented. \n\nPlease let use know if other related work should be included. We are happy to address additional comments.\n\nAlso, we didn't mean that \"Transformer may not be needed with SRUs\". As discussed in the introduction of the Transformer paper, RNN is discarded in Transformer architecture due to the difficulty to parallelize recurrent computation. Thus, it is perhaps possible to \"achieve the best of both worlds\" by incorporating SRU into Transformer (e.g. substituting the FFN sub-unit). \n\n", "I am not sure how to interpret the comment about the Transformer architecture. There is a table with results in your paper that are far below SOTA and it doesn't even mention this -- it looks like clearly misleading presentation, and with your comment it starts looking like it's misleading on purpose. Thus I'm lowering my score until the presentation is improved. In particular, your results are below 21 BLEU which is very far apart from the 28 BLEU of the Transformer -- the suggestion you make in the comment (that architectures like Transformer may not be needed with SRUs) seems to be far from conclusive at this point. Please present your work fairly and compare to existing SOTA -- it's a very good work, but the presentation is misleading.", "We are students at McGill University and were reviewing your paper, here are some of our results.\n\nTo reproduce these results, we created a Google Cloud Instance with similar hardware specifications. The authors performed their experiments on a desktop machine with a single NVIDIA GeForce GTX 1070 GPU, Intel Core i7-7700K Processor, using CUDA 8 and cuDNN 6021. 
Our cloud instance runs on a Haswell-based Intel x86_64 and uses NVIDIA's Tesla K80 GPU. We use the source code provided by the authors with minimal changes in our environment (https://github.com/taolei87/sru) to successfully reproduce the classification, question answering, and language modeling tasks. Our software environment consisted of the following packages: Ubuntu 16.04, CUDA 9.0.176, CuDNN 6.0, PyTorch 0.2.0.post4, Pynvrtc 8.0, and CuPY 4.0.0b1. Python 2.7 was used for all models except question answering, which required Python 3 due to the DrQA dependency. \n\nFor the classification task, we were able to reproduce their results on all six datasets. We trained SRU- and LSTM-based RNNs and a CNN on each dataset 5 times for 100 epochs each. In all instances, the SRU outperformed CNN and LSTM-based RNNs in terms of accuracy and overall training time. We observed similar training times for all tests except MPQA and SST, where we observed wall clock training times nearly twice as long as reported. This could be explained by using 4 cores and a shared cloud GPU, where the authors had an 8-core CPU and dedicated GPU. \n\nFor the question answering model, we used an open source reimplementation of the Document Reader model https://github.com/hitvoice/DrQA with the suggested dropout rates. Despite our best efforts, we were not able to achieve the authors' reported baseline accuracy. We obtained an F1 score of 75.4 and 66% exact match for the LSTM-based RNN, which is 3% lower than reported. However, when we trained the SRU model, we were able to obtain results closer to the authors': 77.8 F1 score and 67.9% exact match. This is within 1.5% of the reported results for SRU-based training. Moreover, we observed 71% faster overall training when compared to the LSTM-based model, which aligns with the authors' observed 69% increase in their published experiment.\n\nFor the language model, the published code ran essentially unmodified, allowing us to reproduce the paper's experimental results to within approximately 1% error of the reported perplexity and wall clock runtime, for both cuDNN LSTM and SRU configurations, confirming state-of-the-art model performance for the setup described in the paper. Our final performance on the author-recommended hyperparameter settings (6 layers deep, 910 units wide) achieved test perplexity of 60.66 and validation perplexity of 64.17 after 300 training epochs. \n\nThe speech recognition model was the most challenging to set up and build. Due to unforeseen difficulty in replicating the software environment, we were unable to reproduce the experiment as described. The authors use a forked version of CNTK with custom modifications to compare the bidirectional SRU to a latency-controlled bidirectional LSTM. Despite the authors' timely assistance, we were unable to build the fork as described. Our efforts to reproduce the speech experiment are documented here: https://github.com/taolei87/sru/issues/36.\n\nOverall, we feel the SRU architecture offers important advantages for parallelism and scaling, facilitating the training of recurrent neural networks on larger datasets with commodity hardware. It achieves higher accuracy in the same number of epochs as traditional LSTM-based RNNs, using less wall clock time, and demonstrates that RNN training and inference need not be as sequential as previously believed. This suggests further research into parallelizable architectures may unlock similar gains in speedup and performance. 
For a detailed summary of our experimental results, our full report is available at the following URL: https://github.com/msalihs/sru/blob/master/comp551_reproducibility_project_group_RMB.pdf\n", "Thank you for the comments and feedback. \n\nWe agree that having both “time” and “speed” in the tables is confusing. “Time/epoch” in Table 4 is misleading. We will use “Time per epoch” or simply “Time” instead. \n\nWe will address your feedback in the next version. Thanks!\n", "Thank you for the comments and feedback.\n\n== Paper revision ==\nWe will include missing SOTA results and related work for translation, as pointed out by R3, as we already included for language modeling and speech. We will update the table in the next version.\n\n== Clarification on our experiments ==\nThe goal of our experiments is not to outperform previous SOTA. Instead, the experiments were designed to study SRU's effectiveness on a broad set of realistic applications via fair comparison. Therefore, we emphasized using existing open source implementations for MT and QA. Different implementations (network architectures, data processing etc.) have a non-trivial impact on the final numbers. To the best of our effort, we aimed to avoid this influencing our experiments. Therefore, in the current version, Tables 1, 3, and 5 only compare the results of using LSTM / SRU / Conv2d as building blocks in existing models such as DrQA and OpenNMT. We definitely agree that including SOTA models in these tables will improve our presentation. Thank you for the suggestion.\n\n== Non-RNN architectures ==\nThank you for the comment. We will include discussions of non-RNN architectures. Our contribution is orthogonal to recent architectures, such as Transformer (https://arxiv.org/abs/1706.03762), which is a novel combination of multi-head attention and feed-forward networks. Part of the motivation behind the Transformer architecture is the computational bottleneck of recurrent architectures. With SRU this is no longer the case. In fact, we observe in the translation model that only 4 minutes are spent per SRU layer, and 96 minutes are spent in the attention+softmax computation. An interesting direction for future work is combining the SRU and Transformer architectures to gain the benefits of both. While this is an important problem, it is beyond the scope of our experiments. ", "Thank you for the comments and feedback. We respond to the concerns and questions raised in three sections. \n\n== Recurrent or convolution ==\nWe wish to clarify certain aspects pertaining to the distinction between recurrent and convolution architectures as we use it in the paper:\n\n(1) SRU only applies simple matrix multiplications (Wx_t) for each x_t. This is not a typical convolution operation that is applied over k consecutive tokens. While matrix multiplication can be considered a convolution operation with k=1, this would entail that feed-forward networks (FFN) are also convolutional networks. More importantly, with k=1 there is no convolution over the words, which is the key aim of CNNs for text processing, for example to reason about n-gram patterns. Therefore, while notationally correct, we consider the k=1 case to empty the term convolution of the meaning it is intended to convey, and do not use it in this way in the paper. That said, we discuss the relationship of these two types of computations in Appendix A, and will be happy to clarify it further in the body of the paper. 
\n\n(2) This being said, the effectiveness of SRU comes from the recurrent computation of its internal state c[t] (rather than applying conv operations). This internal state computation (referred to in the review as gated incremental pooling) is commonly used as the key component in gated RNN variants, including LSTM, GRU, RAN, MGU, etc. \n\n(3) Beyond the choice of terms, and even if we were to consider SRU as a special type of CNN (with k=1), to the best of our knowledge, our study is the first to demonstrate that k=1 suffices to work effectively across a range of NLP and speech tasks. This emphasis on efficiency goes beyond prior work (e.g. Bytenet, ConvS2S and Quasi-RNN), where conv operations of k=3,4,etc are used throughout the experiments. This allows us to simplify architecture tuning and significantly speeds up the network, which is the main focus of this work. As shown in Figure 2, SRU operates faster than a single conv operation of k=3.\n\n(4) Quasi-RNN, T-RNN and T-LSTM (https://arxiv.org/pdf/1602.02218.pdf) have also used “RNN” in naming, despite defaulting to CNN with gated incremental pooling. Broadly speaking, we consider any unit that successively updates state c[t] based on current input x[t] and the previous vector c[t-1] (as a function c[t]=f(x[t], c[t-1])) as a recurrent unit. We will clarify this better in the paper. \n\n== Quasi-RNN and scale of tasks ==\nWe discuss the comparison to Quasi-RNN in Appendix A, and emphasize the critical differences. In our experiments, the training time of a single run on machine translation takes about 2 days, and 4 days on speech on a Titan X GPU.\n\n== Wide experiments vs deep experiments ==\nOur experiments are aimed to study SRU’s effectiveness on a broad set of realistic applications via fair comparison. We discuss this more in our response to Reviewer 3. \n\nOur work focuses on practical simplifications, optimizations, and the applicability of SRU to a wide range of realistic tasks. Although we do not perform an exhaustive hyper-parameter / architecture tuning on each task given space and time constraints, we do see an improvement over deep CNNs on speech recognition. Similar results have been reported in prior work such as RCNN (Lei et al; 15,16), KNN (Lei et al; 17) and Quasi-RNN (Bradbury et al; 17), demonstrating that gated pooling is helpful for CNN-type models on tasks such as classification, retrieval, LM etc.\n", "Nice work! But isn't the name simple recurrent unit (SRU) a bit similar to the classic name \"simple recurrent network\" which often refers to both Jordan & Elman networks.\nhttps://en.wikipedia.org/wiki/Recurrent_neural_network", "Thank you! We will update it.", "In general we found that increasing depth is more helpful than increasing width as long as the width is in a reasonable size. I think this is because we drop \"the dependency\" between \"h\" and this context needs to be recovered by adding more layers. But since SWB training takes about 4 days, we didn't try all the configuration. That's why we didn't draw a conclusion on depth vs. width. ", "Thanks for a quick response.\n\nOk, I see now. Maybe you need to describe the setup more thoroughly and state which work you are basing your experiments on.", "In Tables 6 / 9: \nIt is not clear why SRU model capacity was increased in depth (to 12 layers) and not in width, which would give an even faster model I would think. 
As you mention for LSTM 5 layers appear to be optimal, so it is surprising that 12 were needed for SRU.", "Thanks for your comments.\n\nSorry for the confusing, we didn't use RNN-LM here (only N-gram). So the number we should compare with is 10.0 in Table 8. I think JHU recently have better number using the same language model with lattice-free MMI training. We will try this new loss later. But similar to RNN-LM, this is orthogonal to this paper, we are trying to compare with LSTM only for acoustic modeling.\n\nWe haven't try it on 2000hrs. (1) To my understanding, there still lots of institute use 300hrs setup especially at school. If you check last year ICASSP, there are still many paper use 300hrs set, e.g. http://danielpovey.com/files/2017_spl_tdnnlstm.pdf. (2) In my experiences, 20000hrs vs. 300hrs do make a difference, especially for end-to-end system. But 2000hrs set and 300hrs usually don't have significant difference in term of testing the trend of the model quality (especially for HMM-NN hybrid system, model A > model B for 300hrs usually also hold for the full fisher set). Also, 300hrs usually take 4 days on a single GPU which is a reasonable setup for reproduce results.\n", "You cannot claim state of the art on Switchboard. https://arxiv.org/pdf/1610.05256.pdf showed 7.7% WER (Table 8, first row). Unless you are using no LM here (you need to describe LM you used), you don't have SOTA. \n\nSecond, 300h training set is just not very interesting for current research on ASR, therefore not many paper publish results on it. Have you run your model on 2000h set? ", "Thank you for the comment. \n\nThe identity activation (use_tanh=0) and non-zero highway bias are applied only on language modeling following a few of recent papers such as \n - language modeling via gated convolutional network: https://arxiv.org/pdf/1612.08083.pdf\n - recurrent highway network: https://arxiv.org/abs/1607.03474\n\nWe expect the model to perform better on other tasks as well by initializing a non-zero highway bias, since it can help to balance gradient propagation and model complexity (non-linearity) from layer stacking. This is recommended in the original highway network paper (https://arxiv.org/abs/1505.00387). However, we choose to use zero highway bias on other tasks for simplicity. \n\nRegarding the choice of activation function:\n - this could be an empirical question since the best activation varies across tasks / datasets (Appendix A)\n - identity already works since the pre-activation state (i.e. c[t]) readily encapsulates sequence similarity computation. see the discussed related work (Lei et al 2017; section 2.1 & 2.2) https://arxiv.org/pdf/1705.09037.pdf\n\nThank you again for bringing up the questions.\n ", "If the original result (arxiv) was already pretty surprising, this result seems to be even better? It seems a solid 3x speed-up is expected, and it can train a crazy number of layers (10 layers in MT). \n\nIn the actual code on github, it says \"use_tanh=0\" and set highway bias to \"-3\". These intuitions are not explained in the paper. Can the author offer some understanding into them? It seems that identity is better than tanh in the appendix...but then again...some explanation?" ]
[ 7, 8, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 5, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_rJBiunlAW", "iclr_2018_rJBiunlAW", "iclr_2018_rJBiunlAW", "BJsMKkGgf", "Bkavi5nGM", "iclr_2018_rJBiunlAW", "rycUihjGz", "BJ_3Gnc-M", "iclr_2018_rJBiunlAW", "BJsMKkGgf", "SyjjOZ5gM", "HyMadv_bz", "iclr_2018_rJBiunlAW", "BJABIzckG", "r1XmNCKkz", "HkeDMwt1f", "iclr_2018_rJBiunlAW", "r1Yz8IYkG", "iclr_2018_rJBiunlAW", "B1HgFDfJf", "iclr_2018_rJBiunlAW" ]
iclr_2018_HJOQ7MgAW
Long Short-Term Memory as a Dynamically Computed Element-wise Weighted Sum
Long short-term memory networks (LSTMs) were introduced to combat vanishing gradients in simple recurrent neural networks (S-RNNs) by augmenting them with additive recurrent connections controlled by gates. We present an alternate view to explain the success of LSTMs: the gates themselves are powerful recurrent models that provide more representational power than previously appreciated. We do this by showing that the LSTM's gates can be decoupled from the embedded S-RNN, producing a restricted class of RNNs where the main recurrence computes an element-wise weighted sum of context-independent functions of the inputs. Experiments on a range of challenging NLP problems demonstrate that the simplified gate-based models work substantially better than S-RNNs, and often just as well as the original LSTMs, strongly suggesting that the gates are doing much more in practice than just alleviating vanishing gradients.
rejected-papers
The paper performs an ablation analysis on LSTM, showing that the gating component is the most important. There is little novelty in the analysis, and in its current form, its impact is rather limited.
train
[ "Bko-OSYgM", "HJ3eGCYlG", "Bk_Zgxcef", "SJ-VlLPZM", "ByUL_Swbz", "By5K36x-f", "SyaLnaeWG", "HyKV3TlZz", "HJ4IMQZxz", "BywjBIgxM", "Bk43StH0-", "SJeJ8KSAW", "rJuJg4dAW", "rJAIRiwR-", "HyIdLBH0W", "Bk_ZP7HRb" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "public", "public", "public" ]
[ "This paper proposes a simplified LSTM variants by removing the non-linearity of content item and output gate. It shows comparable results with standard LSTM.\n\nI believe this is a updated version of https://arxiv.org/abs/1705.07393 (Recurrent Additive Networks) with stronger experimental results. \n\nHowever, the formulation is very similar to \"[1] Semi-supervised Question Retrieval with Gated Convolutions\" 2016 by Lei, and \"Deriving Neural Architectures from Sequence and Graph Kernels\" which give theoretical view from string kernel about why this type of networks works. Both of the two paper don't have output gate and non-linearity of \"Wx_t\" and results on PTB also stronger than this paper. It also have some visualization about how the model decay the weights. Other AnonReviewer also point out some similar work. I won't repeat it here. In the paper, the author argued \"we propose and evaluate the minimal changes...\" but I think the these type of analysis also been covered by [1], Figure 5. \n\nOn the experimental side, to draw the conclusion, \"weighted sum\" is enough for LSTM. I think at least Machine Translation and other classification results should be added. I'm not very familiar with SQuAD dataset, but the results seems worse than \"Reading Wikipedia to answer open-domain questions\" Table 4 which seems use a vanilla LSTM setup. \n\nUpdate: the revised version of the paper addresses all my concerns about experiments. So I increased my score. \n", "This paper presents an analysis of LSTMS showing that they have a from where the memory cell contents at each step is a weighted combination of the “content update” values computed at each time step. The weightings are defined in terms of an exponential decay on each dimension at each time step (given by the forget gate), which lets the cell be computed sequentially in linear time rather than in the exhaustive quadratic time that would apparently be necessary for this definition. Second, the paper offers a simplification of LSTMs that compute the value by which the memory cell at each time step in terms of a deterministic function of the input rather than a function of the input and the current context. This reduced form of the LSTM is shown to perform comparably to “full” LSTMs.\n\nThe decomposition of the LSTM in terms of these weights is useful, and suggests new strategies for comparing existing quadratic time attention-based extensions to RNNs. The proposed model variations (which replaces the “content update” that has a recurrent network in terms of context-independent update) and their evaluations seem rather more arbitrary. First, there are two RNNs present in the LSTM- one controls the gates, one controls the content update. You get rid of one, not the other. You can make an argument for why the one that was ablated was “more interesting”, but really this is an obvious empirical question that should be addressed. The second problem of what tasks to evaluate on is a general problem with comparing RNNs. One non-language task (e.g., some RL agent with an LSTM, or learning to execute or something) and one synthetic task (copying or something) might be sensible. Although I don’t think this is the responsibility of this paper (although something that should be considered).\n\nFinally, there are many further simplifications of LSTMs that could have been explored in the literature: coupled input-forget gates (Greff et al, 2015), diagonal matrices for gates, GRUs. 
When proposing yet another simplification, some sense of how these different reductions compare would be useful, so I would recommend a comparison to those.\n\nNotes on clarity:\nBefore Eq 1 it’s hard to know what the antecedent of “which” is without reading ahead.\n\nFor componentwise multiplication, you have been using \\circ, but then for the iterated component wise product, \\prod is used. To be consistent, notation like \\odot and \\bigodot might be a bit clearer.\n\nThe discussion of dynamic programming: the dynamic program is also only available because the attention pattern is limited in a way that self attention is not. This might be worth mentioning.\n\nWhen presenting Eq 11, the definition of w_j^t elides a lot of complexity. Indeed, w_j^t is only ever implicitly defined in Eq 8, whereas things like the input and forget gates are defined multiple times in the text. Since w_j^t can be defined iteratively and recursively (as a dynamic program), it’s probably worth writing both out, for expository clarity.\n\nEq 11 might be clearer if you show that Eq 8 can also be rewritten in the same way, provided you make h_{t-1} an argument to output and content.\n\nTable 4 is unclear. In a language model, the figure looks like it is attending to the word that is being generated, which is clearly not what you want to convey since language models don’t condition on the word they are predicting. Presumably the strong diagonal attention is attending to the previous word when computing the representation to generate the subsequent word? In any case, this figure should be corrected to reflect this. This objection also concerns the right hand figure, and the semantics of the meaning of the upper vs lower triangles should be clarified in the caption (rather than just in the text).", "Summary: the paper proposes a new insight into LSTM in which the core is an element-wise weighted sum. The paper then argues that LSTM is redundant by keeping only input and forget gates to compute the weights. Experimental results show that the simplified versions work as well as the full LSTM. \n\n\nComment: I kinda like the idea and welcome this line of research. The paper is very well written and has nice visualisations demonstrating the weights. I have only one question:\n\nin the simplified versions, content(x_t) = Wx_t, which works very well (outperforming full LSTM). I was wondering if the problem is from the tanh activation function (eq 2). What if content(x_t) = W_1 . h_{t-1} + W_2 . x_t?", "Thank you for taking our response into consideration.\n\nWe did not have time to run machine translation experiments, but we will run them now and update as they arrive.\n\nRegarding DrQA vs BiDAF - there is no specific preference for one model over the other except for engineering and experiment-time overhead. We will also run DrQA experiments and report the results.\n\nAs for the hyperparameters - we did not modify *any* hyperparameters, so we cannot say anything about the behavior of this space with confidence.", "Thank you for your quick response! I'll take a closer look at the equations and the experimental part and I'll update the comments / score after that. Since the authors may need some time to add more results, I just leave early comments here. \n\nTwo major concerns:\n\n(1) Model side, I feel the take-away message is similar to the model I pointed out. The difference is that one links to the kernel method and one tries to simplify LSTM (or make LSTM more interpretable). 
This is why it's not too surprising to me that the \"Element-wise Weighted Sum\" is powerful. This is why I feel part of the conclusion is over-claimed, e.g. \"This work sheds light on the inner workings of the relatively opaque LSTM\", and \"This transparency enables a visualization of how the context affects\" (a similar graph can be seen in the paper I pointed out). But overall, it is a very useful analysis of LSTMs for NLP.\n\n(2) Experiments: I think there is no need to re-do all the variants for PTB. I also agree that the classification task is a poor test of an RNN's modeling ability in some sense. But is there any reason not to include machine translation results? \n\nSome questions:\nI'm not familiar with the SQuAD data. But it seems DrQA got better results than BiDAF. Any reason to prefer BiDAF over DrQA? \n\nDid the authors find any hyper-parameter differences between this simplified LSTM and a regular LSTM? Since it removes the non-linearity of the memory cell (f_t * c_t), does it need to truncate the cell output or use more aggressive clipping?", "Thank you for your comments.\n\nWe would like to draw the reviewer's attention to two major points:\n\n=Paper Contribution=\nThe goal of this paper is not to propose a model that outperforms LSTMs, but to understand how LSTMs work. The aptly-named \"LSTM - RNN\" is the minimal change from the original LSTM that allows us to test whether the weighted summing mechanism (as computed by LSTMs) is sufficiently powerful to facilitate NLP models. The main point of this paper is that the gating mechanism (which is able to compute element-wise weighted sums) is a powerful model, and that the embedded vanilla RNN is redundant - at least on the variety of tasks on which we evaluated it.\n\n=Controlled Experiments=\nBy controlling every variable (model, data, hyperparameters, etc.) and running head-to-head comparisons between LSTM and LSTM - RNN, we deduce whether the recurrence in the content layer is redundant. These experiments are vastly different from the ones in \"Deriving Neural Architectures from Sequence and Graph Kernels\", since comparing the PTB results of Lei et al with ours does not control for a wide variety of changes in the experiment. The results of Lei et al are significantly stronger due to a deviation in both model and hyperparameters from the original setting of Zaremba et al (2014), which we followed. Specifically, these deviations include:\n1) highway connections\n2) tying input and output embeddings\n3) variational dropout\n4) different dropout rates\n5) different number of layers\n6) different number of dimensions\n7) different initialization scheme\nAll of these factors have a massive effect on performance, vastly outweighing the performance difference between one recurrent architecture and another on benchmarks such as PTB. In particular, the combination of highway connections and additional layers accounts for about 20 perplexity points.\n\nThe reason we are so familiar with these details is that we reproduced Lei et al's result, and tried to decouple the core model from these other variables. However, when we ran their model in the original Zaremba et al setting, their model did exhibit a drop in performance with respect to LSTMs. We are happy to add an additional section to the paper that demonstrates how tampering with the gating mechanism can actually reduce performance in some cases. 
This will complement the current experiments, which deal with simplifying the content layer.\n\n=Other Comments=\n- You compare our results on SQuAD, which are based on the BiDAF model (Seo et al, 2017), to the results of DrQA (Chen et al, 2017). These are two different QA models, and the comparison is flawed in the same way that the comparison of our PTB result to Lei et al's is invalid.\n- We are happy to run on additional benchmarks if there is a good reason to suspect that the current 4 benchmarks do not provide sufficient coverage of interesting linguistic phenomena. Specifically, the classification tasks are a poor test of RNN's modeling power since in many cases the best methods are simple bag-of-words classifiers.", "Thank you for your comments. We will amend the paper to address all the clarity issues you pointed out.\n\nIndeed, the gates can also be seen as vanilla RNNs, but there is also evidence that removing the recurrent nature of the gates still does not cripple an LSTM (see, for example, QRNNs https://arxiv.org/pdf/1611.01576.pdf and SRUs https://arxiv.org/pdf/1709.02755.pdf). However, both QRNNs and SRUs add alternative mechanisms that are not present in the original LSTMs; QRNNs add multi-token convolutions, while SRUs add highway connections.\n\nThe experiments in our paper are the first to explicitly isolate the gating mechanism from the embedded vanilla RNN in the content layer. The model we describe as \"LSTM - RNN\" is the minimal change that allows us to test whether the weighted summing mechanism (as computed by LSTMs) is sufficiently powerful to facilitate NLP models.\n\nThis ablation is very different from the ones in (Greff et al, 2015) and GRUs, since those models retain the recurrent content layer. We also ran similar experiments based on these models (e.g. \"GRU - RNN\"), but removed them from the paper to keep the discussion focused and succinct. We are happy to expand the paper accordingly if necessary. (Also, note that both GRUs and gate-coupled LSTMs are computing weighted averages rather than weighted sums, which appears to yield slightly lower performance in some tasks from preliminary experiments.)\n\nRegarding evaluation on a non-NLP task: our goal was to get a better understanding of why LSTMs are so useful for NLP. We will make sure that our claims are hedged accordingly. We are also happy to run experiments on an additional non-language task if this is a major concern.", "Thank you for your comments.\n\nRegarding the question about tanh, we did run preliminary experiments with the suggested setting, in which the model is identical to an LSTM save for the absence of a tanh in the content layer. Performance was very similar to the original LSTM. We also experimented with a content layer of tanh(Wx), and found the results to be very similar to those of \"LSTM - RNN\" that were presented in the paper.\n", "Of course, we agree that hyperparameters can have a dramatic effect on performance, which is why we chose readily-available systems that were tuned for LSTMs as our baselines, and did not modify any of these hyperparameters when trying out our alternatives. 
This puts the simplified models under a \"devil's advocate\" benchmark, since they could potentially benefit from other hyperparameter settings, but are forced to run with hyperparameters that were tuned for LSTMs.\n\nFor PTB language modeling, we chose Zaremba et al's setting [1] because it is widely used as a baseline by other work (see, for example, Gal & Ghahramani [2] and Press & Wolf [3]) and their code is publicly available. We did not use Melis et al's setting [4] because:\nA) their code is not publicly available\nB) the complete set of optimal hyperparameters is not explicitly mentioned in the paper\nC) the paper has yet to pass peer review\nD) we already had experimental results for PTB before the paper appeared on arXiv\n\nAs for dependency parsing, the code of Dozat and Manning [5] runs by default on universal dependencies (UD), which is publicly available (unlike PTB, which is proprietary). UD is also a significantly larger benchmark than PTB, and it was originally annotated as dependencies. \nSee: https://github.com/tdozat/Parser-v1 under \"How do you run the model?\"\n\nWe note that, in addition to these two benchmarks, we also tested on a much larger language-modeling dataset (Google's Billion Word Benchmark [6]) and question answering (SQuAD [7]). Our results were consistent across all four benchmarks.\n\n[1] https://arxiv.org/pdf/1409.2329.pdf \n[2] https://arxiv.org/pdf/1512.05287.pdf \n[3] https://arxiv.org/pdf/1608.05859.pdf \n[4] https://arxiv.org/pdf/1707.05589.pdf \n[5] https://arxiv.org/pdf/1611.01734.pdf \n[6] https://arxiv.org/pdf/1602.02410.pdf \n[7] https://arxiv.org/pdf/1606.05250.pdf ", "Although I welcome this work, I'm afraid that the authors jumped to the conclusion \"that the content and output layers are redundant, and that the space of element-wise weighted sums is sufficiently powerful to compete with fully parameterized LSTMs\" too quickly.\n\nAll comparison against LSTMs (and of course, any other architectures) should be done carefully, especially when there is evidence that LSTMs weren't properly tuned. For instance, Melis et al. (https://arxiv.org/pdf/1707.05589.pdf) point out that LSTMs if tuned properly can outperform most state-of-the-art models. In Melis et al. LSTMs achieve 59.6 on PTB, whereas the authors' LSTMs reach 78.8, which is much much far behind. \n\nFor dependency parsing, I was wondering why the authors didn't do the comparison on Penn Treebank, which is more standard and reported by Dozat & Manning, 2016.", "Thank you for the reference! Your idea of using a function of the gates to compute importance scores is very relevant, and we will include a citation in future revisions. However, our focus is on the understanding and model alternatives that come from ablating different model internals (e.g. the S-RNN). We hope you will agree that this is very different from (but hopefully complementary to) your focus on rule extraction. Look forward to reading your paper more carefully soon!", "Thanks for the feedback! We are happy to add more citations, sorry for missing yours. We hope our work is relevant to a wide range of existing and future work on designing model variants. The fact that there is so much work shows how important it is to understand more precisely what LSTMs are computing in practice. ", "Thank you for the pointing out this simultaneous and related work! 
Section 2 is not the novel part of our paper; it is a background section that presents a review of LSTMs and some simple algebra to highlight a known expansion of the memory cell's equation. As pointed out below, our contribution is in the later sections, which highlight the understanding and model alternatives that come from ablating different LSTM model internals (e.g. the S-RNN). However, it is interesting to see different papers observing the same simple recurrence, and taking it in very different directions.", "It is surprising to see that another earlier submitted paper titled \"Dependent bidirectional recurrent neural network with super-long short-term memory\" has done almost the same work in its second section for LSTM analysis.", "I begrudgingly admit that I really like this paper! It describes the LSTM model in a new light, helping me to better understand the nuts and bolts of how it works. I think the author(s) truly understand the LSTM model.\n\nI say begrudgingly because I feel sad they did not cite my work on recurrent weighted averages (https://arxiv.org/abs/1703.01253) or this work on recurrent discounted attention units (https://arxiv.org/abs/1705.08480). That said, the field is moving fast and it is hard to both stay on top of the literature and to find space to cite every little paper.\n", "A paper from ICLR last year introduced your decomposition in equation (8) (their equation (9) looks the same), and interpreted the weights as importance scores, in a manner similar to your table 4. How does this characterization of LSTMs differ from your characterization of them as \"dynamically computed element-wise weighted sum\"?\n\nhttps://arxiv.org/abs/1702.02540" ]
[ 6, 5, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 5, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_HJOQ7MgAW", "iclr_2018_HJOQ7MgAW", "iclr_2018_HJOQ7MgAW", "ByUL_Swbz", "By5K36x-f", "Bko-OSYgM", "HJ3eGCYlG", "Bk_Zgxcef", "BywjBIgxM", "iclr_2018_HJOQ7MgAW", "Bk_ZP7HRb", "HyIdLBH0W", "rJAIRiwR-", "iclr_2018_HJOQ7MgAW", "iclr_2018_HJOQ7MgAW", "iclr_2018_HJOQ7MgAW" ]
iclr_2018_SkffVjUaW
Building effective deep neural networks one feature at a time
Successful training of convolutional neural networks is often associated with sufficiently deep architectures composed of high amounts of features. These networks typically rely on a variety of regularization and pruning techniques to converge to less redundant states. We introduce a novel bottom-up approach to expand representations in fixed-depth architectures. These architectures start from just a single feature per layer and greedily increase width of individual layers to attain effective representational capacities needed for a specific task. While network growth can rely on a family of metrics, we propose a computationally efficient version based on feature time evolution and demonstrate its potency in determining feature importance and a networks’ effective capacity. We demonstrate how automatically expanded architectures converge to similar topologies that benefit from lesser amount of parameters or improved accuracy and exhibit systematic correspondence in representational complexity with the specified task. In contrast to conventional design patterns with a typical monotonic increase in the amount of features with increased depth, we observe that CNNs perform better when there is more learnable parameters in intermediate, with falloffs to earlier and later layers.
rejected-papers
Regarding clarity, while the paper definitely needs work if it is to be resubmitted to an ML venue, different revisions would be appropriate for a physics audience. And given the above comment, any suggested changes are likely to be superfluous.
train
[ "SkvTjWqxG", "S1gFMVoeM", "BJWfJzTez", "rJhvF46Xf", "Bk2UTZgGM", "HyNZcZxfG", "S1Nnn6A-G", "rkdbjT0Zz", "rknxzzRAb", "BJ4OYp8Tb" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "public", "public", "public", "public", "author", "public" ]
[ "The authors propose an approach to dynamically adjust the feature map depth of a fully convolutional neural network. The work formulates a measure of self-resemblance, to determine when to stop increasing the feature dimensionality at each convolutional layer. The experimental section evaluates this method on MNIST, CIFAR-10/100 and a limited evaluation of ImageNet. Generally, I am a very big proponent of structure learning in neural networks. In particular, we have seen a tremendous boost in performance in going from feature engineering to feature learning, and thus can expect similar effects while learning architectures rather than manually designing them. One important work in this area is \"Self-informed neural network structure learning\" by Farley et al. that is missing from the citations. \nHowever, this work falls short of its promises.\n\n1. The title is misleading. There really isn't much discussion about the architecture of networks, but rather the dimensionality of the feature maps. These are very different concepts.\n2. Novelty of this work is also limited, as the authors acknowledge, that much of the motivation is borrowed from Hao et al., while only the expansion mechanism is now normalized to avoid rescaling issues and threshold tuning.\n3. The general approach lacks global context. All decisions about individual feature depths are made locally both temporally and spatially. In particular, expanding the feature depth at layer f at time t, may have non trivial effect on layer f-1 at time t + 1. In other words, there must be some global state-space manifold to help make decisions globally. This resembles classical dynamic programming paradigms. Local decisions aren't always globally optimal.\n4. Rather than making decision on per layer basis at each iteration, one should wait for the model to converge, and then determine what is useful and what is not.\n5. Finally, the results are NOT promising. In table 1, although the final error has reduced in most cases, it comes at the expense of increases capacity, in extreme cases as much as ~5x, and always at the increased training time, in the extreme case ~14x, An omitted citation of \"Going deeper with Convolution\" is an example, where a much smaller footprint leads to a higher performance, further underlying the importance of a smaller footprint network as stated in the abstract.\n\n", "This paper introduces a simple correlation-based metric to measure whether filters in neural networks are being used effectively, as a proxy for effective capacity. The authors then introduce a greedy algorithm that expands the different layers in a neural network until the metric indicates that additional features will end up not being used effectively.\n\nThe application of this algorithm is shown to lead to architectures that differ substantially from hand-designed models with the same number of layers: most of the parameters end up in intermediate layers, with fewer parameters in earlier and later layers. This indicates that common heuristics to divide capacity over the layers of a network are suboptimal, as they tend to put most parameters in later layers. It's also nice that simpler tasks yield smaller models (e.g. MNIST vs. CIFAR in figure 3).\n\nThe experimental section is comprehensive and the results are convincing. I especially appreciate the detailed analysis of the results (figure 3 is great). 
Although most experiments were conducted on the classic benchmark datasets of MNIST, CIFAR-10 and CIFAR-100, the paper also includes some promising preliminary results on ImageNet, which nicely demonstrates that the technique scales to more practical problems as well. That said, it would be nice to demonstrate that the algorithm also works for other tasks than image classification.\n\nI also like the alternative perspective compared to pruning approaches, which most research seems to have been focused on in the past. The observation that the cross-correlation of a weight vector with its initial values is a good measure for effective filter use seems obvious in retrospect, but hindsight is 20/20 and the fact is that apparently this hasn't been tried before. It is definitely surprising that a simple method like this ends up working this well.\n\nThe fact that all parameters are reinitialised whenever any layer width changes seems odd at first, but I think it is sufficiently justified. It would be nice to see some comparison experiments as well though, as the intuitive thing to do would be to just keep the existing weights as they are.\n\nOther remarks:\n\nFormula (2) seems needlessly complicated because of all the additional indices. Maybe removing some of those would make things easier to parse. It would also help to mention that it is basically just a normalised cross-correlation. This is mentioned two paragraphs down, but should probably be mentioned right before the formula is given instead.\n\npage 6, section 3.1: \"it requires convergent training of a huge architecture with lots of regularization before complexity can be introduced\", I guess this should be \"reduced\" instead of \"introduced\".", "This paper aims to address the deep learning architecture search problem via incremental addition and removal of channels in intermediate layers of the network. Experiments are carried out on small-scale datasets such as MNIST and CIFAR, as well as an exploratory run on ImageNet (AlexNet).\n\nOverall, I find the approach proposed in the paper interesting but a little bit thin in content. Essentially, one increases or decreases the number of features based on equation 2. It would be much valuable to see ablation studies to show the effectiveness of such criterion: for example, simple cases one can think of is to model (1) a data distribution of known rank, (2) simple MLP/CNN models to show the cross-layer relationships (e.g. sudden increase and decrease of the number of channels across layers will be penalized by c^l_{f^{l+1}, t}), etc.\n\nThe experimentation section uses small scale datasets and as a result, it is relatively unclear how the proposed approach will perform on real-world applications. One apparent shortcoming of such approach is that training takes much longer time, and the algorithm is not easily made parallel (the sgd steps limit the level of parallelization that can be carried out). As a result, I am not sure about the applicability of the proposed approach.", "We would like to briefly remark that there seems to have been some difficulty in posting the rebuttal as an \"official comment\" at the time. To clarify, the \"anonymous\" comments marked with \"rebuttal\" and the \"Comments for AnonReviewer3\" have been posted by this paper's authors and should be regarded as official comments. \n\nWe again thank the reviewers for their efforts and have uploaded a revised document improving upon suggested aspects wherever possible. 
To give a short summary we have:\n\n* included 2 suggested valuable references into related work\n* made a minor modification to the title by omitting the word \"architectures\" and instead simply writing \"neural networks\" as reviewer 1 has kindly noted that the word and concept of architectures seems to have different interpretations in the community and thus could be misleading in the title of our work. \n* added an additional appendix section discussing increase of networks' capacities beyond the reference (addressing reviewer 1). We provide an example with loss, training and validation curves to show the non-triviality of effective capacity when regularizers are present.\n* made minor modifications to the main body to further underline the novelty to the reader and avoid miss-conceptions about concepts being borrowed by \"Hao et. al.\" or other pruning papers that are not in the scope of the expansion framework presented in this work. (addressing reviewer 1)\n* simplified equation 2 with respect to the explicit indices of the norm and the spatial dimensions. We have furthermore made corresponding changes to the description of the equation to portray the cross-correlation concept earlier. This should improve readability and understanding of the equation. (addressing reviewer 3) \n* added a section in the appendix addressing the possibility and open questions of applying our proposed framework without the need for re-initialization. The section should further clarify why we have decided to not include a demo of such an experiment as we believe it would lead to potentially misleading results and interpretation. (addressing reviewer 3) \n* made minor modifications to wording and corrected some few typos. \n* rephrased a short part about the computational perspective of our approach to emphasize the approach's modularity and potential for parallelization with no limitations known to us beyond regular SGD optimization (addressing reviewer 2)\n\nUnfortunately we have not been able to include the request made by reviewer 2: \"data distribution of known rank and simple models to show cross-layer relationships\". We have thought long and hard about this statement and could not come to a conclusion of how to conduct such an experiment in a convincing manner. We believe that such experiments about cross-layer relationships are absolutely desirable, but still an open-challenge for deep learning in general and thus not immediate to our contribution. We have requested some clarification about the nature of such experiments and did not yet receive further explanation. Independent of the decision of acceptance of our work we would be extremely grateful if the reviewer could extend and clarify the review so that we can draw more value from it and include it in future work and improvements. \n\nAs a last remark we would like to again point out our concern with the very harsh lack of novelty statement made by reviewer 1. The reviewer seems to believe our mechanism is \"borrowed\" from Hao et al's paper, which is concerned _only_ with pruning of already _trained_ networks, and voices correspondingly harsh feedback about the value (or lack there-of) of our bottom-up expansion approach. We are particularly concerned with the one-sided nature of statements such as \"one should wait for the model to converge, and then determine what is useful and what is not.\". 
Independent of whether such a statement turns out to be true, we strongly believe that exploration of alternatives (one presented here) to always training networks to full convergence before making modifications is crucial and provides necessary insights beyond \"pushing benchmark numbers\". \n", "We welcome the additional reference to Farley et al’s work. We went through it carefully and believe that the work contains some great ideas. We would also like to point out that the scope of “Self-informed neural network structure learning” is different and in fact orthogonal to the work presented here. \n\nOur work is complementary in a sense that it could be used as a precursor. Farley et al. show how to adapt/transfer already well-performing trained networks by doing capacity increases (e.g. with a large ImageNet trained GoogLeNet), whereas our work tackles the challenge of coming up with suitable capacity and feature spaces of such a network in the first place. In our understanding, Farley et al’s work does not seem to focus on the question of whether the underlying trained neural network’s capacity is appropriate in the first place and relies on this factor as given. \nIn this sense, our proposed method is valuable in construction of the initial feature space (from very small to larger more adequate) capacity on a task, and the method suggested by Farley could offer incremental capacity addition on top of the converged architecture when moving to novel data. We had thus initially not cited this work, but will include a reference to Farley et al in the related work section as a valuable orthogonal idea.\n", "Thank you for taking the time to read our work and write this review. We share the view that automated neural network design holds large promise. Concerning the five points we are a little dismayed by the statements.\n\n1.) In our opinion the word “architecture” doesn’t have a rigid definition and can span a variety of concepts. We have chosen the title because we investigate different neural networks and notice common patterns in formation of feature space dimensionality. We think that the abstract makes the scope of the paper quite clear. If it is allowed, we can imagine omitting the word “architecture” in the title. We believe this should clear the confusion.\n\n2.) >“lack of novelty” and the statement that our work is largely “borrowed”.\n While Hao et al. provided inspiration, there are several crucial differences to “Pruning Filters for Efficient Convnets”:\n \n-Hao et al ONLY talks about pruning already trained NNs. Our metric follows in spirit by observing entire filters instead of individual weight values. But while Hao et al base pruning on filter magnitudes, we look at the evolution over time in a normalized fashion. Due to this change we can move to a BOTTOM-UP expansion approach instead of pruning. This is fundamentally different from any pruning paper. We do NOT present this work as a technique for pruning at all.\n\n-“ while only the expansion mechanism is now normalized to avoid re-scaling issues and threshold tuning.”\nThe expansion mechanism itself is novel and only works due to the added idea of normalization. To the best of our knowledge this has not been proposed in previous works. Works in the spirit of Hao et al. take top-down approaches where it is always required to train a neural network to full convergence first. A network’s feature dimensionality had to first be picked through large scale empirical experimentation and human intuition before pruning. 
In contrast, our work incrementally adds capacity starting from just one feature per layer. \n\n-We empirically observe alternate feature composition in comparison with the common rule of thumb for NN design of adding features towards deeper layers. We speculate that this could play a role in future NN design.\n\n3.) >“expanding the feature depth at layer f at time t, may have non trivial effect on layer f-1 at time t + 1”. \nWe agree that change in number of features in any layer has non trivial effect on the other layers. This is the primary reason why we re-initialize features every time a feature is added to avoid the introduction of non-trivial perturbations.\n\n>“Local decisions aren't always globally optimal.”\nTemporal evolution of weights is very much dependent on the minimization of the global cost. In a layer that already has more than required number of features some of the features will not receive any or minor update from the SGD step. Based on our metric no further addition of features will be required. Since it is a greedy approach, we cannot guarantee global optimality, but under given regularization constraints, our approach seems to find a good solution without loss or even improvement of generalization.\n\n4.) In our opinion pruning is a good approach for parameter reduction in models. In the context of moving towards automation of network building, one has to still decide on what network size to train to convergence before pruning. Identification of suitable feature dimensionalities for unknown datasets by itself is a difficult and demanding task. To an extent, our expansion approach aims to overcome this limitation as adequate feature dimensionalities are approached in a bottom-up fashion.\n\n5.) > Model complexity: Our expansion mechanism operates on the basis of temporal evolution of weights which depends on the ability of the model to push for zero training error under regularization constraints. In some cases, e.g. 5x increase in parameters, the corresponding original models underfit on the training data with this particular hyperparameter and regularizer configuration. It is understandable that our approach adds parameters to build a model which adequately fits the training data. Arguably a model that underfits on training data (whether due to dropout, loss function regularization terms, batch normalization etc.) will not be able to generalize well either. We will add an appropriate example with loss, train and validation curves to improve the readers understanding.\n\n> Training time: We believe it is unfair to compare the time of our approach against original models. One should recognize that authors of the original models arrive at those architectures after rigorous experimental validation of many feature configurations which all together takes lot more time. Our approach on the other hand, given the depth of network, starts with one feature per layer and automatically chooses suitable feature dimensionality in one go.\n", "Thank you very much for taking the time to write this review.\n\n>“That said, it would be nice to demonstrate that the algorithm also works for other tasks than image classification.“\n\nThank you very much for the pointer. We are planning on and will make sure to add experiments on other data types in the future.\n\n> “The fact that all parameters are re-initialised whenever any layer width changes seems odd at first, but I think it is sufficiently justified. 
It would be nice to see some comparison experiments as well though, as the intuitive thing to do would be to just keep the existing weights as they are.”\n\nWe think that adding these experiments should only be hinted at and largely be postponed to a later version together with a more rigorous analysis. The reasoning here is similar to what we have stated in the outlook as we believe that it is necessary to do a more profound analysis of initialization techniques and the accompanied effect on convergence behavior. Given that the open questions here are of untrivial nature, we were hesitant to simply include some experiments here. \nWe are going to outline some of the concrete questions about initialization (re-initialization) more thoroughly in the future work section to give the reader a better understanding of the challenges and possibilities when moving away from re-initialization. \n\n>“Formula (2) seems needlessly complicated because of all the additional indices”\n\nThank you for the suggestion, we do agree that there can be less amount of indices. We chose to explicitly write down all the indices to avoid any ambiguity. We agree that we can simplify the spatial indices and only make the incoming and outcoming feature dimensionality explicit. We will update this. We will also make sure to mention that our equation is basically normalized cross-correlation right next to the formula to improve understanding as well.\n", "Thank you very much for the review and suggestions on how to improve our work. We would like to request the reviewer for some further clarification which will help us in the improvements.\n\n> “It would be much valuable to see ablation studies to show the effectiveness of such criterion: for example, simple cases one can think of is to model (1) a data distribution of known rank, (2) simple MLP/CNN models to show the cross-layer relationships (e.g. sudden increase and decrease of the number of channels across layers will be penalized by c^l_{f^{l+1}, t}), etc.”\n\nWe propose our approach as a greedy expansion method to construct a network’s feature space such that it can fit underlying data under some regularization constraints. Given a fixed amount of layers, we start with one feature per layer and grow the capacity until the network is capable of adequately fitting the training data.\nWith respect to comment (1)(“a data distribution of known rank”), we agree that suggested analysis will be very important and further theoretical analysis will be valuable and is necessary.\nWe also believe that comment (2)(“show cross-layer relationships”) addressing understanding of cross layer relationships in neural networks and analysis of multi-layer non-linear networks’ feature spaces can provide further insights. \n\nFollowing the reviewer’s suggestion, we can imagine conducting some toy dataset experiment of the following type: \nTake data distributions of increasing rank. Monitor and analyse the relationship to the capacity that our expansion algorithm allocates depending on rank.\nHowever it is unclear to us how such an analysis will provide more rigorous insights into the cross-layer relationships or how increase of distribution rank maps to a multi-layer non-linear neural network (even if we talk about a multi hidden layer MLP), particularly under regularization and SGD sampling. 
Unless we conduct such an experiment for a very shallow, linear MLP, to the best of our knowledge this will result in purely empirical insights on whether the capacity allocated by our algorithm scales (in a similar fashion as observed when moving from MNIST to CIFAR10 to CIFAR100). We would be grateful if the reviewer could further clarify his suggestion. \n \nIt is our view that the novel contributions in this work are 1) the expansion framework for network building itself, 2) conducted experiments and 3) the idea to use a normalized cross-correlation metric.\n\n>“The experimentation section uses small scale datasets “\n\nWe included a few initial experiments on the large scale ImageNet dataset in the initial version submitted for the review and hope to add more in the future. We would kindly ask the reviewer to consider that running experiments on ImageNet using very large architectures like ResNets and its corresponding hardware demands is a resource challenge for many. \n\n>“One apparent shortcoming of such approach is that training takes much longer time.”\n\nIt is true that our approach takes longer compared to the pure training time of corresponding original models. However, we believe that one should also take into account all the time spent by the authors of original models on validation experiments (grid-search, random-search etc.) used to find those models. In general in this paper we present that our method works consistently for the datasets tested so far.\nIf we imagine an encounter with a new (vision) dataset of unknown origin and the task to find a suitable (convolutional) neural network. Our method can provide substantial benefits in exploring architecture options by not having to choose feature dimensionality by hand and e.g. concentrating on amount of layers instead. The alternatives are all very time-consuming if the user doesn’t already have a very good prior on task complexity (like on the common benchmark datasets) and often includes training of networks that initially completely over- or underfit before determining suitable upper or lower bounds on model complexity. \n\n>“the algorithm is not easily made parallel (the sgd steps limit the level of parallelization that can be carried out)”\n\nTo summarize our expansion approach, we start with a network consisting of one feature per layer and begin training. Based on the temporal evolution metric proposed, features are added at each layer and training is re-initialized, very much like training a new network with increased features per layer. This process repeats until no more features are added at each layer and the final network is trained to convergence. \nIf we now compare this to traditional SGD (and its variants) with a fixed model that means that we do not interfere with the optimization, other than making the decision of whether to expand after a SGD step is taken. \n\nWe would greatly appreciate if you could further elaborate on this point as it isn’t clear to us how SGD steps are limiting the level of parallelization in our approach in contrast to “conventional” SGD. \n", "Thank you for the pointer to the ICLR 2017 paper. We were presently unaware of this paper, but after taking a brief look identified it as a relevant reference. \n\nWe will go through it more thoroughly and then add it where appropriate in the related work section. 
\n\nBest,", "I think this paper from ICLR 2017 may be relevant to your work, and is probably worth adding to your related work section.\n\nBest of luck\n\nhttps://www.cs.cmu.edu/~jgc/publication/Nonparametric%20Neural%20Networks.pdf" ]
[ 4, 8, 5, -1, -1, -1, -1, -1, -1, -1 ]
[ 5, 4, 4, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_SkffVjUaW", "iclr_2018_SkffVjUaW", "iclr_2018_SkffVjUaW", "iclr_2018_SkffVjUaW", "SkvTjWqxG", "SkvTjWqxG", "S1gFMVoeM", "BJWfJzTez", "BJ4OYp8Tb", "iclr_2018_SkffVjUaW" ]
iclr_2018_SJmAXkgCb
DNN Feature Map Compression using Learned Representation over GF(2)
In this paper, we introduce a method to compress intermediate feature maps of deep neural networks (DNNs) to decrease memory storage and bandwidth requirements during inference. Unlike previous works, the proposed method is based on converting fixed-point activations into vectors over the smallest GF(2) finite field followed by nonlinear dimensionality reduction (NDR) layers embedded into a DNN. Such an end-to-end learned representation finds more compact feature maps by exploiting quantization redundancies within the fixed-point activations along the channel or spatial dimensions. We apply the proposed network architecture to the tasks of ImageNet classification and PASCAL VOC object detection. Compared to prior approaches, the conducted experiments show a factor of 2 decrease in memory requirements with minor degradation in accuracy while adding only bitwise computations.
rejected-papers
The paper presents a technique for feature map compression at inference time. As noted by reviewers, the main concern is that the method is applied to one NN architecture (SqueezeNet), which severely limits its impact and applicability to better performing state-of-the-art models.
train
[ "rJbz1nrgM", "SJG0Ga5ef", "BJ46Rwjez", "ByqVft3GG", "HyQcdMfGM", "S1uluMGGM", "BJR2wfGGG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "The method of this paper minimizes the memory usage of the activation maps of a CNN. It starts from a representation where activations are compressed with a uniform scalar quantizer and fused to reduce intermediate memory usage. This looses some accuracy, so the contribution of the paper is to add a pair of convolution layers in the binary domain (GF(2)) that are trained to restore the lost precision. \n\nOverall, this paper seems to be a nice addition to the body of works on network compression. \n\n+ : interesting approach and effective results. \n\n+ : well related to the state of the art and good comparison with other works. \n\n- : somewhat incremental. Most of the claimed 100x compression is due to previous work.\n\n- : impact on runtime is not reported. Since there is a caffe implementation it would be interesting to have an additional column with the comparative execution speeds, even if only on CPU. I would expect the FP32 timings to be hard to beat, despite the claims that it uses only binary operations.\n\n- : the paper is sometimes difficult to understand (see below)\n\ndetailed comments: \n\nEquations (3)-(4) are difficult to understand. If I understand correctly, b just decomposes a \\hat{x} in {0..2^B-1} into its B bits \\tilda{x} \\in {0,1}^B, which can be then considered as an additional dimension in the activation map where \\hat{x} comes from. \n\nIt is not stated clearly whether P^l and R^l have binary weights. My understanding is that P^l has but R^l not.\n\n4.1 --> a discussion of the large mini-batch size (1024) could be useful. My understanding is that large mini-batches are required to use averaged gradients and get smooth updates. \n\nend of 4.1 --> unclear what \"equivalent bits\" means\n\n", "In order to compress DNN intermediate feature maps the authors covert fixed-point activations into vectors over the smallest finite field, the Galois field of two elements (GF(2)) and use nonlinear dimentionality reduction layers.\n\nThe paper reads well and the methods and experiments are generally described in sufficient detail.\n\nMy main concern with this paper and approach is the performance achieved. According to Table 1 and Table 2 there is a small accuracy benefit from using the proposed approach over the \"quantized\" SqueezeNet baseline. If I am weighing in the need to alter the network for the proposed approach in comparison with the \"quantized\" setting then, from practical point of view, I would prefer the later \"quantized\" approach.\n", "Strengths:\n- Unlike most previous approaches that suffer from significant accuracy drops for good feature map compression, the proposed method achieves reductions in feature map sizes of 1 order of magnitude at effectively no loss in accuracy.\n- Technical approach relates closely to some of the prior approaches (e.g., Iandola et al. 2016) but can be viewed as learning the quantization rather than relying on a predefined one.\n- Good results on both large-scale classification and object detection.\n- Technical approach is clearly presented.\n\nWeaknesses:\n- The primary downside is that the approach requires a specialized architecture to work well (all experiments are done with SqueezeNets). Thus, the approach is less general than prior work, which can be applied to arbitrary architectures.\n- From the experiments it is not fully clear what is the performance loss due to having to use the SqueezeNet architecture rather than state-of-the-art models. 
For example, for the image categorization experiment, the comparative baselines are for AlexNet and NIN, which are outdated and do not represent the state-of-the-art in this field. The object detection experiments are based on a variant of Faster R-CNN where the VGG16 feature extractor is replaced with a SqueezeNet model. However, the drop in accuracy caused by this modification is not discussed in the paper and, in any case, there are now much better models for object detection than Faster R-CNN.\n- In my view the strengths of the approach would be more convincingly conveyed visually with a plot reporting accuracy versus memory usage, rather than by the many numerical tables in the paper.\n\n", "Dear reviewers, we went through a major revision. Updates:\n1. We replaced Faster R-CNN with SSD detector. As we presumed, Faster R-CNN's hard limitation on mini-batch size = 1 was a limiting factor for downsampling-upsampling compression layers. Thanks to R2 who pointed us to this direction.\n\n2. The main concern of R2/R3 was the achieved compression gain. We agreed that the compression along channel dimension only was limited and, hence, concentrated on a more promising approach with downsampling-upsampling layers which allow to learn quantization redundancies along combined channel and local spatial dimensions. In the 1st revision of the paper we could get results with spatial-dimension compression on ImageNet classifier but was not able to do this for object detector due to #1. After switching to SSD and setting a reasonable mini-batch size of 256, we could apply such layers for detector as well. According to Table 1 and 2, this approach provides additional ~2x compression gain compared to prior works with minor accuracy degradation.\n\n3. Section 4.2/4.3 was rewritten to reflect #1 and #2 changes.\n\n4. Empirically, we found that 2x2 kernel with stride 2 works better (higher accuracy and less extra parameters) for SSD.", "> Overall, this paper seems to be a nice addition to the body of works on network compression.\n\nThank you.\n\n- : somewhat incremental. Most of the claimed 100x compression is due to previous work.\n\nWe agree that compressing along channel dimension improved compression by relatively small amount (~1/3) with comparable accuracy. At the same time:\na) We selected state-of-the-art architecture which is hard to compress unlike general unoptimized networks.\nb) Practically, even ~1/3 improvement may lead to significant improvements if off-chip bandwidth can be completely avoided.\nc) According to Table 1, the scheme with convolutional-deconvolutional layers compressed feature maps by another factor of 4 which is significant improvement.\nd) Due to lack of time/space we didn't experiment with multiple compression layers which is another option.\n\nUnfortunately, c) didn't work well for Faster R-CNN. We believe, it is because training of the Faster R-CNN is limited to batch size = 1. Currently, we try to get results for SSD detector which doesn't have such limitation.\n\n- : impact on runtime is not reported. Since there is a caffe implementation it would be interesting to have an additional column with the comparative execution speeds, even if only on CPU. I would expect the FP32 timings to be hard to beat, despite the claims that it uses only binary operations.\n\nWe have both CPU and GPU implementations and added speed numbers to Table 1. 
However, these numbers represent emulation speed because new quantization and compression layers are emulated on GPU in fp32 rather than efficiently processed. So, these numbers measure relative overhead for emulation of quantization and compression layers.\n\n- : the paper is sometimes difficult to understand (see below)\n\ndetailed comments:\n\n- : Equations (3)-(4) are difficult to understand. If I understand correctly, b just decomposes a \\hat{x} in {0..2^B-1} into its B bits \\tilda{x} \\in {0,1}^B, which can be then considered as an additional dimension in the activation map where \\hat{x} comes from.\n\nCorrect. We preferred a more formal and compact description to save some space.\n\n- : It is not stated clearly whether P^l and R^l have binary weights. My understanding is that P^l has but R^l not.\n\nWe do not consider weight quantization in this paper. All weights are floating-point from notation given in 3.2 and as stated in the 1st paragraph of 4.1. The only exception is Appendix A where weights are 8-bit integers to show benefits of optimized architecture compared to binarized networks in terms of weight size. We added some missing notation in the 2nd paragraph of Section 3.2 as well.\n\n- : 4.1 --> a discussion of the large mini-batch size (1024) could be useful. My understanding is that large mini-batches are required to use averaged gradients and get smooth updates.\n\nThank you, we added this discussion to 4.1 and 4.2. We used the original mini-batch size from SqueezeNet authors for image classification. To be precise, they set global mini-batch size = mini-batch_size * iter_size = 32 * 32 = 1024 in Caffe. So, global mini-batch size of 1024 is achieved by using 32 iterations each of size 32. Hence, large mini-batch is used to train such optimized architecture even in fp32. We agree that the large mini-batch allows to smooth quantization effects as well.\n\n- : end of 4.1 --> unclear what \"equivalent bits\" means\n\nThank you. We removed this to not to confuse readers and free some space. The idea was to differentiate between bits for binary vectors and integers.", "- The paper reads well and the methods and experiments are generally described in sufficient detail.\n\nThank you.\n\n> My main concern with this paper and approach is the performance achieved. According to Table 1 and Table 2 there is a small accuracy benefit from using the proposed approach over the \"quantized\" SqueezeNet baseline. If I am weighing in the need to alter the network for the proposed approach in comparison with the \"quantized\" setting then, from practical point of view, I would prefer the later \"quantized\" approach.\n\nWe agree that compressing along channel dimension improved compression by relatively small amount (~1/3) with comparable accuracy. At the same time:\na) We selected state-of-the-art architecture which is hard to compress unlike general unoptimized networks.\nb) Practically, even ~1/3 improvement may lead to significant improvements if off-chip bandwidth can be completely avoided.\nc) According to Table 1, the scheme with convolutional-deconvolutional layers compressed feature maps by another factor of 4 which is significant improvement.\nd) Due to lack of time/space we didn't experiment with multiple compression layers which is another option.\n\nUnfortunately, c) didn't work well for Faster R-CNN. We believe, it is because training of the Faster R-CNN is limited to batch size = 1. 
Currently, we try to get results for SSD which doesn't have such limitation.", "- The primary downside is that the approach requires a specialized architecture to work well (all experiments are done with SqueezeNets). Thus, the approach is less general than prior work, which can be applied to arbitrary architectures.\n\nWe selected SqueezeNet just because it is state-of-the-art in terms of feature map size (for non- binary/ternary networks). Then, we clarified previously unpublished aspects of this network when combined with the fusion approach, proposed unified model of feature map compression used in SqueezeNet and our method, and showed improvements compared to prior work. The same compression strategy can be applied to any network architecture by introducing compression layers.\n\n- From the experiments it is not fully clear what is the performance loss due to having to use the SqueezeNet architecture rather than state-of-the-art models. For example, for the image categorization experiment, the comparative baselines are for AlexNet and NIN, which are outdated and do not represent the state-of-the-art in this field.\n\nThe only competitive prior work in terms of feature map footprint are binary/ternary networks. Unfortunately, binary/ternary networks work well only for over-parametrized networks like AlexNet. We report ResNet-18 and NiN as well. These are the only published results we could find to compare to. We couldn't find reported binary/ternary networks derived from ImageNet state-of-the-art networks for the above reasons.\n\n- The object detection experiments are based on a variant of Faster R-CNN where the VGG16 feature extractor is replaced with a SqueezeNet model. However, the drop in accuracy caused by this modification is not discussed in the paper and, in any case, there are now much better models for object detection than Faster R-CNN.\n\nThank you, we added numbers for VGG-16 Faster R-CNN. Could you clarify what are much better models for object detection? We believe, Faster R-CNN, R-FCN, SSD and YOLO are the most popular approaches. For example, recent CVPR17 paper(https://arxiv.org/abs/1611.10012) accomplished comprehensive comparisons and showed that Faster R-CNN might be better in terms of speed/accuracy than others.\nNow we try to add results for SSD detector as well. We believe, that SSD would give us a better result than Faster R-CNN because training of the latter is limited to batch size = 1.\n\n- In my view the strengths of the approach would be more convincingly conveyed visually with a plot reporting accuracy versus memory usage, rather than by the many numerical tables in the paper.\n\nWe tried. Unfortunately, the linear scale doesn't look good in such plots due to big gaps in memory size numbers. We will try to make Tables more elegant looking.\n" ]
[ 7, 5, 4, -1, -1, -1, -1 ]
[ 3, 4, 4, -1, -1, -1, -1 ]
[ "iclr_2018_SJmAXkgCb", "iclr_2018_SJmAXkgCb", "iclr_2018_SJmAXkgCb", "iclr_2018_SJmAXkgCb", "rJbz1nrgM", "SJG0Ga5ef", "BJ46Rwjez" ]
iclr_2018_SJn0sLgRb
Data Augmentation by Pairing Samples for Images Classification
Data augmentation is a widely used technique in many machine learning tasks, such as image classification, to virtually enlarge the training dataset size and avoid overfitting. Traditional data augmentation techniques for image classification tasks create new samples from the original training data by, for example, flipping, distorting, adding a small amount of noise to, or cropping a patch from an original image. In this paper, we introduce a simple but surprisingly effective data augmentation technique for image classification tasks. With our technique, named SamplePairing, we synthesize a new sample from one image by overlaying another image randomly chosen from the training data (i.e., taking an average of two images for each pixel). By using two images randomly selected from the training set, we can generate N^2 new samples from N training samples. This simple data augmentation technique significantly improved classification accuracy for all the tested datasets; for example, the top-1 error rate was reduced from 33.5% to 29.0% for the ILSVRC 2012 dataset with GoogLeNet and from 8.22% to 6.93% in the CIFAR-10 dataset. We also show that our SamplePairing technique largely improved accuracy when the number of samples in the training set was very small. Therefore, our technique is more valuable for tasks with a limited amount of training data, such as medical imaging tasks.
rejected-papers
The paper proposes a data augmentation technique for image classification which consists in averaging two input images and using the label of one of them. The method is shown to outperform the baseline on the image classification task, but the evaluation doesn’t extend beyond that (to other tasks or alternative augmentation mechanisms); theoretical justification is also lacking.
train
[ "ryOCyetxf", "S1CYm85gM", "ryjXhymZM", "r1DpKzaQM", "Hy-FdzpXG", "Hy1zDzTmz", "SkqNBf6XG", "HJDYNmMff", "HJgwDzMff", "S1KnxyMGz", "Hkp3PtC-G", "ByZ_nVAbf", "HJJ1R06-z", "Hk0XcA6WM", "H1BRpQFbM", "BJZogjdZG", "r1zfLuSZf", "rybe_34-f", "HyMUHnsez", "SJXiG2oeM", "H1kIxTqxM", "r1dM9S5xf", "SyEr_7ceG", "BkCNMpTA-" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "public", "public", "public", "author", "public", "author", "public", "author", "public", "author", "public", "author", "public", "author", "author", "public", "author" ]
[ "The paper proposes a new data augmentation technique based on picking random image pairs and producing \na new average image which is associated with the label of one of the two original samples. The experiments show\nthat this strategy allows to reduce the risk of overfitting especially in the case of a limited amount of training \nsamples or in experimental settings with a small number of categories.\n\n+ The paper is easy to read: the method and the experiments are explained clearly.\n\n- the method is presented as a heuristic technique. \n1) The training process has some specific steps with the Sample Pairing intermittently disabled. \nThe number of epochs with enabled or disabled Sample Pairing changes depending on the dataset.\nHow much si the method robust/sensitive to variations on these choices?\n2) There is no specific analysis on the results besides showing the validation and training errors: would it\nbe possible to see the results per class? Would the confusion matrices reveal something more about the\neffect of the method? Does Sample Pairing help to differentiate similar categories even if they are mixed\nat trainign time?\n3) Would it be possible to better control the importance of each sample label rather\nthan always choosing one of the two as ground truth? \n\nThe paper misses an in-depth analysis of the proposed practical strategy.\n\n", "The paper investigates a method of data augmentation for image classification, where two images from the training set are averaged together as input, but the label from only one image is used as a target. Since this scheme is asymmetric and uses quite unrealistic input images, a training scheme is used where the technique is only enabled in the middle of training (not very beginning or end), and in an alternating on-off fashion. This improves classification performance nicely on a variety of datasets.\n\nThis is a simple technique, and the paper is concise and to the point. However, I would have liked to see a few additional comparisons.\n\nFirst, this augmentation technique seems to have two components: One is the mixing of inputs, but another is the effective dropping of labels from one of the two images in the pair. Which of these are more important, and can they be separated? What if some of the images' labels are changed at random, for half the images in a minibatch, for example? This would have the effect of random label changes, but without the input mixing. Likewise, what if both labels in the pair are used as targets (with 0.5 assigned to each in the softmax target)? This would mix the images, but keep targets intact.\n\nSecond, the bottom of p.3 says that multiple training procedures were evaluated, but I'd be interested to see the results of some of these. In particular, is it important to alternate enabling and disabling SamplePairing, or does it also work to mix samples with and without it in each minibatch (e.g. 3/4 of the minibatch with pairing augmentation, and 1/4 without it)?\n\nI liked the experiment mixing images from within a restricted training set composed of a subset of the CIFAR images, compared to mixing these images with CIFAR training set images outside the restricted sample (p.5 and Fig 5). This suggests to me, however, that it's possible the label manipulations may play an important role. 
Or, is an explanation why this performs not as well that the network will train these mixing images to random targets (that of the training image in the pair), and never see this example again, whereas by using the training set alone, the mixing image is likely to be repeated with its correct label? Some more discussion on this would be nice.\n\nOverall, I think this is an interesting technique that appears to achieve nice results. It could be investigated deeper at some key points.\n", "The paper reports that averaging pairs of training images improves image classification generalization in many datasets. \nThis is quite interesting. The paper is also straightforward to read and clear, which is positive. Overall i think the finding is of sufficient interest for acceptance.\n\nThe paper would benefit from adding some speculation on reasons why this phenomenon occurs.\nThere are a couple of choices that would benefit from more explanation / analysis: a) averaging, then forcing the classifier to pick one of the two classes present; why not pick both? b) the choice of hard-switching between sample pairing and regular training - it would be interesting if sample-pairing as an augmentation meshed better with other augmentations implementation-wise, so that it could be easier to integrate in other frameworks.", "Thank you so much for your comments.\nPlease refer the updates 1) and 2) in the above response.\nI am currently implementing SamplePairing in a sub-minibatch granularity. So far, I do not see the significant differences by using smaller granularity of enabling/disabling SamplePairing, e.g. disabling for one mini batch after enabling for four mini batches instead of disabling two epochs after enabling eight epochs. But I am going to add the data with different granularity including the sub-minibatch granularity.", "Thank you so much for your comments.\nPlease refer the updates in above response on three points you mentioned in the comment.\n\nI like to specially thank the advice on confusion matrix. I have never investigated it.\nOn average, SamplePairing gave improvements in classification of similar classes (e.g. two animals or two vehicles) or different classes (e.g. animal and vehicle). But I am doing further investigation on the characteristics of SamplePairing using confusion matrices.", "Thank you so much for your comments.\nPlease refer the updates 1) and 2) in above response on two points you mentioned in the comment (using two labels and switching between SamplePairing and regular training).\nI am adding more experiments on the second point (switching), e.g. using different granularity. I hope I can add more discussion on this point.\n", "First of all, we greatly thank the reviewers for their valuable comments. Also, we like to thank who made effort to reproduce our results.\nI updated the submission based on the comments from reviewers.\n\nThe major updates are:\n1) I added discussion on mixup, which is proposed in another submission (https://openreview.net/forum?id=r1Ddp1-Rb&;noteId=r1Ddp1-Rb), in related work.\nAlthough mixup does blending two samples as we do in this paper, mixup also blends labels from both samples while we pick only one. 
\nThere is a blog post by Ferenc Huszár (http://www.inference.vc/mixup-data-dependent-data-augmentation/), which points out that using label from one sample will give the same results by reformulating the loss function of mixup.\nI also tested using both labels in our SamplePairing and it did not give significant difference as show in Figure 7 (in Appendix).\n\n2) In this paper, we intermittently disable SamplePairing in 20% of the epochs. I added Figure 6 on how this ratio affects the final results to answer the reviewers' questions. By intermittently disabling SamplePairing, we can get small improvements compared to the case without disabling SamplePairing. But this improvement is minor compared to the improvements by SamplePairing itself; hence the training with SamplePairing is not so sensitive to this (potentially workload dependent) tuning parameter. \n\n3) I added confusion matrices with and without SamplePairing to show how samples in each class are predicted in Figure 8 (in Appendix). \n", "We attempted to reproduce the results of this paper. The authors were quite clear in the implementation procedure for the Samplepairing technique as a data augmentation method. They clearly described the steps they took for their experiments. Due to unavailability of published code, created our own Samplepairing, cropping and flipping methods. We choose the CIFAR-10 dataset for our reproducibility challenge given its relatively low computational complexity. \n\nOur results coincide with the author’s in that Samplepairing does improve the classification performance of the classifier on the validation set. However, we were unable to achieve the levels of accuracy stated in the paper with the CNN architecture. Our validation error rate for the classifier trained on Samplepaired data was 0.139, a 33.8% improvement over the baseline result. Naturally, this different could be down to assumptions we made about unknown factors in the experimental procedure particular pertaining the 6 layered CNN architecture as well as the random nature of the augmentation techniques. However, we were able to achieve similar results on the validation set using the same training horizon and classifier on the original 32 by 32 CIFAR-10 dataset without any data augmentations. It would certainly have been helpful if a comparison without any augmentation techniques would have been used as a control. More details about the CNN architecture used on the smaller datasets as well as what steps the authors took to finetune the classifier would also have made the results easier to reproduce. Computational cost was also a factor considering the large training horizons and the large number of augmentations per epoch, mkaing reproducing the results all the more challenging. \n\nDespite the difference, our results also show that Samplepairing helps improve validation performance, lowering variance at the cost of higher bias as displayed by the higher training error. We believe that with finetuning our model would yield accuracies close to what the author's saw in their experiments. The paper itself was concise and very explicit about the details pertaining to the Samplepairing methodology as well as the augmentation techniques used which certainly helped in reproducing it. \n\nThe detailed report of our analysis is accessible on :https://drive.google.com/open?id=1bVwqbcQXVkNRju2Schi_oPHS5p_VAAJp\n", "Many machine learning algorithms are limited by the availability and the amount of data. 
Running a deep neural network on a small dataset generally results in overfitting without careful adaptation, and does not generalize well for unseen data. This translates to the loss of the algorithm's predictive power. Data augmentation techniques are common to circumvent this problem. Basic data augmentation such as adding noise, randomly cropping and flipping patches of pixels have shown to reduce overfitting and increase the overall model's robustness to noise. \n\nThis paper introduces a novel data augmentation technique called SamplePairing for image classification tasks. For each image during training, another image is randomly sampled with replacement from the same training set, and the pairs of images are merged as input. The label of the original image is set as the ground truth label for the newly created sample. This data augmentation technique is simple, and it does not require additional data outside of the original one. The latter is absolutely crucial for small datasets.\n\nOur team mainly focused on reproducing the relevant results for the CIFAR-10 dataset. This is a well studied dataset and is readily available online with detail description as well as instruction to extract it. The source code for SamplePairing technique is not available in the paper, but can be easily implemented. The architecture of the 6 layers convoluted neural networks (CNN) used is also not presented in the paper, but is released by the authors after inquiring on open review . All of the algorithms are implemented in Python. Keras was used for implementing the CNN instead of the Chainer framework used by the authors; but, since both API's are extremely similar, we do not think this decision would have an impact on our results. A major challenge for reproducing the experiment is that all of the data augmentation are determined through random numbers. And since no details about the random number generators (random seed) are communicated by the authors, to obtain the exact same results would require us to perform all of the possible combinations of data augmentation on each sample, which is unfeasible. Another challenge is the computational resources and time that these experiments required. To train on the full dataset, roughly 24 hours was needed with a Nvidia Tesla P100 GPU. The details of how the authors validate their results were also missing, as the nature of the validation error rates presented in the table is unknown. A rather uncommon practice for validating the performance was also employed by the authors, where the entire training set is used as input for the CNN, and the entire testing set is used as validation set. \n\nAfter carefully following the procedures presented in the paper, we obtained a final validation accuracy at the last epoch of 90.89% for the full dataset with and without using SamplePairing. A slight improvement can be observed, however, if we investigate the past 20 epochs, which yielded a 0.41% increase of accuracy on average. One significant difference in the behavior of validation error rates during each epochs in SamplePairing phase is that the gradual decrease in error rates was not present in our results, instead the validation error rates kept relatively constant. Our validation error rates were also much higher than that presented in the paper throughout the SamplePairing phase. The effect of applying SamplePairing was also studied for datasets that have smaller numbers of samples per class. 
The subsets are extracted from the original dataset with 2500, 500, 100, 20, and 10 samples per class, randomly and respectively. Based on the accuracy, only the dataset with 20 samples per class yielded a better accuracy when SamplePairing is used, the other subsets have a poorer performance when SamplePairing is applied. \n\nOverall, the paper presents the SamplePairing technique, and the training procedures in a clear and concise manner. It is easy to read even for readers that are not familiar with the relevant literature. The results of the paper can be interpreted straight forwardly with figures, where the effects of applying SamplePairing is strongly contrasted. Based on the results, by using this data augmentation technique, it can help reduce overfitting and even obtain reasonable results for small datasets.\n\nThe lack of source code for this paper greatly contributed to the difficulty of reproducing the same results. Despite of the fact that our results do not all agree with what is presented in the paper, we believe that with more fine tuning of CNN's hyperparameters, and more experiments, we can achieve the same conclusion as what is presented in the paper. \n\n", "Hello everyone,\n\nMy team and I aimed to reproduce the results as presented in the paper, under the ICLR 2018 Reproducibility Challenge. Due to limited computation resources, we have only reproduced the paper for the CIFAR-10 dataset. \n\nWe reproduced almost similar trends as produced by the paper for training and validation dataset with 5000 samples per class(Refer Report).The validation error reduced significantly in the fine-tuning phase as claimed by the paper. The training error in case of SamplePairing came out to be comparatively higher than that without SamplePairing, hence proving that SamplePairing avoids overfitting. In the paper, it is mentioned that the validation error for CIFAR-10 datasets is decreased by 15.68 %. As the paper hasn't mentioned anything about samples per class in the table for CIFAR-10 we assume them to be taking full dataset with 5000 samples per class. When we reproduced the procedure, we got a reduction in validation error rate by 16.61% which is pretty similar when compared with the result given in the paper. However, for the dataset with smaller samples per class, this particular graph became pretty irregular as we proceeded due to limited dataset and overfitting (See APPENDIX in the Report). Although the final trend in all the samples per class is decreased validation error when trained with SamplePairing, there is an exception in one dataset where we took 500 samples per class. That variance might have arrived because of changes in batch size for lower samples per class. The paper hasn’t mentioned explicitly about the batch size for smaller samples per class, which made us experiment with different values. Although we were able to produce the similar trends for 5000, 2500, 100, 20, and 10 samples per class respectively. We are not able to produce a similar trend for 500 samples per class even after experimenting with a number of different batch size. The relation of trends between SamplePairing within the test set and outside the test set keep on varying with for different batch size and hence no conclusion can be drawn from that. The decrease in validation error for 100 samples per class in our implementation is in accordance with the trend mentioned in the paper, however, the value is not too similar. 
In the paper, a 28% reduction in validation error rate is there however, we got a fairly low error reduction i.e. 10.08%. It was said that SamplePairing within the training data produced more effective results. This applied to our results too but not in all cases, as the validation error of SamplePairing outside the training dataset is often higher but in some cases almost similar or lower to validation error of SamplePairing within the training dataset.\n\nWe have made a detailed analysis and put it all together in a report. Please access it here https://goo.gl/kN27Cp", "All augmentations (crop, flip, pairing) are per epoch based on random numbers.", "Perfect, than you so much for your responses. One last part we want to get right is whether the augmentations change per epoch for the baseline? As in do you reflip and recrop to create new data for every epoch or just do it once and keep training on that data?", "In fine tuning part, I just stop applying SamplePairing. The basic data augmentations, drop out etc are still active during the fine tuning phase.", "Thank you for your reply. I had a further query about the fine tuning part. Can you describe what steps you took during that phase? Did you just let the model train on the original cropped data for that duration (because we don't see any spikes representative of the sample paired data)?", "In each epoch, we generate one (but not all) sample for each input sample. Since we use random number generator, the generated patches are different for epoch by epoch. The size of the extracted patch (i.e. input of the classifier) is 28x28 for CIFAR, not the original image size of 32x32, as you can see in above network design.", "Hi, I have a follow up question. \nEach time you do basic data augmentation, do you generate all possible combinations of patches + flipping, or do you keep the data size to be the same as N (original sample size)?", "Thank you so much for your effort for reproducing our results.\n1) I am sorry, but not yet published.\n2) Yes, the baseline uses the flipping and cropping. I will make the paper more clearer on this point.\n3) I use softmax_cross_entropy function provided by Chainer framework. (http://docs.chainer.org/en/stable/reference/generated/chainer.functions.softmax_cross_entropy.html)\n", "Hello, \n\nMy team and I are attemption to reproduce the results of your paper and had a few queries:\n\n1) Is any code available for the experiments you performed?\n2) For the baseline results (without sample pairing) on the CIFAR-10 and CIFAR-100, did you use any augmentation methods such as flipping and cropping the images or simple feed in the raw images?\n3) What loss function did you use for the CNN training?", "Each image is cropped and random flipped differently for each epoch based on random numbers, not only once before training.", "Thanks for the quick and detailed reply!\n\nJust to make sure: were each of 50000 images randomly flipped and cropped (28x28 patch) at a random place before being introduced to the network for each epoch? In other words, are each of the training images slightly altered and therefore different for each non-SamplePairing epoch? Or were each of the 50000 images randomly flipped and cropped before training occured and so that the training set is identical for each non-SamplePairing epoch?", "In the above network structure, all convolutions are 3x3 size with padding to keep the size. 
", "Thank you so much for your effort!\n\n> Structure of the network\n(input 28x28x3)\nBatchNorm\nConv 64\nRELU\nBatchNorm\nConv 96\nRELU\nMaxPool 2x2\nBatchNorm\nConv 96\nRELU\nBatchNorm\nConv 128\nRELU\nMaxPool 2x2\nBatchNorm\nConv 128\nRELU\nBatchNorm\nConv 192\nRELU\nMaxPool 2x2\nBatchNorm\nDropOut 40% dropped\nFullConnect 512\nRELU\nDropOut 30% dropped\nFullConnect 10 (100 for CIFAR-100)\nSoftMax\n\n> What fraction of the training data was put aside for the validation set?\nFor CIFAR-10, I used 50,000 images included in data_batch_* for training (except for experiments shown in Figure 5). For validation set, I used 10,000 images in test_batch.\n\n> Was the training set fabricated by fully using the two basic augmentation techniques (e.g. N samples -> 2048N samples)?\nYes. When we test validation images, we extract 28x28 patch from center of the image without ensembling.\n\n> For training on the CIFAR-10 dataset, how many images were used during each SamplePairing epoch and each non-SamplePairing epoch?\nFor each epoch (with or without SamplePairing), all 50,000 training images were fed into the training for CIFAR datasets.\n", "I am attempting to reproduce the results described in this paper. \nI have a few questions:\nWhat is the exact structure of the network trained on the CIFAR-10 dataset?\nWhat fraction of the training data was put aside for the validation set?\nWas the training set fabricated by fully using the two basic augmentation techniques (e.g. N samples -> 2048N samples)?\nFor training on the CIFAR-10 dataset, how many images were used during each SamplePairing epoch and each non-SamplePairing epoch?\n\nThank you.", "I found there is another submission discussing a quite similar technique.\nmixup: Beyond Empirical Risk Minimization\nhttps://openreview.net/forum?id=r1Ddp1-Rb&noteId=r1Ddp1-Rb\n" ]
[ 4, 5, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 5, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_SJn0sLgRb", "iclr_2018_SJn0sLgRb", "iclr_2018_SJn0sLgRb", "S1CYm85gM", "ryOCyetxf", "ryjXhymZM", "iclr_2018_SJn0sLgRb", "iclr_2018_SJn0sLgRb", "iclr_2018_SJn0sLgRb", "iclr_2018_SJn0sLgRb", "ByZ_nVAbf", "HJJ1R06-z", "Hk0XcA6WM", "r1zfLuSZf", "BJZogjdZG", "HyMUHnsez", "rybe_34-f", "iclr_2018_SJn0sLgRb", "SJXiG2oeM", "H1kIxTqxM", "r1dM9S5xf", "SyEr_7ceG", "iclr_2018_SJn0sLgRb", "iclr_2018_SJn0sLgRb" ]
iclr_2018_Sy3fJXbA-
Connectivity Learning in Multi-Branch Networks
While much of the work in the design of convolutional networks over the last five years has revolved around the empirical investigation of the importance of depth, filter sizes, and number of feature channels, recent studies have shown that branching, i.e., splitting the computation along parallel but distinct threads and then aggregating their outputs, represents a new promising dimension for significant improvements in performance. To combat the complexity of design choices in multi-branch architectures, prior work has adopted simple strategies, such as a fixed branching factor, the same input being fed to all parallel branches, and an additive combination of the outputs produced by all branches at aggregation points. In this work we remove these predefined choices and propose an algorithm to learn the connections between branches in the network. Instead of being chosen a priori by the human designer, the multi-branch connectivity is learned simultaneously with the weights of the network by optimizing a single loss function defined with respect to the end task. We demonstrate our approach on the problem of multi-class image classification using four different datasets where it yields consistently higher accuracy compared to the state-of-the-art ``ResNeXt'' multi-branch network given the same learning capacity.
rejected-papers
The paper proposes a method for learning connectivity in neural networks, evaluated on the ResNeXt architecture. The novelty of the method is rather limited, and even though the method has been shown to improve on the ResNeXt baselines on CIFAR-100 and ImageNet classification tasks (which is encouraging), it should have been evaluated on more architectures and datasets to confirm its generality.
train
[ "ByyJKKXgz", "BJ9DfkxWM", "HJykWAMWG", "B1Vwd8pQf", "rJa9dLT7M", "B1HmOUTQM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "The authors extend the ResNeXt architecture. They substitute the simple add operation with a selection operation for each input in the residual module. The selection of the inputs happens through gate weights, which are sampled at train time. At test time, the gates with the highest values are kept on, while the other ones are shut. The authors fix the number of the allowed gates to K out of C possible inputs (C is the multi-branch factor in the ResNeXt modules). They show results on CIFAR-100 and ImageNet (as well as mini ImageNet). They ablate the choice of K, the binary nature of the gate weights.\n\nPros:\n(+) The paper is well written and the method is well explained\n(+) The authors ablate and experiment on large scale datasets\n\nCons:\n(-) The proposed method is a simple extension of ResNeXt \n(-) The gains are reasonable, yet not SOTA, and come at a price of more complex training protocols (see below)\n(-) Generalization to other tasks not shown\n\nThe authors do a great job walking us through the formulation and intutition of their proposed approach. They describe their training procedure and their sampling approach for the gate weights. However, the training protocol gets complicated with the introduction of gate weights. In order to train the gate weights along with the network parameters, the authors need to train the parameters jointly followed by the training of only the network parameters while keeping the gates frozen. This makes training of such networks cumbersome.\n\nIn addition, the authors report a loss in performance when the gates are not discretized to {0,1}. This means that a liner combination with the real-valued learned gate parameters is suboptimal. Could this be a result of suboptimal, possibly compromised training? \n\nWhile the CIFAR-100 results look promising, the ImageNet-1k results are less impressive. The gains from introducing gate weights in the input of the residual modules vanish when increasing the network size. \n\nLast, the impact of ResNeXt/ResNet lies in their ability to generalize to other tasks. Have the authors experimented with other tasks, e.g. object detection, to verify that their approach leads to better performance in a more diverse set of problems?\n", "The paper is clear and well written.\nIt is an incremental modification of prior work (ResNeXt) that performs better on several experiments selected by the author; comparisons are only included relative to ResNeXt.\n\nThis paper is not about gating (c.f., gates in LSTMs, mixture of experts, etc) but rather about masking or perhaps a kind of block sparsity, as the \"gates\" of the paper do not depend upon the input: they are just fixed masking matrices (see eq (2)).\n\nThe main contribution appears to be the optimisation procedure for the binary masking tensor g. But this procedure is not justified: does each step minimise the loss? This seems unlikely due to the sampling. Can the authors show that the procedure will always converge? It would be good to contrast this with other attempts to learn discrete random variables (for example, The Concrete Distribution: Continuous Relaxation of Continuous Random Variables, Maddison et al, ICLR 2017).\n", "The paper proposes replacing each layer in a standard (residual) convnet with a set of convolutional modules which are run in parallel. The input to each model is a sparse sum of the outputs of modules in the previous set. 
The paper shows marginal improvements on image classification datasets (2% on CIFAR, .2% on ImageNet) over the ResNeXt architecture that they build on. \n\nPros:\n- The connectivity is constrained to be sparse between modules, and it is somewhat interesting that this connectivity can be learned with algorithms similar to those previously proposed to learn binary weights. Furthermore, this learning extends to large-scale image datasets.\n- There is indeed a boost in classification performance, and the approach shows promise for automatically reducing the number of parameters in the network.\n\nCons:\n- Overall, the approach seems to be an incremental improvement over the previous work ResNeXt.\n- The datasets used are not very interesting: Cifar is too small, and ImageNet is essentially solved. From the standpoint of the computer vision community, increasing performance on these datasets is no longer a meaningful objective.\n- The modifications add complexity.\n\nThe paper is well written and conceptually simple. However, I feel the paper demonstrates neither enough novelty nor enough of a performance gain for me to advocate acceptance. ", "We thank you for the insightful comments.\n\n* “Comparisons are only included relative to ResNeXt.”\n\nSince in the paper we chose to apply our connectivity learning to ResNeXt architectures, we use the ResNeXt performance as a baseline to assess the accuracy gain enabled by our method.\n\n\n* “This paper is not about gating but rather about masking... ”\n\nThis is a good point. We changed “gate” to “mask” in the updated version of our paper. \n\n\n* “The main contribution appears to be the optimisation procedure for the binary masking tensor g. But this procedure is not justified: does each step minimise the loss? This seems unlikely due to the sampling. Can the authors show that the procedure will always converge? It would be good to contrast this with other attempts to learn discrete random variables (for example, The Concrete Distribution: Continuous Relaxation of Continuous Random Variables, Maddison et al, ICLR 2017).”\n\nThe main contribution of our work is not a method to learn discrete random variables, but rather an algorithm for connectivity learning. To achieve this goal we do make use of learnable discrete random variable. It could very well be that other discrete optimization methods will lead to further improvements in our connectivity learning framework. But testing such methods is beyond the scope of this work, which is merely focused on the application of connectivity learning rather than optimization of discrete random variables.\n\n* “Does each step minimise the loss? This seems unlikely due to the sampling.”\n\nThe algorithm is not guaranteed to reduce the loss at each iteration. But the deep learning literature includes many examples of methods/procedures that have no guarantee of reducing the original loss and yet are routinely adopted in practice due to their empirical effectiveness. Examples include dropout or batch normalization. Similarly, we believe that our method may be a useful tool in certain scenarios, given that it enables consistent accuracy improvements at a small additional computational cost.", "We thank the reviewer for the useful observations. We address the individual questions/comments below.\n\n* “The proposed method is a simple extension of ResNeXt.”\n\nWe point out that our algorithm is a general procedure for connectivity learning in multi-branch networks. 
We chose to demonstrate it on ResNext, since it is one of the state-of-the-art architectures for image categorization. However, the method can be applied without modifications to any other multi-branch architecture. Furthermore, it is not even tied to the problem of image categorization, as it can use any arbitrary loss function. Thus, we disagree with the characterization that it is merely an extension of ResNeXt.\n\n* “Loss in performance when the gates are not discretized.”\n\nThe training algorithm for the binary gates (GateConnect as shown in Algorithm 1) is substantially different from the training algorithm used to train the model with real-valued gates (mentioned in page 6). First, GateConnect performs sampling of gate weights to activate K branches while the training with real-valued gates does not use sampling since all branches are activated at all times. Second, in GateConnect the gradient of the loss function is calculated w.r.t. to the binary gate values (as shown in Algorithm 1, parameter update step); whereas in the case of training with real-valued gates, the gradient of the loss function is calculated w.r.t. the real-valued gates. Therefore, the loss in performance is not due to suboptimality. It is due to the different training procedure. We found that the forward and backward propagation using stochastically-sampled binary gates yields a larger exploration of connectivities and results in bigger changes of the auxiliary real-valued gates, which in turn leads to better connectivity learning. \n\n \n* “The training protocol gets complicated with the introduction of gate weights... The authors need to train the parameters jointly followed by the training of only the network parameters while keeping the gates frozen. This makes training of such networks cumbersome.”\n\nOverall, the proposed procedure remains straightforward. Evidence of this is the fact the entire algorithm used in the first stage can be summarized in a few lines of pseudo-code, as illustrated in Algorithm 1. The second stage simply involves freezing the binary gate weights and performing standard backpropagation. Considering that this two-stage procedure results consistently in a gain in accuracy, we believe that it will be of interest to the community despite the slightly increase in complexity. We note that we will be releasing the software of our approach for reproducibility and to allow other researchers to use it without having to reimplement it.\n\n* “The impact of ResNeXt/ResNet lies in their ability to generalize to other tasks. Have the authors experimented with other tasks, e.g. object detection?“\n\nIn order to show the generalization ability of our approach, in this work we conducted experiments using many different model specifications and a wide variety of datasets, albeit all focused on image categorization. We plan to apply our approach to other tasks in future work. \n", "We thank the reviewer. We address the questions/comments below.\n\n* “The approach seems to be an incremental improvement over the previous work ResNeXt... I feel the paper demonstrates neither enough novelty nor enough of a performance gain for me to advocate acceptance.” \n\nOur approach is not an incremental improvement over ResNeXt. It is a general procedure to learn connectivity in multi-branch architectures. We chose to demonstrate it using ResNeXt architectures due to their strong performance. But we expect our method to be beneficial for other multi-branch models. 
Furthermore we note that accuracy gains are consistently obtained in all our experiments. Thus, we believe that researchers would be interested in using our method where even moderate performance improvements are critical. Finally, we are unaware of any other connectivity learning algorithm using an approach closely similar to ours. Thus, we disagree with the criticism of scarce novelty.\n\n\n* ”The datasets used are not very interesting: Cifar is too small, and ImageNet is essentially solved. From the standpoint of the computer vision community, increasing performance on these datasets is no longer a meaningful objective.”\n\nWe believe that few computer vision researchers would agree with this statement. While deep networks have achieved super-human performance on ImageNet, object categorization is far from being considered a solved problem and ImageNet remains today the most established benchmark for this task. Finally, we want to point out that our approach is very general and it is applicable without modifications to other tasks, different from image categorization. We chose to validate it on image categorization merely because of our interest in this application area and because manual design of CNNs for image analysis remains today a challenging endeavor. \n" ]
[ 5, 5, 5, -1, -1, -1 ]
[ 5, 4, 4, -1, -1, -1 ]
[ "iclr_2018_Sy3fJXbA-", "iclr_2018_Sy3fJXbA-", "iclr_2018_Sy3fJXbA-", "BJ9DfkxWM", "ByyJKKXgz", "HJykWAMWG" ]
iclr_2018_rJoXrxZAZ
HybridNet: A Hybrid Neural Architecture to Speed-up Autoregressive Models
This paper introduces HybridNet, a hybrid neural network to speed up autoregressive models for raw audio waveform generation. As an example, we propose a hybrid model that combines an autoregressive network named WaveNet and a conventional LSTM model to address speech synthesis. Instead of generating one sample per time-step, the proposed HybridNet generates multiple samples per time-step by exploiting the long-term memory utilization property of LSTMs. In the evaluation, when applied to text-to-speech, HybridNet yields state-of-the-art performance. HybridNet achieves a 3.83 subjective 5-scale mean opinion score on US English, largely outperforming a same-size WaveNet in terms of naturalness and providing a 2x speed-up at inference.
rejected-papers
The paper presents a hybrid architecture which combines WaveNet and LSTM for speeding up raw audio generation. The novelty of the method is limited, as it’s a simple combination of existing techniques. The practical impact of the approach is rather questionable since the generated audio has significantly lower MOS scores than the state-of-the-art WaveNet model.
test
[ "r16uKJ5gG", "ryOLIn5lf", "ByDRVIuZG", "Sk3XOcp7f", "Byp0z9TQz", "rJ43n56XM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "public", "public", "public" ]
[ "TL;DR of paper: for sequential prediction, in order to scale up the model size without increasing inference time, use a model that predicts multiple timesteps at once. In this case, use an LSTM on top of a Wavenet for audio synthesis, where the LSTM predicts N steps for every Wavenet forward pass. The main result is being able to train bigger models, by increasing Wavenet depth, without increasing inference time.\n\nThe idea is simple and intuitive. I'm interested in seeing how well this approach can generalize to other sequential prediction domains. I suspect that it's easier in the waveform case because neighboring samples are highly correlated. I am surprised by how much an improvement \n\nHowever, there are a number of important design decisions that are glossed over in the paper. Here are a few that I am wondering about:\n* How well do other multi-step decoders do? For example, another natural choice is using transposed convolutions to upsample multiple timesteps. Fully connected layers? How does changing the number of LSTM layers affect performance?\n* Why does the Wavenet output a single timestep? Why not just have the multi-step decoder output all the timesteps?\n* How much of a boost does the separate training give over joint training? If you used the idea suggested in the previous point, you wouldn't need this separate training scheme.\n* How does performance vary over changing the number of steps the multi-step decoder outputs?\n\nThe paper also reads like it was hastily written, so please go back and fix the rough edges.\n\nRight now, the paper feels too coupled to the existing Deep Voice 2 system. As a research paper, it is lacking important ablations. I'll be happy to increase my score if more experiments and results are provided.", "This paper presents HybridNet, a neural speech (and other audio) synthesis system (vocoder) that combines the popular and effective WaveNet model with an LSTM with the goal of offering a model with faster inference-time audio generation.\n\nSummary: The proposed model, HybridNet is a fairly straightforward variation of WaveNet and thus the paper offers a relatively low novelty. There is also a lack of detail regarding the human judgement experiments that make the significance of the results difficult to interpret. \n\nLow novelty of approach / impact assessment:\nThe proposed model is based closely on WaveNet, an existing state-of-the-art vocoder model. The proposal here is to extend WaveNet to include an LSTM that will generate samples between WaveNet samples -- thus allowing WaveNet to sample at a lower sample frequency. WaveNet is known for being relatively slow at test-time generation time, thus allowing it to run at a lower sample frequency should decrease generation time. The introduction of a local LSTM is perhaps not a sufficiently significant innovation. \n\nAnother issue that lowers the assessment of the likely impact of this paper is that there are already a number of alternative mechanism to deal with the sampling speed of WaveNet. In particular, the cited method of Ramachandran et al (2017) uses caching and other tricks to achieve a speed up of 21 times over WaveNet (compared to the 2-4 times speed up of the proposed method). The authors suggest that these are orthogonal strategies that can be combined, but the combination is not attempted in this paper. There are also other methods such as sampleRNN (Mehri et al. 2017) that are faster than WaveNet at inference time. 
The authors do not compare to this model.\n\nInappropriate evaluation:\nWhile the model is motivated by the need to reduce the generation of WaveNet sampling, the evaluation is largely based on the quality of the sampling rather than the speed of sampling. The results are roughly calibrated to demonstrate that HybridNet produces higher quality samples when (roughly) adjusted for sampling time. The more appropriate basis of comparison is to compare sample time as a function of sample quality. \n\nExperiments:\nFew details are provided regarding the human judgment experiments with Mechanical Turkers. As a result it is difficulty to assess the appropriateness of the evaluation and therefore the significance of the findings. I would also be much more comfortable with this quality assessment if I was able to hear the samples for myself and compare the quality of the WaveNet samples with HybridNet samples. I will also like to compare the WaveNet samples generated by the authors' implementation with the WaveNet samples posted by van den Oord et al (2017). \n\n\nMinor comments / questions:\n\nHow, specifically, is validation error defined in the experiments? \n\nThere are a few language glitches distributed throughout the paper. \n", "By generating multiple samples at once with the LSTM, the model is introducing some independence assumptions between samples that are from neighbouring windows and are not conditionally independent given the context produced by Wavenet. This reduces significantly the generality of the proposed technique.\n\nPros:\n- Attempting to solve the important problem of speeding up autoregressive generation.\n- Clarity of the write-up is OK, although it could use some polishing in some parts.\n- The work is in the right direction, but the paucity of results and lack of thoroughness reduces somewhat the work's overall significance.\n\nCons:\n- The proposed technique is not particularly novel and it is not clear whether the technique can be used to get speed-ups beyond 2x - something that is important for real-world deployment of Wavenet.\n- The amount of innovation is on the low side, as it involves mostly just fairly minor architectural changes.\n- The absolute results are not that great (MOS ~3.8 is not close to the SOTA of 4.4 - 4.5)\n\n\n", "We thank the reviewer's great feedback. In terms of your question: \n* How well do other multi-step decoders do?\nYes, we have the same question at the early stage of the project. We tried a variety of approaches to generate multiple samples, including a transposed convolution, a vanilla RNN, a high-way, etc. None of them get comparable performance to LSTMs. \n\n* Why does the Wavenet output a single timestep? Why not just have the multi-step decoder output all the timesteps? \nWe tried having multi-step decoder to output all timesteps, but unintuitively it is worse than having one sample generated by WaveNet. As pointed out in the result section, LSTM can effectively reduce variance in the output distribution, but this also could reduce the sharpness and naturalness of the audio. \n\n* How much of a boost does the separate training give over joint training? If you used the idea suggested in the previous point, you wouldn't need this separate training scheme.\nThe audio quality is substantially better with ground-truth training. Thanks for the suggestion, we will try this idea out. 
\n\n* How does performance vary over changing the number of steps the multi-step decoder outputs?\nThe inference time can be drastically reduced (~2x each time step added) by increasing the number of steps. The audio quality will not degrade noticeably until 6-7 steps (~32-64x speed up) compared to base line. \n\n", "We thank the reviewer's feedback but we do feel the hybrid method has its own merits. It is orthogonal to existing techniques including caching. Even with caching, the critical path caused by dependencies between samples still exists (You cannot generate the next sample earlier). Caching does not fundamentally address this dependency problem. Also caching is subject to hardware. For a different hardware platform (e.g. mobile), there might not be sufficient cache or memory for this purpose. \n\nMathematically, the hybrid method can be built on top of caching, and still achieve 2x-4x speedup. For instance, caching reduces per sample generation time by k (~20). The total generation time for a full utterance of n samples would be n*(1/k)*T, where T is the original per sample generation time. With the hybrid method, it can be further reduced to n*(1/k)*T*(1/4). \n\nWith respect to the evaluation, we do have a figure of comparison of inference time (Figure 6). We feel it is a fair comparison when we fix the accuracy while comparing the inference time. And yes, we agree that the key point of the paper is not to improve accuracy, thus the figures should better convey the key point (reference time). \n\nIn terms of the definition of validation error, we partition the training data into 5% validation data and 95% training data and run validation every 250 iterations. It is not the final test error. Audio quality is measured with MOS as described in the result section. \n\nIn terms of audio quality, yes we feel confident to upload samples. The Mechanical Turkers consistently gives better MOS scores for this hybrid model, compared to a WaveNet. Me, personally, listened the samples many times and can confirm that the scores reflect the quality. \nWe would love to compare with samples posted by van den Oord et al (2017). \n\n", "We really appreciate the reviewer's comments. We also really like Reviewer1's feedback that accuracy is not the main purpose of this paper. We are not trying to outperform SOTA in terms of accuracy but only provide a way to speedup an autoregressive model like WaveNet. We understand that the WaveNet team also have made great progress improving their MOS scores using various techniques (please find their recent paper :) ), but even with those changes, our technique can still be applied to a model that is fundamentally a \"WaveNet\" and still achieve 2-4x speedup. \n\nLike we explained to Reviewer1, mathematically, the hybrid method can be built on top of other techniques including caching, and still achieve 2x-4x speedup. For instance, caching reduces per sample generation time by k (~20). The total generation time for a full utterance of n samples would be n*(1/k)*T, where T is the original per sample generation time. With the hybrid method, it can be further reduced to n*(1/k)*T*(1/4). \n\nThe speedup can be beyond 2x. The inference time can be drastically reduced (~2x each time step added) by increasing the number of steps produced by the LSTM. The audio quality will not degrade noticeably until 6-7 steps (~32-64x speed up) compared to base line. We would love to add more evaluation in future version. " ]
[ 6, 4, 4, -1, -1, -1 ]
[ 5, 5, 5, -1, -1, -1 ]
[ "iclr_2018_rJoXrxZAZ", "iclr_2018_rJoXrxZAZ", "iclr_2018_rJoXrxZAZ", "r16uKJ5gG", "ryOLIn5lf", "ByDRVIuZG" ]
iclr_2018_S1NHaMW0b
ShakeDrop regularization
This paper proposes a powerful regularization method named \textit{ShakeDrop regularization}. ShakeDrop is inspired by Shake-Shake regularization, which decreases error rates by disturbing learning. While Shake-Shake can be applied only to ResNeXt, which has multiple branches, ShakeDrop can be applied not only to ResNeXt but also to ResNet, Wide ResNet and PyramidNet in a memory-efficient way. An important and interesting feature of ShakeDrop is that it strongly disturbs learning by multiplying the output of a convolutional layer by even a negative factor in the forward training pass. The effectiveness of ShakeDrop is confirmed by experiments on the CIFAR-10/100 and Tiny ImageNet datasets.
rejected-papers
The paper proposes a regularisation technique based on Shake-Shake which leads to state-of-the-art performance on the CIFAR-10 and CIFAR-100 datasets. Despite good results on CIFAR, the novelty of the method is low, justification for the method is not provided, and the impact of the method on tasks beyond CIFAR classification is unclear.
test
[ "r1Pm9nUNG", "r1HFPmSgG", "HkAAvk0gf", "HyI9Lxf-z", "Sy8VrBpXf", "HJ2pqLhQf", "ryGlO75QG", "Skv0E75Xz", "H11n1Xcmz", "BknEhzqmf", "HkmBcxr-f", "Sy3adjgWz", "r1F6F5Ygf", "S1rHOM-gz", "B16xxZzJf", "SyrG-ql1f" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "public", "author", "public", "author", "public" ]
[ "---\nRegd. the 'factual errors':\n\n1. My original review said \"the proposed method is *fundamentally* a combination of prior work\" --- in that the underlying ideas had been introduced before in prior work (dropout & shake shake), not that the proposed method involved literally applying a combination of dropout and shake shake. As the paper notes, shake shake and dropout come out to be special cases of the proposed method (if one chose b_l to always be 0 with probability 1, or if one chose alpha to be = 0). My point was that the framework can be seen as a combination of the underlying mechanisms in both.\n\n2. Attenuation = weighting by alpha. This would be shake shake (where the residual path is just weighted by alpha that is uniformly distributed in some range, rather than the proposed method where the attenuation/weighting is only applied if the randomly sampled bernoulli variable is 1).\n--\n\nThe proposed method is easy to understand, and the new experiments are certainly welcome (although, I think the evaluation remains unconvincing without experiments on a large scale task such as Imagenet). However, I just don't think the contribution is novel enough to be accepted as a paper at ICLR. Therefore, I'm inclined to stay with my original evaluation/score.\n", "This paper proposes a regularization technique for deep residual networks. It is inspired by regularization techniques which disturb the training by applying multiplicative factors to the convolutional layer outputs e.g Shake-Shake (Gastaldi '17) and PyramidDrop (Yamada '16). The proposed approach samples a Bernoulli variable randomly to either follow the standard variant of Pyramid net, or applies a variant of shake-shake to pyramid net.\n\n+ Experimental results on CIFAR-10 and CIFAR-100 well-exceed exceed the existing \"vanilla\" techniques + regularizers. \n- Clarity: some statements are not clear / not substantiated e.g. how does the proposed method overcome the memory problem that shake-shake has? There are some minor issues wrt presentation, e.g. grammatical correctness of sentences, consistent usage of references, which can be fixed with more careful proofreading.\n- Quality: even though the experimental results are compelling, the paper lacks thorough analysis in understanding the effects of the regularizer. The two experiments looks at (1) the training error, which the paper openly states does not explain why the proposed regularization works and (2) variance of the gradients throughout learning; the larger variance of gradients is speculated to be the cause, but this is almost expected, given that the method is designed to allow larger fluctuations and perturbations during training.", "The paper proposes a new form of regularization that is an extension of \"Shake-Shake\" regularization (Gastaldi, 2017). The original \"shake-shake\" proposes using two residual paths adding to the same output (so x + F_1(x) + F_2(x)), and during training, considering different randomly selected convex combinations of the two paths (while using an equally weighted combination at test time). However, this paper contends that this requires additional memory, and attempt to achieve similar regularization with a single path. To do so, they train a network with a single residual path, where the residual is included without attenuation in some cases with some fixed probability, and attenuated randomly (or even inverted) in others. 
The paper contends that this achieves superior performance than choosing simply a random attenuation for every sample (although, this can be seen as choosing an attenuation under a distribution with some fixed probability mass at 1). Experiments show improved generalization on CIFAR-10 and CIFAR-100.\n\nI don't think the paper contains sufficiently novel elements to be accepted as a conference track paper at ICLR. While it is interesting that this works well (especially the \"negative\" weight on the residual), the proposed method is fundamentally a combination of prior work: dropout and \"shake-shake\" regularization. Moreover, the evaluation is somewhat limited---essentially, I feel there isn't conclusive proof that \"shake-drop\" is a generically useful regularization technique. For one, the method is evaluated only on small toy-datasets: CIFAR-10 and CIFAR-100. I think at the very least, evaluation on Imagenet is necessary. The proposed regularization is applied only to the \"PyramidNet\" architecture---which begs the question of whether the proposed regularization is useful only for this specific network architecture. It would have been more useful to see results with and without \"shake-drop\" on different architectures (the point being to show a consistent improvement with this regularization, rather than achieving 'state of the art' on CIFAR-10). Moreover, it would be interesting to see if the hyperparameter comparison shown in Tables 1 and 2 remained consistent across architectures.", "The paper proposes ShakeDrop regularization, which is essentially a combination of the PyramidDrop and Shake-Shake regularization. The procedure consists of essentially weighting the residual branch with a random weight, in the style of Shake-Shake, where the weight is sampled from a mixture of uniform distribution in [-1, 1] and delta at 1, such that the mixture of those two distributions varies linearly with layer depth, in the style of PyramidDrop. In the style of Shake-Shake, a different random weight (in [0, 1]) is used for the backward pass. The most surprising part is that that the forward weight can be negative thus inverting the output of a convolution. Apparently the goal is to \"disturb\" the training, and the procedure yields state-of-the-art results on CIFAR-10/100.\n\nPositives:\n\n- Results: state-of-the-art on CIFAR-10/100\n\nNegatives:\n\n1. No real motivation on why should this work. I guess the motivation is the mixture of PyramidDrop and Shake-Shake motivations, but the main surprising part (forward weight can be negative) is not motivated at all. There is a tiny bit of discussion at the very end, section 4.4, where authors examine the training loss (showing it's non-zero so less overfitting) and mean/variance of gradients (increased). However, this doesn't really satisfy me - it is clear that more disturbance will cause this behaviour, but that doesn't mean any disturbance is good, e.g. if I always apply the negative weight and make my model weights go in the wrong direction, I'm pretty sure training loss and gradients will be even larger, but it's a bad idea to do.\n\n2. I'm concerned with the \"weird trick that happens to work on CIFAR\" line of work (not saying that this paper is the only offender) - are these methods actually useful and generalizable to other problems, or are we overfitting on CIFAR and creating MNIST v2.0 ? 
It would be nice to demonstrate that this regularization works in at least one more problem, maybe ImageNet, though maybe regularization is not needed there but just find one more dataset that needs regularization and test this on that.\n\n3. The paper doesn't explain well what is the problem with Shake-Shake and memory. I see that the author of Shake-Shake has made a comment on this and that makes a lot of sense, i.e. there is no memory issue, just because there are 2x branches doesn't mean shake-shake needs 2x memory as it can use less capacity=memory to achieve the same performance. So it seems the main premise of the paper - \"let's apply Shake-Shake to deeper models but we need to come up with a modified method because Shake-Shake cannot be applied due to memory problems\" - seems wrong.\n\n4. The writing quality is quite bad, it is very hard to understand what authors mean in parts of the text. E.g. at two places \"it has almost the same residual block as Eqn. (1)\" - how is it \"almost\"? Below equation 5, it is never specified that alpha and beta are sampled uniformly(?) from those ranges, one could think that alpha and beta are fixed constants that take a specific value that is in that range. There are also various grammatical errors such as \"is expected to be powerful but slight memory overhead\" or \"which is introduced essence\", etc.\n\nSmaller comments:\n- Isn't it surprising that alpha in [-1, 1] and beta in [0, 1] works well, but alpha in [0, 1] and beta in [-1, 1] works much worse? The two important cases, (alpha negative, beta positive) and (alpha positive, beta negative), seem to me like they are conceptually very similar.\n- End of section 4.1, should it be b_l as p_L is a constant and b_l is what is sampled?\n- I don't like that exactly the same text is repeated 3 times (abstract, end of intro, end of 1.1) and in very short distance from each other - repeating the same words 3 times doesn't make the reader understand it better, slight rephrasing is much more beneficial.\n\nOverall:\nGood to know that this method sets the new state of the art on CIFAR-10/100, so as such it should be of interest to the community to be available online (arXiv). But with fairly little novelty (is a combination of 2 methods), very little insights of why this should work at all (especially the negative scaling coefficient which is the only extra thing that one learns from this paper, since the rest is a combination of PyramidDrop and Shake-Shake), no idea on whether the method would work outside of the CIFAR-world, and bad quality of the text - I don't think the manuscript is sufficiently good for ICLR.\n\n", "We have cleaned up \"bugs\" in the revised paper.", "We found some errors that should be corrected. Now we are revising it and will upload a further revised version of the paper today.", "Thank you very much for your review comments. \n\nHere are our responses to your comments.\n\n- Clarity: Based on your comments, we substantially improved the paper. In the revised paper, we more clearly state the motivation including the memory issue, the problem we tackled and its difficulty, idea to solve the problem, interpretation of Shake-Shake regularization to derive the proposed regularization method, and experimental results including the condition of base network architectures to apply the proposed ShakeDrop into greater details.\n\n- Quality: We found that the results in question do not have so informative to explain the phenomenon. 
So, in the revised paper, we added further consideration regarding the range parameters (alpha and beta) and the condition of base network architectures to apply the proposed ShakeDrop.\n", "We appreciate your valuable feedback.\n\nWe found factual errors in your comment.\n(1) The proposed method is not a combination of dropout and shake-shake.\nWe agree that a combination of “dropout” and shake-shake regularization is trivial. But, it is not what we did. In the proposed ShakeDrop, we used ResDrop in a different usage from the usual. ResDrop is not used for dropping some layers as in the original paper. Instead, the mechanism of ResDrop is used as a probabilistic switch of two networks. We show in the paper that such usage of ResDrop contributes to stabilize a network hard to train. This is novel and must be informative in the community. Furthermore, in the revised paper, we present how the problem is not trivial and greater details about our interesting findings.\n(2) While this is not a clear error, we are afraid that we cannot get which method you mean by saying “choosing simply a random attenuation for every sample” of the following sentence: “The paper contends that this achieves superior performance than choosing simply a random attenuation for every sample (although, this can be seen as choosing an attenuation under a distribution with some fixed probability mass at 1).“\n\nWe found your comments are reasonable. So, based on your comments, we extended experiments in two aspects in the revised paper.\n(1) The proposed ShakeDrop has been successfully applied to ResNet (EraseReLU version), Wide ResNet (with batch normalization added in the end of residual blocks) and ResNeXt (EraseReLU version) since we found that batch normalization is required to be at the end of residual blocks.\n(2) In addition to CIFAR-10/100 datasets, we confirm the effectiveness of the proposed ShakeDrop through experiments on Tiny ImageNet dataset. Unfortunately, experiments on ImageNet dataset was not possible in time with our computational resources.\n\nWe hope you find our revised paper is valuable.\n", "We appreciate your detailed review comments. Based on your feedback, we substantially improved the paper.\nWe believe that our contribution is not limited to achieving the state of the art of CIFAR-10/100 but also providing interesting insight to the community.\n\nFirst of all, we would like to point out a factual error which might be caused by our paper quality.\nRegarding the novelty, we understand that the reviewer regards the proposed ShakeDrop as a simple combination of two methods (ResDrop and Shake-Shake). Though apparently it could be seen like that, it is not true. To clarify it, we enumerate what we contribute for proposing ShakeDrop.\n(1) As Shake-Shake does not work on a single residual branch, we proposed a new regularization method working on a single residual branch (used in the intermediate method (“1-branch Shake”; previously we called it PyramidShake)). While it is inspired by Shake-Shake, it is completely different one.\n(2) We used ResDrop in a different usage from the usual. In the original paper, ResDrop is used for dropping some layers. Instead, we used it as a switch of two networks. We demonstrated in the paper that such usage of ResDrop contributes to stabilize a network hard to train like “1-branch Shake.”\n\nThe following are responses to negative aspects of the paper in your comments.\n\n1. 
Motivation\nAt the time of the initial submission, our motivation was to propose an effective and memory efficient regularization method applicable to PyramidNet because PyramidNet was the best network architecture on CIFAR-10/100 datasets.\nAfter reading review comments, we slightly updated our motivation. That is, in the revised paper, the target network architectures are not only PyramidNet but also ResNet, Wide ResNet and ResNext.\nThough the negative forward weight could be sensational, it is not our central contribution (we explained in the revised manuscript into greater detail). We added consideration on why the negative forward weight works well on the proposed ShakeDrop.\n\n2. Experiments on different datasets\nIt is understandable reaction to request experiments on different datasets. We added experiments on Tiny ImageNet dataset. While we understand experiments on ImageNet dataset are better, we found it is not possible to complete them in time with our computational resources.\n \n3. Memory issue of Shake-Shake\nIt seems our explanation was not appropriate. As we responded to the post by the author of Shake-Shake, out intention is not taking into account only learnable parameters but the total memory consumption.\n\nAnyway, out intention is as follows (as is written in the revised paper). Shake-Shake is designed to take a weighted sum of outputs of two residual branches. So, it requires at least two branches in a layer to apply. Due to this, it can be applied only to ResNeXt (having multiple branches) and it requires more memory to make the network deep than networks with a single residual branch in a layer.\n\n4. Writing quality\nWe are very sorry about it. We tried our best to correct such issues.\n\nRespond to “Smaller comments”\n- (alpha negative, beta positive) and (alpha positive, beta negative) are conceptually same?\nIt is a reaction we expected. The answer is no. We tried our best to explain this in the revised paper.\n\n- End of section 4.1, should it be b_l as p_L is a constant and b_l is what is sampled?\nThanks for pointing this out. It is a typo. It should have been b_l.\n\n- Exactly the same text is repeated 3 times\nWe are sorry for this. It is also solved in the revised paper.\n", "We substantially improved the paper. In the revised paper, we updated as follows.\n- Added experiments on different network architectures; not only PyramidNet, but also ResNet, WRN and ResNeXt\n- Added experiments on a new dataset “Tiny ImageNet”\n- Added consideration about parameters (alpha and beta)\n- Revised introduction to fit the updated; the paper does not focus only on PyramidNet anymore\n- The intermediate method specially focusing on PyramidNet, named PyramidShake, was deleted. Instead, we named an intermediate regularization method “1-branch Shake”\n- Improved paper writing quality\n", "We are afraid that you don't correctly understand what we claim.\nWe know that we can keep the number of parameters by adjusting Cardinality and baseWidth on ResNeXt.\nBut, we don't talk about the number of parameters solely.\nInstead, we talk about memory consumption.\nAmount of required memory depends on not only the number of learnable parameters but also other factors (such as the input of each layer for calculation of gradients on the backward pass). They cause the overhead we pointed out.\n\nWe found that our paper in the current form is not correctly understood by readers. So, we are improving it. 
Please wait for a while for the revised version.\n\nAnyway, thanks for your interest to our paper.", "If I understand correctly, the authors assume that, for Shake-Shake regularization to bring an improvement, you have to keep the same number of filters, add another branch and then apply Shake-Shake regularization. \n\nWhile I can understand why the authors would assume that, the tests below paint a different story. The models are the same as in the Shake-Shake regularization paper. They were run once and Shake-Shake regularization was not applied (i.e. Even-Even-Batch in the paper):\n\nA. 26 layers, 1 residual banch, 32 filters, 1.47M params: 4.69% test error\nB. 26 layers, 2 residual banches, 22 filters, 1.37M params: 4.65% test error\nC. 26 layers, 1 residual banch, 22 filters, 0.696M params: 5.35% test error\nD. 26 layers, 2 residual banches, 16 filters, 0.736M params: 5.11% test error\nE. 26 layers, 1 residual banch, 16 filters, 0.369M params: 5.98% test error\nF. 26 layers, 2 residual banches, 12 filters, 0.416M params: 5.59% test error\nG. 26 layers, 1 residual banch, 12 filters, 0.209M params: 7.12% test error\nH. 26 layers, 2 residual banches, 8 filters, 0.186M params: 7.24% test error\n\n[B,C],[D,E],[F,G] have the same number of filters per residual branch.\n[A,B],[C,D],[E,F],[G,H] have roughly the same capacity.\n\nIf the author's claim was correct then we would observe the same error rates for [B,C],[D,E],[F,G]. What we see in practice is that [A,B],[C,D],[E,F],[G,H] have roughly the same error rates. This means that what is important is the total capacity of the model not the number of filters per residual branch. \n\nTo apply Shake-Shake regularization correctly, you should add a second branch, reduce the number of filters to get back to the capacity of your initial 1 branch model and then apply Shake-Shake regularization. Following this procedure does not lead to a memory issue.", "Roughly speaking, Shake-Shake requires as twice the amount of memory as ResNet on a residual block due to twice the number of residual branches. ShakeDrop can solve the issue by using a single residual branch (this corresponds to PyramidShake). Since PyramidShake is unstable in learning, we combined it with ResDrop to stabilize it.", "Can you please explain in a few more words what is the memory issue with Shake-Shake networks? Namely, in which way Shake-Shake network is different in memory consumption from, say, a ResNet. And how does Shake-Drop addresses/solves this problem. Thank you.", "Thank you very much for your inquiry. \nThe answer of your question is the former.\nThat is, we sample b_l on the forward pass and reuse it on the backward pass.\n\nRegarding Fig. 2(d), yes, what you have pointed out is correct.\nWe will revise it on a later version.\nThank you very much for pointing out the mistake. ", "Quick question, do you sample one bernoulli variable b_l during the forward pass and then save it and use it again on the backward pass, or are the bernoulli variables independently sampled on both the forward and backward passes? Thanks.\n\nAdditionally, there would appear to be an error in Figure 2(d) for the backward pass, where it has the equation as (b_l + \\beta - b_l), instead of (b_l + \\beta - (b_l * \\beta))." ]
[ -1, 5, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, 2, 4, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "Skv0E75Xz", "iclr_2018_S1NHaMW0b", "iclr_2018_S1NHaMW0b", "iclr_2018_S1NHaMW0b", "iclr_2018_S1NHaMW0b", "BknEhzqmf", "r1HFPmSgG", "HkAAvk0gf", "HyI9Lxf-z", "iclr_2018_S1NHaMW0b", "Sy3adjgWz", "r1F6F5Ygf", "S1rHOM-gz", "iclr_2018_S1NHaMW0b", "SyrG-ql1f", "iclr_2018_S1NHaMW0b" ]
iclr_2018_rJ6iJmWCW
POLICY DRIVEN GENERATIVE ADVERSARIAL NETWORKS FOR ACCENTED SPEECH GENERATION
In this paper, we propose the generation of accented speech using generative adversarial networks. Through this work we make two main contributions: a) the ability to condition latent representations while generating realistic speech samples, and b) the ability to efficiently generate long speech samples by using a novel latent variable transformation module that is trained using policy gradients. Previous methods are limited to generating relatively short samples or are not very efficient at generating long samples. The generated speech samples are validated through several evaluation measures, viz. a WGAN critic loss, subjective scores from user evaluations against competitive speech synthesis baselines, and a detailed ablation analysis of the proposed model. The evaluations demonstrate that the model efficiently generates realistic long speech samples conditioned on accent.
rejected-papers
The paper proposes a method for accented speech generation using GANs. The reviewers have pointed out problems in the justification of the method (e.g. the need for using policy gradients despite a differentiable objective) as well as in its evaluation.
train
[ "rJfNnSxez", "rkKtPpFxz", "SkxuGmyZG", "BJBuWyf4z", "r1IP74-Ez", "B1LlRgbVM", "HJdYIP6Qz", "H1m-DPpmf", "SkuDwwaQf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author" ]
[ "This paper presents a method for generating speech audio in a particular accent. The proposed approach relies on a generative adversarial network (GAN), combined with a policy approach for joining together generated speech segments. The latter is used to deal with the problem of generating very long sequences (which is generally difficult with GANs).\n\nThe problem of generating accented speech is very relevant since accent plays a large role in human communication and speech technology. Unfortunately, this paper is hard to follow. Some of the approach details are unclear and the research is not motivated well. The evaluation does not completely support the claims of the paper, e.g., there is no human judgment of whether the generated audio actually matches the desired accent.\n\nDetailed comments, suggestions, and questions:\n- It would be very useful to situate the research within work from the speech community. Why is accented modelling important? How is this done at the moment in speech synthesis systems? The paper gives some references, but without context. The paper from Ikeno and Hansen below might be useful.\n- Accents are also a big problem in speech recognition (references below). Could your approach give accent-invariant representations for recognition?\n- Figure 1: Add $x$, $y$, and the other variables you mention in Section 3 to the figure.\n- What is $o$ in eq. (1)?\n- Could you add a citation for eq. (2)? This would also help justifying that \"it has a smoother curve and hence allows for more meaningful gradients\".\n- With respect to the critic $C_\\nu$, I can see that it might be helpful to add structure to the hidden representation. In the evaluation, could you show the effect of having/not having this critic (sorry if I missed it)? The statement about \"more efficient layers\" is not clear.\n- Section 3.4: If I understand correctly, this is a nice idea for ensuring that generated segments are combined sensibly. It would be helpful defining with \"segments\" refer to, and stepping through the audio generation process.\n- Section 4.1: \"using which we can\" - typo.\n- Section 5.1: \"Figure 1 shows how the Wasserstein distance ...\" I think you refer to the figure with Table 1?\n- Figure 4: Add (a), (b) and (c) to the relevant parts in the figure.\n\nReferences that might be useful:\n- Ikeno, Ayako, and John HL Hansen. \"The effect of listener accent background on accent perception and comprehension.\" EURASIP Journal on Audio, Speech, and Music Processing 2007, no. 3 (2007): 4.\n- Van Compernolle, Dirk. \"Recognizing speech of goats, wolves, sheep and… non-natives.\" Speech Communication 35, no. 1 (2001): 71-79.\n- Benzeghiba, Mohamed, Renato De Mori, Olivier Deroo, Stephane Dupont, Teodora Erbes, Denis Jouvet, Luciano Fissore et al. \"Automatic speech recognition and speech variability: A review.\" Speech communication 49, no. 10 (2007): 763-786.\n- Wester, Mirjam, Cassia Valentini-Botinhao, and Gustav Eje Henter. \"Are We Using Enough Listeners? No!—An Empirically-Supported Critique of Interspeech 2014 TTS Evaluations.\" In Sixteenth Annual Conference of the International Speech Communication Association. 2015.\n\nThe paper tries to address an important problem, and there are good ideas in the approach (I suspect Sections 3.3 and 3.4 are sensible). Unfortunately, the work is not presented or evaluated well, and I therefore give a week reject.\n", "The contributions made by this paper is unclear. 
As one of the listed contributions, the authors propose using policy gradient. However, in this setting, the reward is a known differentiable function, and the action is continuous, and thus one could simply backpropagate through to get the gradients on the encoder. Also, it seems the reward is not a function of the future actions, which further questions the need for a reinforcement learning formulation.\n\nThe paper is written poorly. For instance, I don't understand what this sentence means: \"We condition the latent variables to come from rich distributions\". Observed accent labels are referred to as latent (hidden) variables.\n\nWhile the independent Wasserstein critic is useful to study whether models are overfitting (by comparing train/heldout numbers), their use for comparing across different model types is not justified. Moreover, since GAN-based methods optimize the Wasserstein distance directly, it cannot serve as a metric to compare GAN-based models with other models.\n\nAll of the models compared against do not use accent information during training (table 2), so this is not a fair comparison.\n\nOverall, the paper lacks any novel technical insight, contributions are not explained well, exposition is poor, and the evaluations are invalid.", "The paper considers speech generation conditioned on an accent class.\nLeast Squares GAN and a reconstruction loss is used to train the network.\n\nThe network is using continuous latent variables. These variables are trained by policy gradients.\nI do not see a reason for the policy gradients. It would be possible to use the cleaner gradient from the discriminator.\nThe decoder is already trained with gradient from the discriminator.\nIf you are worried about truncated backpropagation through time,\nyou can bias it by \"Unbiasing Truncated Backpropagation Through Time\" by Corentin Tallec and Yann Ollivier.\n\n\nComments on clarity:\n- It would be helpful to add x, z, y, o labels to the Figure 1.\nI understood the meaning of `o` only from Algorithm 1.\n- It was not clear from the text what is called the \"embedding variable\". Is it `z`?\n- It is not clear how the skip connections connect the encoder and the decoder.\nAre the skip connections not used when generating?\n- In Algorithm 1, \\hat{y}_k is based on z_k, instead of \\hat{z}_k. That seems to be a typo.\n\nComments on evaluation:\n- It is hard to evaluate speech conditioned just on the accent class.\nOverfitting may be unnoticed.\nYou should do an evaluation on a validation set.\nFor example, you can condition on a text and generate samples\nfor text sentences from a validation set.\nPeople can then judge the quality of the speech synthesis.\nA good speech synthesis would be very useful.\n", "Ah, thank you. The figure caption is a bit confusing since you describe it as a \"preference\" rather than saying that you compare to a reference accent (as you do in the first par. of Section 5.2), but I think you have answered the question.", "Thanks for getting back to us. If you would look at Figure 4, the presented graph is the weighted preference (plotted to show the difference between models explicitly). We describe what exactly has been plotted in the graph in section 5.2 (the last paragraph being most relevant). We decided not to report the actual values for space considerations, and hoped the representation would help us to get the point across better. We apologise for the confusion this might have caused. ", "Thanks for responding to the review. 
I did spot in the original paper \"they were also asked to mark on a numeric scale which of the two samples they thought was closer to the accent in the reference samples,\" but I could not find these result in the original paper nor in the revised version. Again, apologies if I am just missing these.", "We thank the reviewer for the valuable suggestions. \n\nUsefulness of policy gradients for continuous variables: \n\nWe thank the reviewer for raising this question, as we should have included a discussion regarding this in the paper. (Indeed, given this question from two reviewers, a contribution of our work could be seen as showcasing the relevance of policy gradients even when the variables involved are continuous.)\n\nIn the early stages of our project, we did experiment with plain back-propagation, as the reviewer suggested. But we observed that the resulting generated samples were of very poor quality. (We have uploaded a few samples from such a model at http://ec2-13-126-31-173.ap-south-1.compute.amazonaws.com:5000/ alongside samples generated by our proposed approach.) Hence we clearly needed techniques beyond plain back-propagation. Policy gradients appealed to us as we could readily adopt it to our setting, and immediately it gave us improvements over the original approach (with high quality utterances up to 12s long). Further, it did not add any significant computational overhead.\n\nWe have added this discussion in the revised version.\n\nWe do not deny the possibility that other recent approaches developed for similar purposes could also be adopted to our task, but the goal of this work has been to report the very significant improvements we achieved by adopting policy gradients. \n\n\nComments on clarity: These have all been addressed in the revised version.\n\nComments on evaluation: Conditioning on text and synthesizing accented speech is indeed part of our future work. Given the additional technical challenges involved, we have considered this to be outside the scope of the current work. \n\nThe remaining comments on improving clarity have been addressed in the revised version.", "The reviewer has made several objections. We agree to the extent that the exposition could have been improved. We would like to answer the other concerns below.\n\nNeed for policy gradients: As we detailed in an answer to the first reviewer, simple back-propagation as the reviewer suggests demonstrably fails. Using policy gradients overcame the drawbacks of directly using back-propagation, without introducing significant computational overheads. While we had carried out extensive experimentation on this aspect, we omitted it entirely in our submission. We shall incorporate this in the revised version.\n\nUse of an independent Wasserstein critic to compare across models: We do not agree with the reviewer’s contention that using an independent Wasserstein critic to compare across models is unjustified. Not only is it a natural approach, but also one of the main uses detailed in Danihelka et al. To quote from the paper: “If we use the independent critic, we can compare generators trained by other GAN methods or by different approaches.”\n\nTable 2 Comparisons: The GAN models from the literature we compare with did not provide a means to incorporate accent information during training. Nevertheless, they were trained on data from a mix of accents identical to that in the validation/test data. 
So the additional data that our models were given corresponds to less than 5 bits per utterance, and this was essential for a harder task (of being able to generate speech in given accents) that is not captured in Table 2.\n\nWe have tried to improve the presentation by removing some mysterious sounding phrasings (that resulted from using the vocabulary from our internal discussions). We apologize for any confusion they may have caused.\n\nWe urge the reviewer to kindly reconsider their impression of the paper in light of our response.\n\nThe remaining comments on improving clarity have been addressed in the revised version.\n", "We thank the reviewer for all the valuable inputs. \n\nMotivation: The reviewer’s point about motivating the problem of generated accented speech further is well received and we thank the reviewer for pointing us to some relevant references. The revised version now contains a more detailed motivation.\n\nAccent-invariant representations for recognition: Our proposed approach could indeed be used to generate representations that can be incorporated within speech recognition systems for accented speech. This is a direction we intend to explore as future work and we consider this to be outside the scope of the current work.\n\nHuman judgment of whether the generated audio matched the desired accent: We did actually conduct such a study. Section 4.2.2 describes the setup of our human evaluation study where we asked participants to listen to reference samples corresponding to a specific accent and then rate on a numeric scale how close a generated sample was to the accent in the reference sample. \n\nEffect of not adding structure to the latent representation: This is discussed in Table 2 which shows Wasserstein distances from an independent critic on different ablations of AccentGAN. PolicyGAN is identical to AccentGAN except there is no conditioning of the latent variables. We observe that PolicyGAN performs poorly in comparison to AccentGAN.\n\nThe remaining comments on improving clarity will be addressed in the revised version.\n" ]
[ 5, 3, 4, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_rJ6iJmWCW", "iclr_2018_rJ6iJmWCW", "iclr_2018_rJ6iJmWCW", "r1IP74-Ez", "B1LlRgbVM", "SkuDwwaQf", "SkxuGmyZG", "rkKtPpFxz", "rJfNnSxez" ]
iclr_2018_H1cKvl-Rb
UCB EXPLORATION VIA Q-ENSEMBLES
We show how an ensemble of Q∗-functions can be leveraged for more effective exploration in deep reinforcement learning. We build on well established algorithms from the bandit setting, and adapt them to the Q-learning setting. We propose an exploration strategy based on upper-confidence bounds (UCB). Our experiments show significant gains on the Atari benchmark.
rejected-papers
The idea studied here is interesting, if incremental. The empirical results are not particularly stellar, but it's clear that the authors have done their best to provide reproducible and defensible results. A few sticking points: a) The use of the term 'UCB', as mentioned in an anonymous comment, is somewhat misleading. "Approximate Confidence Interval" might be less controversial; b) there are a number of recent research results on exploration that are worth paying attention to (Plappert et al, O'Donoghue et al.) and worth comparing to, and c) the theoretical results are not always justified or useful (e.g. Equation 9: the bound is trivial, posterior >= 0 or 1).
train
[ "rkTJ2wYeG", "B13fzyclG", "r1prpe1ZM", "BkyyDvTXG", "SkOHUDamG", "S1S54DamM", "SyQS9Go7f", "SJMbiMiXf", "BkI9qfsmG", "HyqLPh9Qz", "H14cttFXM", "HySlgMqMf", "rysAtjUMz", "r1w5R7Ufz", "SJsA9mLfz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "public", "public", "official_reviewer", "public", "public" ]
[ "This paper paper uses an ensemble of networks to represent the uncertainty in deep reinforcement learning.\nThe algorithm then chooses optimistically over the distribution induced by the ensemble.\nThis leads to improved learning / exploration, notably better than the similar approach bootstrapped DQN.\n\nThere are several things to like about this paper:\n- It is a clear paper, with a simple message and experiments that back up the claims.\n- The proposed algorithm is simple and could be practical in a lot of settings and even non-DQN variants.\n- It is interesting that Bootstrapped DQN gets such poor performance, this suggests that it is very important in the original paper https://arxiv.org/abs/1602.04621 that \"ensemble voting\" is applied to the test evaluation... (why do you think this is by the way, do you think it has something to do with the data being *more* off-policy / diverse under a TS vs UCB scheme?)\n\nOn the other hand:\n- The novelty/scope of this work is somewhat limited... this is more likely (valuable) incremental work than a game-changer.\n- Something feels wrong/hacky/incomplete about just doing \"ensemble\" for uncertainty without bootstrapping/randomization... if we had access to more powerful optimization techniques then this certainly wouldn't be sensible - I think that you should mention that you are heavily reliant on \"random initialization + SGD/Adam + specific network architecture\" to maintain this idea of uncertainty. For example, this wouldn't work for linear value functions!\n- I think the original bootstrapped DQN used \"ensemble voting\" at test time, so maybe you should change the labels or the way this is introduced/discussed. It's definitely very interesting that *essentially* the learning benefit is coming from ensembling (rather than \"raw\" bootstrapped DQN) and UCB still looks like it does better.\n- I'm not convinced that page 4 and the \"Bayesian\" derivation really add too much value to this paper... alternatively, maybe you could introduce the actual algorithm first (train K models in parallel) and then say \"this is similar to particle filter\" and add the mathematical derivation after, rather than as if it was some complex formula derived. If you want to reference some justification/theory for ensemble-based uncertainty approximation you might consider https://arxiv.org/pdf/1705.07347.pdf instead.\n- I think this paper might miss the point of the \"bigger\" problem of efficient exploration in RL... or even how to get \"deep\" exploration with deep RL. Yes this algorithm sees improvements across Atari, but it's not clear why/if this is a step change versus simply increasing the amount of replay or tuning the learning rate. (Actually I do believe this algorithm can demonstrate deep exploration... but it looks like we're not seeing the big improvements on the \"sub-human\" games you might hope.)\n\nOverall I do think this is a pretty good short paper/evaluation of UCB-ensembles on Atari.\nThe scope/insight of the paper isn't groundbreaking, but I think it delivers a clear short message on the Atari benchmark.\nPerhaps this will encourage people to dig deeper into some of these issues... I vote accept.\n", "The authors propose a new exploration algorithm for Deep RL. They maintain an ensemble of Q-values (based on different initialisations) to model uncertainty over Q. 
The ensemble is then used to derive a confidence interval at each step, which is used to select actions UCB-style.\n\nThere is some attempt at a Bayesian interpretation for the Bellman update. But to me it feels a bit like shoehorning the probabilistic interpretation into an already existing update - I’m not sure this is justified and necessary here. Moreover, the UCB strategy is generally not considered a Bayesian strategy, so I wasn’t convinced by the link to Bayesian RL in this paper.\n\nI liked the actual proposed method otherwise, and the experimental results on Atari seem good (but see also latest SOTA Atari results, for example the Rainbow paper). Some questions about the results:\n-How does it perform compared to epsilon-greedy added on top of Alg1, or is there evidence that this produces any meaningful exploration versus noise? \n-How does the distribution of Q values look like during different phases of learning?\n-Was epsilon-greedy used in addition to UCB exploration? Question for both Alg 1 and Alg 2.\n-What’s different between Alg 1 and bootstrapped DQN (other than the action selection)?\n\nMinor things:\n-Missing propto in Eq 7?\n-Maybe mention that the leftarrows are not hard updates. Maybe you already do somewhere…\n-it looks more a Bellman residual update as written in (11).\n", "This paper introduces a number of different techniques for improving exploration in deep Q learning. The main technique is to use UCB (upper confidence bound) to speedup exploration. The authors also introduces \"Ensemble voting\" facilitate exploitation.\n\nThis paper shows improvement over baselines. But does not seem to offer significant insight or dramatic improvement. The techniques introduced are a small permutation of previous results. The baselines are not particularly strong either.\n\nThe paper appeared to have be rushed. The presentation is not always clear.\n\nI also have the following questions I hope the authors could help me with:\n\n1. I failed to understand how Eqn (5). Could you please clarify.\n\n2. What is the significance of the math introduced in section 3? All that was proposed was: (1) Majority voting, (2) UCB exploration.\n\n3. Why comparing to A3C+ which is not necessarily better than A3C in final performance?\n\n4. Why not comparing to Bootstrapped DQN since the proposed method is based on it?\n\n5. Why is the proposed method better than Bootstrapped DQN, since UCB does not necessarily outperform Thompson sampling in the case of bandits?\n\n6. If there is a section on INFOGAIN exploration, why not mention it in the main text?", "Dear reviewers, we have taken your feedback into account and revised the manuscript. A new manuscript has been uploaded. ", "We would like to thank you for reproducing and validating the results of our paper. In our implementation, gradients from the multiple heads are first averaged before passing into the gradient update of the convolutional layers. ", "We would like to thank you for reproducing and validating the results of our paper. Regarding the hyperparameter $\\lambda$ in UCB exploration, it is set to $\\lambda = 0.1$ uniformly for all games evaluated as stated in the middle of Page 5 of the draft. No game-specific fine-tuning was done in the experiments.", "We thank the reviewer’s comments. We address in the following:\n\n1. We first comment that the improvement from our proposed methods is significant. 
We used a strong Double DQN baseline, which achieves competitive or better learning results trained with 40 million frames, compared with prior published results [Van Hasselt, et al, 2016] trained with 200 million frames. Improvement of proposed methods is significant over this strong Double DQN baseline. Table 2 in Appendix B shows that Ensemble Voting performs better than Double DQN in 37 out of 49 games evaluated, and UCB Exploration performs better than Double DQN in 38 out of 49 games evaluated. In addition, UCB Exploration performs better than Ensemble Voting in 35 out of 49 games evaluated. We will include such comparison in the results section. \n\n2. We also compared to bootstrapped DQN as shown in Figure 1, Figure 2, and the results Table 2 in Appendix B. \n\n3. A3C+ represents one line of research comprised of multiple works where the agent constructs exploration bonus based on state visitation counts. As discussed in Section 2.2, the exploration bonus from these methods does not depend on the reward, thus the exploration may focus on irrelevant aspects of the environment. In comparison, our exploration bonus depend on the Q values directly. We chose A3C+ to compare our method of reward-based exploration bonus against count-based exploration bonus and demonstrate that this reward/Q values-based approach of constructing exploration bonus is promising. \n\n4. Bootstrapped DQN samples one Q network from the ensemble applies it for a whole episode for exploration. We hypothesize that this intuition of deep exploration, by consistently using one Q function in each episode, does not guarantee that exploration is beneficial nor efficient. For example, each Q function deviates from the ensembled-Q. Although our proposed methods use the same network structure of bootstrapped DQN, the goal is very different: we build exploration bonus based on the uncertainty or discrepancy of Q ensembles. UCB exploration bonus is based on the uncertainty of Q ensembles and encourages the agent to reduce the uncertainty in Q values. \n\n5. The INFOGAIN section attempts another approach of exploration using Q-ensembles. However, the improvement of this method is less consistently across the board. This could be due to the approximations we made in constructing the INFOGAIN exploration bonus. We document the results of the experiment in the Appendix for potential future interest in this direction.\n\n\n6. We will modify/shorten the derivation on pages 3 and 4.\n", "We thank the reviewer’s comments and address in the following: \n\n1. Bootstrapped DQN samples one Q network from the ensemble applies it for a whole episode for exploration. We hypothesize that this intuition of deep exploration, by consistently using one Q function in each episode, does not guarantee that each Q function’s exploration is beneficial nor efficient. For example, each Q function deviates from the ensembled Q and accumulates inefficiency in a long episode. Although our proposed methods use the same network structure of bootstrapped DQN, the goal is very different: we build exploration bonus based on the uncertainty or discrepancy of Q ensembles. In UCB exploration, exploration bonus is based on the uncertainty of Q ensembles and encourages the agent to reduce the uncertainty in Q values. \n\n2. Ensemble uncertainty is due to Q networks being parametrized with deep neural networks, which introduces nonconvexity in Bellman update. 
Thus, even though the Q networks are trained with the same samples, their parameters do not converge to the same. We also experimented with training each Q network with independently sampled transitions from the reply buffer, and did not observe improved performance. We don’t think the optimization method (SGD/Adam) plays a key role. This phenomenon that bagging worsens the performance of deep ensembles is also observed in supervised training setting. [Lee et al, 2015] observed that supervised learning trained with deep ensembles with random initializations perform better than bagging for deep ensembles. [Balaji et al, 2017] used deep ensembles for uncertainty estimates and also observed that bagging deteriorated performance in their experiments. We will revise and clarify the source of uncertainty from the ensembles. \n\n3. We will modify/shorten the derivation on pages 3 and 4.\n\n4. On efficient exploration in RL, our proposed two algorithms use the Q functions directly while prior works construct exploration bonus using state-visitation counts, which are not tied to the rewards that agents seek to maximize. Our goal is to construct methods that reduce the inefficiency of prior algorithms where learning can be wasted on visiting irrelevant states. Thus, by improving upon bootstrapped DQN and comparing with state-visitation count-based methods such as A3C+, we demonstrate that this direction of exploration based on Q-values is promising, and different from hyperparameter tuning. Due to compute constraint, we trained the proposed algorithms on each game with 40 million frames, less than 200 million frames used in prior works. Thus games that typically require more frames to learn do not show big improvement in our experiments. \n\nReferences: \t\t\t\t \t\t\t\t\t\t\t\nS. Lee, S. Purushwalkam, M. Cogswell, D. Crandall, and D. Batra. Why M heads are better than one: Training a diverse ensemble of deep networks. arXiv preprint arXiv:1511.06314, 2015. \n\t\t\t\t\nLakshminarayanan, Balaji, Alexander Pritzel, and Charles Blundell. \"Simple and scalable predictive uncertainty estimation using deep ensembles.\" Advances in Neural Information Processing Systems. 2017.\t\t\t \t\t\t\t\n\t\t\t", "We thank the reviewer’s comment and address in the following:\n\n1. We will modify/shorten the derivation on pages 3 and 4.\n\n2. We observed the Q values for actions chosen according to Alg 1 and Alg 2. These Q values correspond to good actions. During learning, the Q values for such good actions gradually increase. The discrepancies between the Q values also increase in absolute values. But normalized by the mean Q value from different Q networks, the discrepancies gradually decrease. \n\n3. In Alg 1 and Alg 2, epsilon-greedy is not used, such that we can isolate the effects of exploration using Ensemble Voting or UCB exploration only. We did not experiment with adding epsilon-greedy on top of Alg 1, but agree that it will be an interesting experiment to see whether epsilon-greedy helps or hurts exploration on top of Alg 1. \n\n4. Besides action selection, bootstrapped DQN allows each Q network to be trained with different samples (using a masking mechanism), even though in bootstrapped DQN’s Atari experiments, all Q networks are trained using the same samples. In Alg 1, Q networks only use random initialization and trained with the same samples. We also experimented with training Q networks with independently drawn samples, which deteriorated the performance. 
\n", "We first comment on the significance of the performance improvement. The Double DQN baseline we used is a very fine-tuned baseline, achieving competitive rewards compared with prior published results such as the original Double DQN paper [Van Hasselt, et al. 2016]. In fact, in many games, our results trained with 40-million-frames are already higher than prior results trained with 200-million-frames. For common hyperparameters, our Ensemble Voting and UCB Exploration algorithms used the same as those in our Double DQN implementation. Both methods achieved significant improved performance over Double DQN. Table 2 in Appendix B shows that Ensemble Voting performs better than Double DQN in 37 out of 49 games evaluated, and UCB Exploration performs better than Double DQN in 38 out of 49 games evaluated. In addition, UCB Exploration performs better than Ensemble Voting in 35 out of 49 games evaluated. We will expand the Results section and include these comparisons. \n\nWe do assume that the reward function is deterministic given state and action. We will state out this assumption more clearly in the notations (Section 2.1). We will also replace `r` with `r(s, a)` to make the reward’s dependency on the state and action more clear. We will abbreviate the derivations on pages 3 and 4 and rewrite based on the feedback. \n\nRegarding the upper confidence bound, we use the different network to construct an empirical variance of the estimated Q values for each (s, a) combination. As the Q functions are parametrized by a deep neural network, they do not converge to the same parameters when initialized independently and trained with the same samples due to nonconvexity, thus leading to varied Q value empirical estimates from the Q networks. The uncertainty comes from the variance of the empirical estimates. \n\nWe also try constructing the empirical variance from Q networks initialized randomly and trained with samples drawn independently from the replay buffer. However, this approach does not lead to better performance compared with random initialization only. Thus we conclude the discrepancies in the Q-networks created by independent random initializations contain very useful information for exploration. \n", "We reproduce the experiments in the paper \"UCB EXPLORATION VIA Q-ENSEMBLES\" and verify the main conclusions. Our full report can be found at https://github.com/yifjiang/UCB-review/blob/master/Reproducing%20UCB%20EXPLORATION%20VIA%20Q-ENSEMBLES.pdf. Here is a summary of our work.\n\nThe original paper employed the UCB method on bootstrapped DQN and did an experiment on 49 Atari games. We implemented the baseline model, Double DQN, as well as one of the proposed models, UCB Exploration, upon OpenAI baseline models. Due to the constraints on time and computing resources, we attempted to replicate the results on one game (UpNDown). In addition, we also evaluated the models on a simpler environment, CartPole. We got similar results as the original paper on UpNDown. UCB Exploration outperforms Double DQN in this environment. However, the UCB Exploration method does not perform as well as Double DQN in the CartPole environment.\n\nOverall, Our experiments show that the original paper is reproducible. The hyperparameter table provided in the original paper greatly helps the reproduction and improves the soundness of the paper.\n", "We attempted to replicate this paper as part of the ICLR Reproducibility Challenge. 
We built our own implementation of this algorithm on top of OpenAI's existing Double DQN baseline. We attempted to replicate the results on three environments: Space Invaders, Breakout, and UpNDown. In the first two games our results appear to validate the paper's baseline-relative performance, although the specific scores we achieved were quite different. However on the UpNDown environment we were unable to achieve success using their algorithm, doing far worse than the baseline or results in other papers. The cause of our failure to replicate in UpNDown is still unclear. It's plausibly due implementation differences, such as in exactly how Adam was used train the convolutional layers shared by the multiple heads, or whether gradient normalization was in fact used. Based on the score differences between our baselines across all experiments, and since our UCB was implemented on top of our Double DQN baseline, implementation differences between baselines probably also play a role.\n\nThe full report is here: https://drive.google.com/file/d/1QVAmKK1ijZkYXeHxdyb6YrP0-K2CXQht/view?usp=sharing\nThe report appendix includes a link to our codebase.", "Sorry for the confusion, I meant *not* considered a Bayesian strategy of course... I've edited my review.", "Regarding your comment on\n\"There is some attempt at a Bayesian interpretation for the Bellman update. But to me it feels a bit like shoehorning the probabilistic interpretation into an already existing update - I’m not sure this is justified and necessary here. Moreover, the UCB strategy is generally considered a Bayesian strategy, so I wasn’t convinced by the link to Bayesian RL in this paper.\"\n\nAs you mentioned, it is not clear to me as well what is the purpose if the derivations on pages 3 and 4 where it ends up to equation (12). But regarding your latter statement, could you please point me to a reference which says UCB strategy is a Bayesian strategy?\n\nCheers", "In the main text, the authors mentioned that \n\"A sufficient condition for (8) is to maximize the lower-bound of the posterior distribution in (9) by ensuring the indicator function in (9) to hold.\" \nThe input to the indicator function looks like the second moment of a random variable. Could you elaborate when it happens? I am not sure it can be achieved for stochastic reward. Is that correct?\nFurthermore, the paper suggests that \"We can replace (8) with the update (10)\". Can you comment on why this is the case? Is it again for the deterministic reward? In Alg2, line 9, when you update according (12), you mean using TD update to reduce the Bellman residual? If the answer is yes, then I am not sure that I understood the message out of derivation in pages 3 and 4.\n\nThe authors introduce UCB exploration using Q-ensemble and mentioned that they extend the intuition of UCB algorithms in order to propose algorithm 2. But, as the reviewer 2 also touched upon it, I could not find a justification why the variance of k networks, trained using the same procedure but different initialization resembles upper confidence bound. Could you please comment on that?\n\nIn light of recent revelations in deep reinforcement learning (i.e. https://arxiv.org/pdf/1709.06560.pdf) and lack of significant improvement of the two proposed methods over DDQN, that would be helpful if the authors could comment about whether they feel their empirical results is an evidence of the significance of their methods.\n\nNotation. In many places in the equations, e.g. 
the first equation on page 2, the authors used r as the reward, but I guess it should be r(s,a). \n\nThanks.\n" ]
[ 6, 7, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 5, 4, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_H1cKvl-Rb", "iclr_2018_H1cKvl-Rb", "iclr_2018_H1cKvl-Rb", "iclr_2018_H1cKvl-Rb", "HySlgMqMf", "H14cttFXM", "r1prpe1ZM", "rkTJ2wYeG", "B13fzyclG", "SJsA9mLfz", "iclr_2018_H1cKvl-Rb", "iclr_2018_H1cKvl-Rb", "r1w5R7Ufz", "B13fzyclG", "iclr_2018_H1cKvl-Rb" ]
iclr_2018_B16yEqkCZ
Avoiding Catastrophic States with Intrinsic Fear
Many practical reinforcement learning problems contain catastrophic states that the optimal policy visits infrequently or never. Even on toy problems, deep reinforcement learners periodically revisit these states, once they are forgotten under a new policy. In this paper, we introduce intrinsic fear, a learned reward shaping that accelerates deep reinforcement learning and guards oscillating policies against periodic catastrophes. Our approach incorporates a second model trained via supervised learning to predict the probability of imminent catastrophe. This score acts as a penalty on the Q-learning objective. Our theoretical analysis demonstrates that the perturbed objective yields the same average return under strong assumptions and an ϵ-close average return under weaker assumptions. Our analysis also shows robustness to classification errors. Equipped with intrinsic fear, our DQNs solve the toy environments and improve on the Atari games Seaquest, Asteroids, and Freeway.
rejected-papers
This paper presents an interesting idea that is related to imitation learning, safe exploration, and intrinsic motivation. However, in its current state the paper needs improvement in clarity. There are also some concerns about the number of hyperparameters involved. Finally, the experimental results are not completely convincing and should reflect existing baselines in one of the areas described above.
test
[ "ByVTKgqEz", "SkYNcg5xz", "SyHc3kp1f", "S117txRef", "H11eRromM", "Sy1zTBomM", "rk6W9Si7z", "B10tYHsQG", "BkEm_rimM", "B1mRwQzQf", "BJZkdQzGG", "ry8OQ7zzz", "BybJ2MqxG", "S16Q2IXlM", "rJckvqkgz", "rk3V5Hjkf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "public", "public", "public", "author", "public", "author", "public" ]
[ "I've slightly increased my score to reflect the improvements made by the authors. Theorem 1 seems to have been corrected. Unfortunately, the bound now indicates that the average reward is within lambda * epsilon * (R_max - R_min) of the optimal average reward (where lambda can be arbitrarily large). This does not provide much in the way of guarantees. \n\nMy final feeling about the paper is that it introduces a mostly heuristic method, which can provide some empirical benefit when properly tuned. It wasn't clear to me, however, that this fear model offers a generic method or that is capable of achieving the goals the authors mention.", "The paper addresses the problem of learners forgetting rare states and revisiting catastrophic danger states. The authors propose to train a predictive ‘fear model’ that penalizes states that lead to catastrophes. The proposed technique is validated both empirically and theoretically. \n\nExperiments show a clear advantage during learning when compared with a vanilla DQN. Nonetheless, there are some criticisms than can be made of both the method and the evaluations:\n\nThe fear radius threshold k_r seems to add yet another hyperparameter that needs tuning. Judging from the description of the experiments this parameter is important to the performance of the method and needs to be set experimentally. There seems to be no way of a priori determine a good distance as there is no way to know in advance when a catastrophe becomes unavoidable. No empirical results on the effect of the parameter are given.\n\nThe experimental results support the claim that this technique helps to avoid catastrophic states during initial learning.The paper however, also claims to address the longer term problem of revisiting these states once the learner forgets about them, since they are no longer part of the data generated by (close to) optimal policies. This problem does not seem to be really solved by this method. Danger and safe state replay memories are kept, but are only used to train the catastrophe classifier. While the catastrophe classifier can be seen as an additional external memory, it seems that the learner will still drift away from the optimal policy and then need to be reminded by the classifier through penalties. As such the method wouldn’t prevent catastrophic forgetting, it would just prevent the worst consequences by penalizing the agent before it reaches a danger state. It would therefore be interesting to see some long running experiments and analyse how often catastrophic states (or those close to them) are visited. \n\nOverall, the current evaluations focus on performance and give little insight into the behaviour of the method. The paper also does not compare to any other techniques that attempt to deal with catastrophic forgetting and/or the changing state distribution ([1,2]).\n\nIn general the explanations in the paper often often use confusing and imprecise language, even in formal derivations, e.g. ‘if the fear model reaches arbitrarily high accuracy’ or ‘if the probability is negligible’.\n\nIt is wasn’t clear to me that the properties described in Theorem 1 actually hold. The motivation in the appendix is very informal and no clear derivation is provided. The authors seem to indicate that a minimal return can be guaranteed because the optimal policy spends a maximum of epsilon amount of time in the catastrophic states and the alternative policy simply avoids these states. 
However, as the alternative policy is learnt on a different reward, it can have a very different state distribution, even for the non-catastrophics states. It might attach all its weight to a very poor reward state in an effort to avoid the catastrophe penalty. It is therefore not clear to me that any claims can be made about its performance without additional assumptions.\n\nIt seems that one could construct a counterexample using a 3-state chain problem (no_reward,danger, goal) where the only way to get to the single goal state is to incur a small risk of visiting the danger state. Any optimal policy would therefore need to spend some time e in the danger state, on average. A policy that learns to avoid the danger state would then also be unable to reach the goal state and receive rewards. E.g pi* has stationary distribution (0,e,1-e) and return 0*0+e*Rmin + (1-e)*Rmax. By adding a sufficiently high penalty, policy pi~ can learn to avoid the catastrophic state with distribution (1,0,0) and then gets return 1*0+ 0*Rmin+0*Rmax= 0 < n*_M - e (Rmax - Rmin) = e*Rmin + (1-e)*Rmax - e (Rmax - Rmin). This seems to contradict the theorem. It wasn’t clear what assumptions the authors make to exclude situations like this.\n\n[1] T. de Bruin, J. Kober, K. Tuyls and R. Babuška, \"Improved deep reinforcement learning for robotics through distribution-based experience retention,\" 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Daejeon, 2016, pp. 3947-3952.\n[2] Kirkpatrick, J., Pascanu, R., Rabinowitz, N., Veness, J., Desjardins, G., Rusu, A. A., ... & Hassabis, D. (2017). Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences, 201611835.", "\nSUMMARY\n\nThe paper proposes an RL algorithm that combines the DQN algorithm with a fear model. The fear model is trained in parallel to predict catastrophic states. Its output is used to penalize the Q learning target.\n\n\n\nCOMMENTS\n\nNot convinced about the fact that an agent forgets about catastrophic states. Because it does not experience it any more. Shouldn’t the agent stop learning at some point in time? Why does it need to keep collecting good data? How about giving more weight to catastrophic data (e.g., replicating it)\n\nIs the catastrophic scenario specific to DRL or RL in general with function approximation?\n\nWhy not specify catastrophic states with a large negative reward?\n\nIt seems that catastrophe states need to be experienced at least once.\nIs that acceptable for the autonomous car hitting a pedestrian?\n", "The paper studies catastrophic forgetting, which is an important aspect of deep reinforcement learning (RL). The problem formulation is connected to safe RL, but the emphasis is on tasks where a DQN is able to learn to avoid catastrophic events as long as it avoids forgetting. The proposed method is novel, but perhaps the most interesting aspect of this paper is that they demonstrate that “DQNs are susceptible to periodically repeating mistakes”. I believe this observation, though not entirely novel, will inspire many researchers to study catastrophic forgetting and propose improved strategies for handling these issues.\n\nThe paper is accurate, very well written (apart from a small number of grammatical mistakes) and contains appealing motivations to its key contributions. In particular, I find the basic of idea of introducing a component that represents fear natural, promising and novel. 
\n\nStill, many of the design choices appear quite arbitrary and can most likely be improved upon. In fact, it is not difficult to design examples for which the proposed algorithm would be far from optimal. Instead I view the proposed techniques mostly as useful inspiration for future papers to build on. As a source of inspiration, I believe that this paper will be of considerable importance and I think many people in our community will read it with great interest. The theoretical results regarding the properties of the proposed algorithm are also relevant, and points out some of its benefits, though I do not view the results as particularly strong. \n\nTo conclude, the submitted manuscript contains novel observations and results and is likely to draw additional attention to an important aspect of deep reinforcement learning. A potential weakness with the paper is that the proposed strategies appear to be simple to improve upon and that they have not convinced me that they would yield good performance on a wider set of problems. \n", "We are grateful to Reviewer3 for taking the time to review our paper but disagree with several of the assertions. \n\n1. The reviewer states “Not convinced about the fact that an agent forgets about catastrophic states”. The susceptibility of neural networks to catastrophic forgetting (not to be confused with our safety-motivated notion of a catastrophe) is well-documented in the literature. Whenever the policy is modified such that some states would never be encountered, they will eventually, as soon as they are flushed from the replay buffer, cease to influence the Q-network. If we continue to update the network as is necessary, especially in non-stationary environments (e.g. nearly all real-world settings) then nothing in the standard DQN formulation guards the agent from revisiting the catastrophic states.\n\nIn addition to being well-documented in the literature, we demonstrate this problem clearly in our paper using the simplest failure case. Even in AdventureSeeker, a 1-D environment with only two actions, the agent will eventually forget about the catastrophic states. \n\n2. Re: “Shouldn’t the agent stop learning at some point in time?”\n\na. First, even when there is limited duration learning period, we may want an agent to make a minimal number of catastrophic errors while learning.\n\nb. Second, as stated above: in nonstationary environments, which describes most real-world environments, we want to learn continually. Otherwise the policy will become stale and cease to perform owing to the shifting dynamics. In the case of driving, imagine new cars which appear on the road, or new street signs. In the case of a vacuum-cleaner, imagine that it is confronted with new household appliances that didn’t exist previously. \n\n3. “Re: “Why not specify catastrophic states with a large negative reward?”\n\nUnfortunately, even large negative rewards are eventually forgotten, leading the agent to revisit the catastrophic states. Take AdventureSeeker as an example: no matter how negative the penalty is, the same catastrophic forgetting will eventually happen. Moreover, this approach, unlike ours, has no notion of “danger zone” and therefore does not benefit from reward shaping. In our approach, the agent avoids even getting *close* to a catastrophe. When this assumption is reasonable, this leads to significantly faster exploration.\n\n4. Re: “It seems that catastrophe states need to be experienced at least once. 
Is that acceptable for the autonomous car hitting a pedestrian?”\n\nThis is a good question that has implications for all of RL: If catastrophes can truly never be experienced even once, then is reinforcement learning off the table altogether?\n\nHowever, in many settings, perhaps even car accidents, if enough cars are on the road and the probability of an accident is nonzero, then accidents will happen. Our work addresses how to learn from these mistakes rapidly and to guard against repeating the same mistakes in the future. \n", "Thanks for the thoughtful review of our paper. \n\n1. We are glad that you noticed the issue in the proof of theorem 1. Per your feedback, we have corrected the proof and substantially revised the paper (see current revision). At a high level, the performance degradation, as described corrected theorem and proof are as follows:\n\nIf the optimal policy, \\pi^*, of the original environment, without intrinsic fear, (M), visits the fear zone with probability at most \\epsilon, then applying pi^* on the environment with intrinsic fear (M,F), gives the return of eta^*-\\epsilon\\lambda(Rmax-Rmin) therefore, the optimal policy on (M,F), \\tilde{\\pi}, can not give a return less than \\eta^*\\epsilon-\\lambda(Rmax-Rmin) on environment (M,F). If \\tilde{\\pi} visits the fear zone with probability \\epsilon’, we can rewrite its return as:\n(return from non intrinsic fears)-epsilon’(\\lambda(Rmax-Rmin)\nTherefore applying \\tilde{\\pi} on original environment (M) gives a return of at least \\eta^*-\\epsilon\\lambda(Rmax-Rmin) +\\epsilon’\\lambda(Rmax-Rmin) which is lower bounded by \\eta^*-\\epsilon\\lambda(Rmax-Rmin).\n\n2. Regarding the parameter k_r, as the reviewer mentioned, without any prior knowledge and posed safety constraint of the environment, this parameter needs to be chosen empirically, as with other hyper-parameters. We note however, that this is a kind of prior knowledge that might be reasonably to expect of an algorithm designer. For example, a robot should perhaps never be too close to a cliff or a ledge. \n\nIntuitively, small k_r’s better preserve the original policy, but for too small a k_r, the fear model might be ignored. On the other hand, large k_r are better at preventing the agent from visiting the catastrophic states but run more risk of deviating substantially from the optimal policy. Prior knowledge of the environment can guide us to design a proper k_r, otherwise, k_r needs to be chosen experimentally.\n\n3. Regarding the (very) long term forgetting, the reviewer is correct that this paper doesn’t completely alleviate catastrophic forgetting an that we instead guard against the worst consequences. We have created a video to visualize the fear probability as a red overlay on the video game play and will continue to work on other ways to qualitatively understand how our algorithm is working. \n\n4. We thank the reviewer for suggesting baselines to compare to. They have some relevance but are designed for different purposes. In particular,\n[1] (IROS) uses a second experience replay buffer to store state transitions that covers the whole state space uniformly, in addition to a typical buffer used in standard DQN. This approach aims mostly to reduce exploration, but can face the curse of dimensionality as it tries to cover the state space uniformly. 
Moreover, the uniform covering idea is not efficient for avoiding catastrophic events that are rare, while our approach uses a fear classifier to target danger zones directly.\n[2] (PNAS) takes a Bayesian approach to continual learning, trying to avoid catastrophic forgetting of solutions to earlier tasks that have not occured for a long time. In contrast, our problem is to avoid running into catastrophic states in the same task. It is not clear how a similar, Bayesian variant of DQN (such as BBQ) can be extended to address our safe exploration challenge.\n", "We thank AnonReviewer1 for a clear and constructive review. We are encouraged that you recognize the importance of the problem addressed and the novelty of the methods. Per your suggestions, we have polished the paper, fixing several of the typos that had made it into the first draft. The reviewer’s point that many aspects of the algorithm can likely be improved upon in future work is well-taken. We hope that this is just one of the first among many papers to improve with respect to these fundamental problems. ", "We would like to thank the reviewers for taking the time to leave thoughtful reviews. Given this feedback, we have significantly improved the draft and hope the reviewers and area chair will take this into account when assessing the final scores. For example, the harsh score from reviewer 2 owes largely to a mistake in one theorem that has since been fixed in the newest version. We are also grateful to the folks at the reproducibility project who noted the commendable clarity and reproducibility of our paper, algorithm and empirical findings. Please find specific rebuttals to each reviewer as replies to the respective reviews. ", "Note that there are many out-of-the-box DQNs available. They do not all achieve the same performance on every game. DRL is unfortunately still rather brittle to small implementation changes. For example, if you alter (SGD vs Momentum vs ADAM), size of initial replay buffer before reducing epsilon to < 1, etc. you will notice that often each agent will do better for some games (sometimes strikingly) and worse on others. We cannot vouch for the performance of every configuration of DQN you might access, only for the specific implementation that we used. One small detail that could potentially explain some differences is that we used a smaller initial replay buffer size than in the original DQN paper. Perhaps this additional early exploration was crucial for DQN but not for the Intrinsic Fear model.", "After reading the paper I also had some questions about the DQN baseline used in the paper. After running some simple experiments with DQN on Freeway, it seemed to me that the results reported by the paper for DQN in this paper were underpowered, as an out-of-the-box DQN got superior results. I came on here to comment this but just saw the above post so I'll just leave this here as a reply (thanks to the commenter above for their thorough experiments). ", "Being part of the ICLR 2018 Reproducibility Challenge, we worked to reproduce the results presented in this paper (Avoiding Catastrophic States with Intrinsic Fear) currently under a double-blind review process at the time of writing of this comment. We enjoyed reading the paper and replicating their results.\n\n\tThis paper proposed a model for intrinsic fear, a reward shaping model that improves the functioning of a deep-Q network by minimizing the number of catastrophes experienced during training. 
The authors tested their model on the popular reinforcement learning game Cartpole, their own game Adventure Seeker, and the three Atari games Seaquest, Asteroids, and Freeway.\n\n\tIn this paper, they only showed detailed plots for reward and experienced catastrophes for the Atari games. As such, we set out to replicate their results for those games. The code was available online in a Github repository, as directed by the authors in a comment to another member of the public. We had some trouble running the code at first as there were no guidelines for the usage of the code such as package versions and necessary dependencies, but we were able to adapt it for our environment. Our slightly adapted version of their code, and a README file that details how we ran their code, can be found along with our full report linked at the end of this comment.\n\nIn our runs of their code, we found that on Freeway, we were able to reproduce better performance of the DQN with intrinsic fear (DQN-IF) over that of a normal DQN without intrinsic fear, in terms of the reward gained. Our plot did not look exactly like the one in the paper, which may be due to the authors’ averaging their results over multiple runs of the same experiment. It was not mentioned in the original report but pointed out by the authors in the OpenReview discussion that there is averaging of the total rewards per episode over many learning runs for the Atari games. It may also be due to differences in hyperparameter settings that were not made explicit in the paper. However, our result for the catastrophe rate for Freeway did not match the result in the paper. In fact, we found the catastrophe rate looked identical with and without intrinsic fear, which may also be due to different hyperparameter usage. Another interesting observation from our runs on Freeway was that identical hyperparameter settings led to different results on our two different systems, suggesting that library versions or hardware may play a role in the performance of the algorithm.\n\nOn Asteroids, we found that at approximately 4000 episodes we were not able to prove the dominance of DQN-IF over DQN in terms of obtained reward. After one run of each model, the variation in reward for training was too high to observe the clear trend that was presented in the paper after averaging, though qualitatively the DQN-IF experiences marginally higher reward. The total reward however was much lower than that found in the paper for both models. Nevertheless, our results showed that the DQN model visited more catastrophe states than the DQN-IF within the same timeframe, which reflects the data presented in the original paper. \n\nOur results for Seaquest were insufficient to draw any conclusions due to our limited computational resources, but the results for the obtained reward over 3000 episodes looked similar to trends observed in the paper. Interestingly, the decrease in catastrophe rate for the DQN-IF compared to the DQN seemed to show a promising trend.\n\n\tTo summarize, we assume that the differences between our results and those presented in the paper are due in part to different hyperparameter settings. Another potential reason is the fact that we were not able to do enough runs to average results. 
Unfortunately, we do not know the exact averaging conditions, so our graphs are not as smooth as the ones presented in the paper.\n\n\tSome features that would have been useful to us for our replication are (1) clearer definition of hyperparameters for all games, including the missing fear factor for CartPole (2) figures of results for Cartpole and Adventure Seeker (3) online availability of Adventure Seeker (4) some discussion of computational resources required (5) a usage guide for the code, as well as improved saving and restart functionality (6) explanation for hyperparameter selection and optimization to aid efforts to implement this model in other learning environments.\n\nOur full report and code can be found here: https://drive.google.com/drive/folders/1DSsQq4YRiwA-KpEancIo-GAvsPYQ-u4K", "The goal of this summarized review is to investigate the reproducibility of the results found in the above report on the use of instrinsic fear in the context of reinforcement learning. In doing so, the assessment aims to contribute to the machine learning community by allowing others who wish to make use of the latter research to do so with confidence and ease. A link to the full report can be found at the end of this thread.\n\nThe authors of \"Avoiding Catastrophic States with Intrinsic Fear\" claim that incorporating the distancing from dangerous scenarios into the reward system of the DQN model will shorten training time and reduce the number of catastrophes that the learning agent will put itself in. The latter is demonstrated using toy environments Adventure-Seeker and CartPole, as well as in Atari games Seaquest, Asteroids and Freeway. The two models attempt to learn in these environments and their results are compared empirically.\n\nTo evaluate the reproducibility of these findings, we chose two of the five environments to train the learning agents. The reason all five were not selected were because of the time constraint of this assignment as well as the limited hardware that our team had in possession. To train the agents, we used a combination of open-source code and self implemented models. The empirical results that were found showed that the intrinsic fear model did in fact outperform the standard DQN model as the authors of the original report suggested.\n\nIn order to evaluate the ease of reproducibility of this research paper, we evaluated the latter under four metrics:\n\n1) Availability of code, names and version numbers of dependencies\n\nThe code from the authors' was not explicitly given to us, but they anonymously hinted that their code was open sourced on Github. After using this code to reproduce the Asteroids scenario, we were able to refer to the code to implement an intrinsic fear for the CartPole environment. Unfortunately, the version numbers of the dependencies were not explicitly given but through research we managed to figure out the preliminary steps required to run the authors' code.\n\n2) Clarity of code and paper\n\nHyper-parameters in the code were specified in a single section with appropriate descriptions. 
Additionally, the concepts specific to the report were well explained with approachable examples.\n\n3) Details of computing infrastructure used and computation requirements\n\nNo information was given relating the computing infrastructure, so we were unable to know if our hardware was sufficient to fully train the agents without running the learners first.\n\n4) Reimplementation effort\n\nConsidering the limited time given for this assignment, the fact that we were able to successfully reproduce the findings in the original paper is an strong indication that the latter is reproducible.\n\nAfter considering said criteria, it became clear that \"Avoiding Catastrophic States with Intrinsic Fear\" was a reproducible report and we must applaud the authors in their ability to convey such a complex topic in an approachable and duplicable manner.\n\nThe full report on reproducibility can be found here:\nhttps://drive.google.com/open?id=1QSEIgg2f22Cd06mA-23IthE-9M9o7Kpe", "Happy to share details - pardon the delay due to holiday travel. Need to get back home to look up exact details on hyperparameter settings as the toy environment experiments were done a while ago.\n\nYes the hyper-parameters can make a big difference on many of these problems. Optimizer, number of exploration turns, etc. There's also a large amount of variance across runs. Especially on the toy environments. That's why we run every experiment multiple times and report averages. \n\nThanks for the questions and for holding tight, more details on toy environments coming soon!", "Hi,\n\nThis is a nice paper and we like the ideas in it! I tried to implement the algorithm DQN-Fear by modifying the baseline DQN, and reproduce the simulation results in your paper. The thing is, based on our trials with different parameters so far, we have some difficulty to reproduce part of the result. For example, (1) for the CartPole test, the DQN runs more than 10000 episodes within 4e6 time steps, while in your paper, there are only 4000 episodes; (2) (this is more weird) for Freeway, our DQN achieves better performance than the plot of DQN-Fear in your paper within just 300 episodes. I guess this may be largely due to the hyper parameters.\n\nWe really appreciate your code on GitHub, and we can see the parameters for Atari games. But the hyperparameters for Adventure seeker and Cartpole are still unclear. So, I am wondering if it's possible you share the hyper-parameters for DQN and DQN-Fear on all three experiments? The hyper-paramters could include but not limit to the following:\n(1) AdamLearning rates for the two neural nets of DQN and fear model\n(2) Buffer sizes for all 3 buffers\n(3) How exploration rate is scheduled\n(4) Train frequency\n(5) Batch size\n(6) When do the the learning start for the two neural nets of DQN and fear model\n(7) discount factor gamma\n(8) Target network update frequency\n(9) fear factor\n(10) fear phase-in length\n(11) fear radius\n\nThank you!", "Hi Nick,\n\nThanks for your interest in our paper! The code is actually open sourced now, and already one group of researchers has re-implemented our algorithm from scratch and confirmed outperformance of DQN. To preserve double blind status, we won't post the GitHub link here but it's not too hard to find.\n\nCheers,\n\nAuthors", "I was wondering if you have the code open sourced so that we can more easily reproduce the results provided in the paper." ]
[ -1, 5, 5, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, 4, 5, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "SkYNcg5xz", "iclr_2018_B16yEqkCZ", "iclr_2018_B16yEqkCZ", "iclr_2018_B16yEqkCZ", "SyHc3kp1f", "SkYNcg5xz", "S117txRef", "iclr_2018_B16yEqkCZ", "B1mRwQzQf", "BJZkdQzGG", "iclr_2018_B16yEqkCZ", "iclr_2018_B16yEqkCZ", "S16Q2IXlM", "iclr_2018_B16yEqkCZ", "rk3V5Hjkf", "iclr_2018_B16yEqkCZ" ]
iclr_2018_BJ7d0fW0b
Faster Reinforcement Learning with Expert State Sequences
Imitation learning relies on expert demonstrations. Existing approaches often require that the complete demonstration data, including sequences of actions and states are available. In this paper, we consider a realistic and more difficult scenario where a reinforcement learning agent only has access to the state sequences of an expert, while the expert actions are not available. Inferring the unseen expert actions in a stochastic environment is challenging and usually infeasible when combined with a large state space. We propose a novel policy learning method which only utilizes the expert state sequences without inferring the unseen actions. Specifically, our agent first learns to extract useful sub-goal information from the state sequences of the expert and then utilizes the extracted sub-goal information to factorize the action value estimate over state-action pairs and sub-goals. The extracted sub-goals are also used to synthesize guidance rewards in the policy learning. We evaluate our agent on five Doom tasks. Our empirical results show that the proposed method significantly outperforms the conventional DQN method.
rejected-papers
This paper proposes a simple idea for using expert data to improve a deep RL agent's performance. Its main flaw is the lack of justification for the specific techniques used. The empirical evaluation is also fairly limited.
train
[ "r1ke1YDlz", "H1lsqQdeG", "Hk-OLVKeM", "S1LqdbcMG", "BygudWqfz", "SJOQu-5Mf", "H1o2J2YfG", "r1J7Oaw1z", "HJv1OsDyM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "public", "author", "public" ]
[ "SIGNIFICANCE AND ORIGINALITY:\n\nThe authors propose to accelerate the learning of complex tasks by exploiting traces of experts.\nUnlike the most common form of imitation learning or behavioral cloning, the authors \nformulate their solution in the case where the expert’s state trajectory is observable, \nbut the expert’s actions are not. This is an important and useful problem in robotics and other\napplications. Within this specific setting the authors differentiate their approach from others \nby developing a solution that does NOT estimate an explicit dynamics model ( e.g., P( S’ | S, A ) ).\nThe benefits of not estimating an explicit action model are not really demonstrated in a clear way.\n\nThe author’s articulate a specific solution that provides heuristic guidance rewards that cause the \nlearner to favor actions that achieve subgoals calculated from expert behavior\nand refactors the representation of the Q function so that it \nhas a component that is a function of the subgoal extracted from the expert.\nThese subgoals are linear functions of the expert’s change in state (or change in state features).\nThe resultant policy is a function of the expert traces on which it depends.\nThe authors show they can retrain a new policy that does not require the expert traces.\nAs far as I am aware, this is a novel approach to the problem. \nThe authors claim that this factorization is important and useful but the paper doesn’t\nreally illustrate this well.\n\nThey demonstrate the usefulness of the algorithm against a DQN baseline on Doom game problems.\nThe algorithm learns faster than unassisted DQN as shown by learning curve plots. \nThey also evaluate the algorithms on the quality of the final policies for their approach, DQN, \nand a supervised learning from demonstration approach ( LfD ) that requires expert actions.\nThe proposed approach does as well or better than competing approaches.\n\n\nQUALITY\n\nAblation studies show that the guidance rewards are important to achieving the improved performance of the proposed method which is important confirmation that the architecture is working in the intended way. However, it would also be useful to do an ablation study of the “factorization” of action values. Is this important to achieving better results as well or is the guidance reward enough? This seems like a key claim to establish.\n\n\nCLARITY\n\nThe details of the memory based kernel density estimation and neural gradient training seemed\ncomplicated by the way that the process was implemented. 
Is it possible to communicate\nthe intuitions behind what is going on?\n \nI was able to work out the intuitions behind the heuristic rewards, but I still don’t clearly get \nwhat the Q-value factorization is providing:\n\nTo keep my text readable, I assume we are working in feature space\ninstead of state space and use different letters for learner and expert:\n\n Learner: S = \\phi(s) \n Expert’s i^th state visit: Ei = \\phi( \\hat{s}_i } where Ei’ is the successor state to Ei\n\nThe paper builds upon approximate n-step discrete-action Q-learning \nwhere the Q value for an action is a linear function of the state features:\n\n Qp(S,a) = Wa S + Ba\n\nwhere parameters p = ( Wa, Ba ).\n\nAfter observing an experience ( S,A,R,S’ ) we use Bellman Error as a loss function to optimize Qp for parameter p.\nI ignore the complexities of n-step learning and discount factors for clarity.\n\n Loss = E[ R + MAXa’ Qp(S’,a’) - Qp(S,a) ] \n\nThe authors suggest we can augment the environment reward R \nwith a heuristic reward Rh proportional to the similarity between \nthe learner “subgoal\" and the expert “subgoal\" in similar states. \n\nThe authors propose to use cosine distance between representations \nof what they call the “subgoals” of learner and expert. \nA subgoal is defined as a linear transformation of the distance traveled by an agent during a transition.\nThe heuristic reward is proportional to the cosine distance between the learner and expert “subgoals\"\n\n Rh = B < Wv LearnerDirectionInStateS, \n Wv ExpectedExpertDirectionInStatesSimilarToS >\n\nThe learner’s direction in state S is just (S-S’) in feature space.\n\nThe authors model the behavior of the expert as a kernel density type approximator\ngiving the expected direction of the expert starting from a states similar to the one the learner is in. \nLet < Wk S, Wk Ej > be a weighted similarity between learner state features S and expert state features Ej\nand Ej’ be the successor state features encountered by the expert.\nThen the expected expert direction for learner state S is:\n\n SUMj < Wk S, Wk Ej > ( Ej - Ej’ ) \n\nPresumably the linear Wk transform helps us pick out the important dimensions of similarity between S and Ej.\n\nMapping the learner and expert directions into subgoal space using Wv, the heuristic reward is\n\n Rh = B < Wv (S-S’), \n Wv SUMj < Wk S, Wk Ej > ( Ej - Ej’ ) >\n\nI ignore the ReLU here, but I assume that is operates element-wise and just clips negative values?\nThere is only one layer here so we don’t have complex non-linear things going on?\n\nIn addition to introducing a heuristic reward term, the authors propose to alter the Q-function\nto be specific to the subgoal.\n\n Q( s,a,g ) = g(S) Wa S + Ba\n\nThe subgoal is the same as the first part, namely a linear transform of the expected expert direction in \nstates similar to state S.\n\n g(S) = Wv SUMj < Wk S, Wk Ej > ( Ej - Ej’ ) \n\nSo in some sense, the Q function is really just a function of S, as g is calculated from S.\n\n Q( S,a ) = g(S) Wa S + Ba \n\nSo this allows the Q-function more flexibility to capture each subgoal in a different linear space?\nI don’t really get the intuition behind this formulation. It allows the subgoal to adjust the value \nof the underlying model? Essentially the expert defines a new Q-value problem at every state \nfor the learner? 
In some sense are we are defining a model for the action taken by the expert?\n\n\nADDITIONAL THOUGHTS\n\nWhile the authors compare to an unassisted baseline, they don’t compare to methods that use an action model\nwhich is not a fatal flaw but would have been nice. \n\nOne can imagine there might be scenarios where the local guidance rewards of this \nform could be problematic, particularly in scenarios where the expert and learner are not identical\nand it is possible to return to previous states, such as the grid worlds the authors discuss:\nIf the expert’s first few transitions were easily approximable,\nthe learner would get local rewards that cause it to mimic expert behavior.\nHowever, if the next step in the expert’s path was difficult to approximate, \nthen the reward for imitating the expert would be lower.\nWould the learner then just prefer to go back towards those states that it can approximate and endlessly loop?\nIn this case, perhaps expressing heuristic rewards as potentials as described in Ng’s shaping paper might solve the problem.\n\n\nPROS AND CONS\n\nImportant problem generally. Avoiding the estimation of a dynamics model was stated as a given, but perhaps more could be put into motivating this goal. Hopefully it is possible to streamline the methodology section to communicate the intuitions more easily.\n", "The paper presents a method that leverages demonstrations from experts provided in the shape of sequences of states (actually, state transitions are enough, they don't need to come in sequences) to faster learn reinforcement learning tasks. The authors propose to learn subgoals (actually local rewards) to encourage the agent to go towards the same direction as the expert when encountering similar states. The main claimed advantage is that it doesn't require the knowledge of the actions taken by the expert, only observations of states. \n\nTo me, there is a major flaw in the approach. Ho and Ermon 2016 extensively study the fact that imitation is not possible in stochastic environment without the knowledge of the actions. As the author say, learning the actions from state transitions in a standard stochastic MDP would require to learn the model. Yet, the authors demonstrate their approach in environments where the controlable dynamics is mainly deterministic (if one decides to turn right, the agents indeed turns right). So by subtracting features from successive states, the method mainly encodes the action as it almost encodes the one step dynamics in one shot. \n\nAlso the main assumption is that there is an easy way to compute similarity between states. This assumption is not met in the HealthGathering environment as several different states may generate very similar vision features. This causes the method not to work. This brings us back to the fact that features encoding the actual dynamics, potentially on many consecutive states (e.g. feature expectations used in IRL or occupancy probability used in Ho and Ermon 2016), are mandatory. \n\nThe method is also very close to the simplest IRL method possible which consists in placing positive rewards on every state the expert visited. So I would have liked a comparison to that simple method (using similar regression technique to generalize over states with similar features). \n\nFinally, I also think that using expert data generated by a pre-trained network makes the experimental section very weak. 
Indeed, it is unlikely that this kind of data can be obtained and training on this type of data is just a kind of distillation of the optimal network making the weights of the network close to the right optimum. With real data, acquired from humans, the training is likely to end up in a very different minimum. \n\nConcerning the related work, the authors didn't mention the Universal Value Function Approximation (Schaul et al, @ICML 2015) which precisely extends V and Q functions to generalize over goals. This very much relates to the method used to generalize over subgoals in the paper. Also, the state of the art in IRL and learning from demonstration is lacking a lot of references. For instance, learning via RL + demonstrations was already studied in papers by Farahmand et al (APID, @NIPS 2013), Piot et al (RLED, @ ECML 2014) or Chemali & Lazaric (DPID, @IJCAI 2015) before Hester et al (DQfD @AAAI 2018). Some work is cited in the wrong context. For instance, Borsa et al 2017 doesn't do inverse RL (as said in the related work section) but learns to perform a task only from the extrinsic reward provided by the environment (as said in the introduction). BTW, I would suggest to refer to published papers if they exist instead of their Arxiv version (e.g. Hester et al, DQfD). ", "The authors propose to speed up RL techniques, such as DQN, by utilizing expert demonstrations. The expert demonstrations are sequences of consecutive states that do not include actions, which is closer to a real setting of imitation learning. The goal of this process is to extract a function that maps any given state to a subgoal. Subgoals are then used to learn different Q-value functions, one per subgoal. \nTo learn the function that maps states into subgoals, the authors propose a surrogate reward model that corresponds to the angle between: the difference between two consecutive states (which captures velocity or direction) and a given subgoal. A von Mises-Fisher distribution policy is then assumed to be used by the expert to generate actions that guide the agent toward the subgoal. Finally, the mapping function state->subgoal is learned by performing a gradient descent on the expected total cost (based on the surrogate reward function, which also has free parameters that need to be learned).\nFinally, the authors use the DQN platform to learn a Q-value function using the learned surrogate reward function that guides the agent to specific subgoals, depending on the situation.\nThe paper is overall well-written, and the proposed idea seems interesting. However, there are rather few explanations provided to argue for the different modeling choices made, and the intuition behind them. From my understanding, the idea of subgoal learning boils down to a non-parametric (or kernel) regression where each state is mapped to a subgoal based on its closeness to different states in the expert's demonstration. It is not clear how this method would generalize to new situations. There is also the issue of keeping track of a large number of demonstration states in memory. This technique reminds me of some common methods in learning from demonstrations, such as those using GPs or GMMs, but the novelty of this technique is the fact that the subgoal mapping function is learned in an IRL fashion, by taking into account the sum of surrogate rewards in the expert's demonstration. 
\nThe architecture of the action value estimator does not seem novel, it's basically just an extension of DQN with an extra parameter (subgoal g).\nThe empirical evaluation seems rather mixed. Figure 3 shows that the proposed method learns faster than DQN, but Table I shows that the improvement is not statistically significant, except in two games, DefendCenter and PredictPosition. Are these the results after all agents had converged? \nOverall, this is a good paper, but focusing on only a single game (Doom) is a weakness that needs to be addressed because one cannot tell if the choices were tailored to make the method work well for this game. Since the paper does not provide significant theoretical or algorithmic contribution, at least more realistic and diverse experiments should be performed. ", "Thank you for your review. Table 1 shows the final performance of the learned models. Our claim of speeding up learning is about the learning curves in Figure 3.", "Thank you for your review. The major point you raised was about the technical soundness of our approach in the context of Ho and Ermon 2016. Our problem setting is still in reinforcement learning but not imitation learning. Our results are not trying to support imitation learning is possible given only states. \n\nRegarding the potential issue of using pre-trained network in collecting demonstration data, the network used in collecting the demonstration is the standard DQN and it is different from the architecture of our proposed approach. \nWe appreciate the proposed the simple IRL baseline and the pointed papers. We will include them in a revision. \n\nWe appreciate the proposed simple IRL baseline and the pointed papers. We will include them in a revision. ", "Thank you for your review. The responses to the questions asked are itemized below: \n\n-- However, it would also be useful to do an ablation study of the “factorization” of action values. Is this important to achieving better results as well or is the guidance reward enough?\n\nWe have the ablation results in Figure 3. The ``ablation (no guidance reward)’’ is the agent where only factorization is used and the guidance reward is not used. \n\n-- There is only one layer here so we don’t have complex non-linear things going on?\n\nWhen we compute the Q values by multiplying the image features (\\phi(s)), W^{a} and demonstration features (g), nonlinearity is not used.\n\n-- This allows the Q-function more flexibility to capture each subgoal in a different linear space? I don’t really get the intuition behind this formulation.\n\nBy utilizing the sub-goals from the demonstration, we are able to propagate learning experience through successive states in the demonstration trajectories as well as the agents’ experience trajectories. \n\n-- Would the learner then just prefer to go back towards those states that it can approximate and endlessly loop?\n\nThe external rewards are also provided to the agent and the endless looping behavior is not optimal with respect to the external rewards. \n\n", "Reproducibility of work in machine learning is critical to assuring the veracity of research results. \nThis is a challenge that has become increasingly important due to the expanding space of model architectures and the evaluation environments used for comparing these models. \n\nWe attempted to reproduce experiments in \"Faster Reinforcement Learning with Expert State Sequences\" (FRLWESS), comparing the learning speed of a DQN baseline against the author's proposed algorithm. 
The claim of this paper is that using the proposed algorithm for learning from expert state sequences we can learn faster than an agent without expert knowledge and that we can achieve the same performance as state-of-the-art imitation learning models without using expert action information.\n\nHowever, we were only able to partially reproduce results discussed in the paper. The challenges we faced included difficulty in finding the correct values for certain parameters used in the original paper, such as the size of the experience replay buffer, the constant term $\\beta$ for weighting guidance rewards, and the update rules for the expert states dictionary used in the sub-goal extractor.\n\nWhile we were not able to replicate the results successfully. We would like to share our approach and the source code we used to replicate the experiments, which can be found here: https://www.overleaf.com/read/gntbmmpqwykr.\nOur approach to reproducing this paper involved three experiments.\n(1.) The first was implementing the DQN agent which was used as a baseline for comparison against FRLWESS.\n(2.) The second was implementing and testing the FRLWESS algorithm itself to see if we could replicate the results presented by its authors. As a requirement for running this experiment, we had to collect state sequences from our expert pre-trained DQN model for the imitation learning agent to learn from.\n(3.) Finally, we attempted to replicate the ablation study of the guidance rewards for training the action value estimator in FRLWESS.", "Thanks for your interest in our paper. We will release the source codes after the internal approval. ", "Hello, I am working on reproducing your work in “Faster Reinforcement Learning with Expert State Sequences” for the ICLR 2018 Reproducibility Challenge. I was wondering if you were planning to open source the code used to perform your experiments, and if you are when it might be available.\n \nThanks and best regards" ]
[ 6, 5, 6, -1, -1, -1, -1, -1, -1 ]
[ 3, 5, 4, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_BJ7d0fW0b", "iclr_2018_BJ7d0fW0b", "iclr_2018_BJ7d0fW0b", "Hk-OLVKeM", "H1lsqQdeG", "r1ke1YDlz", "iclr_2018_BJ7d0fW0b", "HJv1OsDyM", "iclr_2018_BJ7d0fW0b" ]
iclr_2018_HkpRBFxRb
Learning to Mix n-Step Returns: Generalizing Lambda-Returns for Deep Reinforcement Learning
Reinforcement Learning (RL) can model complex behavior policies for goal-directed sequential decision making tasks. A hallmark of RL algorithms is Temporal Difference (TD) learning: value function for the current state is moved towards a bootstrapped target that is estimated using the next state's value function. lambda-returns define the target of the RL agent as a weighted combination of rewards estimated by using multiple many-step look-aheads. Although mathematically tractable, the use of exponentially decaying weighting of n-step returns based targets in lambda-returns is a rather ad-hoc design choice. Our major contribution is that we propose a generalization of lambda-returns called Confidence-based Autodidactic Returns (CAR), wherein the RL agent learns the weighting of the n-step returns in an end-to-end manner. In contrast to lambda-returns wherein the RL agent is restricted to use an exponentially decaying weighting scheme, CAR allows the agent to learn to decide how much it wants to weigh the n-step returns based targets. Our experiments, in addition to showing the efficacy of CAR, also empirically demonstrate that using sophisticated weighted mixtures of multi-step returns (like CAR and lambda-returns) considerably outperforms the use of n-step returns. We perform our experiments on the Asynchronous Advantage Actor Critic (A3C) algorithm in the Atari 2600 domain.
rejected-papers
This is an interesting paper, but it was quite difficult to follow. As they stand, the empirical results are not altogether convincing, nor do they warrant acceptance.
train
[ "ryLFZHGgG", "BkWlPOFlM", "rkVjnRYef", "S1M2zLpXz", "B1wS3LaXz", "rk_TOUa7z" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "SUMMARY\nThe major contribution of the paper is a generalization of lambda-returns called Confidence-based Autodidactic Returns (CAR), wherein the RL agent learns the weighting of the n-step returns in an end-to-end manner. These CARs are used in the A3C algorithm. The weights are based on the confidence of the value function of the n-step return. Even though this idea in not new the authors propose a simple and robust approach for doing it by using the value function estimation network of A3C.\n\nDuring experiments, the autodidactic returns perform better only half of the time as compared to lambda returns.\n\n\nCOMMENTS\nThe j-step returns TD error is not written correctly\n\nIn Figure 1 it is not obvious how the confidence of the values is estimated.\nFigure 1 is unreadable.\n\n\nA lighter version of Algorithm 1 in Appendix F should be moved in the text, since this is the novelty of the paper.\n", "The authors present confidence-based autodidactic returns, a Deep learning RL method to adjust the weights of an eligibility vector in TD(lambda)-like value estimation to favour more stable estimates of the state. The key to being able to learn these confidence values is to not allow the error of the confidence estimates propagate back though the deep learning architecture.\n\nHowever, the method by which these confidence estimates are refined could be better described. The authors describe these confidences variously as: \"some notion of confidence that the agent has in the value function estimate\" and \"weighing the returns based on a notion of confidence has been explored earlier (White & White, 2016; Thomas et al., 2015)\". But the exact method is difficult to piece together from what is written. I believe that the confidence estimates are considered to be part of the critic and the w vector to be part of the theta_c parameters. This would then be captured by the critic gradient for the CAR method that appears towards the end of page 5. If so, this should be stated explicitly.\n\nThere is another theoretical point that could be clearer. The variation in an autodidactic update of a value function (Equation (4)) depends on a few things, the in variation future value function estimates themselves being just one factor. Another two sources of variation are: the uncertainty over how likely each path is to be taken, and the uncertainty in immediate rewards accumulated as part of some n-step return. In my opinion, the quality of the paper would be much improved by a brief discussion of this, and some reflection on what aspects of these variation contribute to the confidence vectors and what isn't captured.\n\nNonetheless, I believe that the paper represents an interesting and worthy submission to the conference. I would strongly urge the authors to improve the method description in the camera read version though. A few additional comments are as follows:\n\n • The plot in Figure 3 is the leading collection of results to demonstrate the dominance of the authors' adaptive weight approach (CAR) over the A3C (TD(0) estimates) and LRA3C (truncated TD(lambda) estimates) approaches. However, the way the results are presented/plotted, namely the linear plot of the (shifted) relative performance of CAR (and LRA3C) versus A3C, visually inflates the importance of tasks on which CAR (and LRA3C) perform better than A3C, and diminishes the importance of those tasks on which A3C performs better. 
It would be better kept as a relative value and plotted on a log-scale so that positive and negative improvements can be viewed on an equal footing.\n • On page 3, when Gt is first mentioned, Gt should really be described first, before the reader is told what it is often replaced with.\n • On page 3, where delta_t is defined (the j-step return TD error), I think the middle term should be $\\gamma^j V(S_{t+j})$\n • On page 4 and 5, when describing the gradient for the actor and critic, it would be better if these were given their own terminology, but if not, then use of the word respectively in each case would help.", "This paper revisits the idea of exponentially weighted lambda-returns at the heart of TD algorithms. The basic idea is that instead of geometrically weighting the n-step returns we should instead weight them according to the agent's own estimate of its confidence in its learned value function. The paper empirically evaluates this idea on Atari games with deep non-linear state representations, compared to state-of-the-art baselines.\n\nThis paper is below the threshold because there are issues with the: 1) motivation, 2) technical details, and 3) empirical results.\n\nThe paper begins by stating that the exponential weighting of lambda returns is ad-hoc and unjustified. I would say the idea is well justified in several ways. First, the lambda return definition lends itself to online approximations that achieve a fully incremental online form with linear computation and nearly as good performance as the off-line version. Second, decades of empirical results illustrate good performance of TD compared with MC methods. And there is an extensive literature of theoretical results. The paper claims that the exponential has been noted to be ad-hoc; please provide a reference for this.\n\nThere have been several works that have noted that lambda can and perhaps should be changed as a function of state (Sutton and Barto, White and White [1], TD-Gammon). In fact, such works even note that lambda should be related to confidence. The paper should work harder to motivate why adapting lambda as a function of state---which has been studied---is not sufficient.\n\nI don't completely understand the objective. Returns with higher confidence should be weighted higher, according to the confidence estimate around the value function estimate as a function of state? With longer returns, n>>1, the role of the value function in the target is down-weighted by gamma^n---meaning its accuracy is of little relevance to the target. How does your formalism take this into account? The basic idea of the lambda return assumes TD targets are better than MC targets due to variance, which places more weight on shorter returns.\n\nIn addition, I don't understand how learning confidence of the value function has a realizable target. We do not get supervised targets of the confidence of our value estimates. What is your network updating toward?\n\nThe work of Konidaris et al [2] is a more appropriate reference for this work (rather than the Thomas reference provided). Your paper does not very clearly differentiate itself from Konidaris's work here. Please expand on this.\n\nThe experiments have some issues. One issue is that basic baselines could more clearly illustrate what is going on. There are two such baselines: random fixed weightings of the n-step returns, and persisting with the usual weighting but changing lambda on each time step (either randomly or according to some decay schedule). 
The first baseline is a sanity check to ensure that you are not observing some random effect. The second checks to see if your alternative weighting is simply approximating the benefits of changing lambda with time or state.\n\nI would say the current results indicate the conventional approach to TD is working well if not better than the new one. Looking at fig 3, its clear the kangaroo is skewing the results, and that overall the new method is performing worse. This is further conflated by fig7 which attempts to illustrate the quality of the learned value functions. In Kangaroo, the domain where your method does best, the l2 error is worse. On the other hand in sea quest and space invaders, where your method does worse, the l2 error is better. These results seem conflicting, or at least raise more questions than they answer.\n\n[1] A Greedy Approach to Adapting the Trace Parameter for Temporal Difference Learning . Adam White and Martha White. Autonomous Agents and Multi-agent Systems (AAMAS), 2016\n[2] G. D. Konidaris, S. Niekum, and P. S. Thomas. TDγ: Re-evaluating complex backups in temporal difference learning. In Advances in Neural Information Processing Systems 24, pages 2402–2410. 2011. ", "We thank the reviewer for the valuable feedback.\n\n1) LRA3C is the best algorithm in 9 out of 22 games and CARA3C is the best in 9 other games. In total, sophisticated mixtures of multi-step return methods perform better in 18 out of 22 games. This supports one of our claims that “sophisticated mixtures of multi-step return methods like lambda-returns and Confidence-based Autodidactic Returns leads to considerable improvement in the performance of a DRL agent”. We agree that CARA3C does not always outperform LRA3C. A binary comparison does seem to put both methods in an equal footing. But Table 1, shows that the average and median improvement in scores is clearly better for CARA3C compared to LRA3C. We believe that our work presents a new direction for research in value function learning to proceed. Due to limitation of computational resources we have access to and time, we were able to evaluate CARA3C only in the Atari domain. But considering that estimation of value function is fundamental to RL, our work is widely applicable. Our analysis of the confidence values show that the learned confidence values are indeed non-trivial and enables the agent to dynamically weigh the n-step returns (Section 4.2). Section 4.4 even shows certain instances where this has clearly enables to agent to achieve better game play. Our work proposes a simple and robust method for generalising the idea behind weighted returns. With all these in mind, we feel evaluation of our work based on the underlying theory, concept and it’s fundamentality is more reasonable at this stage rather than just based on it’s single-scenario performance in a limited domain such as Atari 2600 games.\n\n2) This is a typo. We have fixed it in the revised version.\n\n3) For predicting the confidence values, a distinct neural network is created which shares all but the last layer with the value function estimation network. So, every forward pass of the network on state s_t now outputs the value function V(s_t) and the confidence c(s_t). This is indicated in Figure 1. Figure 1 shows the CARA3C network unrolled over time and it visually demonstrates how the confidence values are calculated using the network for s_1. This figure demonstrates for m=20 and thus unrolls the network till s_21. 
So now we have V(s_2), V(s_3).......till V(s_21) and C(s_2), C(s_3).......till C(s_21). Using eqn(5), we now compute w(s_1)(i) for i=1 to i=20 and using eqn(2), we compute G_1(i) for 1=1 to i=20. Finally using eqn(1), we compute G_1(w) and this serves as an estimate for V(s_1). This completes the computation of the confidence values and how they are used. Now for learning the parameters we use back-propagation. However, during back-propagation of gradients in the A3C network, the parameters specific to the computation of the autodidactic return do not contribute to the gradient which flows back into the LSTM layer. This ensures that the parameters of the confidence network are learned while treating the LSTM outputs as fixed feature vectors. This entire scheme of not allowing gradients to flow back from the confidence value computation to the LSTM outputs has been demonstrated in Figure 1. The forward arrows depict the parts of the network which are involved in forward propagation whereas the backward arrows depict the path taken by the back-propagation of gradients. The complete reasoning behind this can be found in Section 3.5.\n\n4) For the legibility of the image, we have ensured that the image has a high resolution and we feel that the elements are quite clear when one zooms in. However, we will add a larger image in the camera-ready version.\n\n5) Thanks for raising this point. We will add a concise version of the algorithm in the main paper for our camera-ready version.\n", "We thank the reviewer for the valuable feedback.\n\n1) Can the reviewer please point us to justifications in favour of exponential weighting of lambda-returns? We will try to find such a reference. However, this seems more like an opinion than a fact and hence we are also ok with removing that second statement.\n\n2) Adapting lambda as a function of state is not sufficient for a simple reason: if the best next state (say s_{t+k}) for estimating the returns for current state (s_t) is not the immediately next one (s_{t+1}), no value of lambda can capture this. Thus we wanted to move past this dependence on lambda and give the agent the complete freedom learn the importance of all i-step returns for n >= i >=1 for every state s, on it’s own. This is how the idea of autodidactic returns and eventually confidence-based autodidactic returns was formulated. \n\n3) Comment on feasibility of n>>1:\nFor n-step returns, it has been empirically observed [cite A3C] that it is in fact for an intermediate value of n (like n = 5) that best results are obtained. This is because of the bias-variance trade-off in estimating the returns with larger values of n favouring lower bias but higher variance. \n\nOur formalism: \nOur formalism allows the agent to learn the importance of all i-step returns for n >= i >=1 for every state s. This is regardless of how large n is. It is true that when n is large the down-weighted value function’s importance in the target is low, but this is reflected only indirectly in our formalism. Which is to say, if such returns (i-step returns for i >>1) are not beneficial to the learning problem, the agent will automatically learn small weights corresponding to them. If it doesn’t, this perhaps means that the variance in the empirical estimation of the return is low. \n\n4) The network is updating the confidence values using the same objective function which is used for training the value function. We request the reviewer to go through second equation in section 3.4 and equation (5). 
Our answer for Reviewer 2’s first point may also provide more clarity.\n\n5) We thank the reviewer for introducing us to this work that we were unfamiliar with. We however feel that the issues addressed in that paper are quite different from that of our work and we are not exactly sure where the reviewer wants us to differentiate. Can the reviewer please elaborate?\n\n6) We believe the analysis presented in Sections 4.2, 4.3 and 4.4 sufficiently demonstrate the utility of our approach. While the additional experiments would certainly help, we feel that our current results already show that CARA3C does learn something non-trivial and does not explicitly raise the necessity for such sanity checks.\n\n7) Reviewer 3 seems to have a similar concern. We believe that our answer to his first point will address this as well.\n\n8) The value function that we are using here is that corresponding to the policies learned by the agents. This in no way talks about the quality of the policy. We are only talking about the approximation of the value function of the policy learned by the agent. Since in Kangaroo the other agents learn uniformly zero value policies, it is easier to approximate and hence they do a better job. When the policies are non-trivial, then CARA3C does a much better job of approximating the value function. \n\n\n", "We thank the reviewer for the generally positive review and valuable feedback.\n\n1) This is correct. To be clear, as shown in Figure 1, the network outputs the value function, confidence and policy. The additional output of confidence is our primary modification to the original A3C network. Specifically, the LSTM controller which aggregates the observations temporally is shared by the policy and the value networks. We extend the A3C network to predict the confidence values by creating a new output layer which takes as input the LSTM output vector (LSTM outputs are the pre-final layer). The confidence estimate are used to compute G_t^{w}(s_t) and the LSTM-to-Confidence parameters (confidence network) are updated using the gradient from the critic. But as described in Section 3.5, we do not allow these gradients to flow back from the confidence values computation last layer to the LSTM layer’s outputs. The parameters of the confidence network are learned while treating the LSTM outputs as fixed feature vectors. It’s however important to keep in mind that since all the three outputs (policy, value function, confidence on value function) share all but the last layer, G_t^{w}(s_t) depends on the parameters of the network which are used for value function prediction. But the gradients arising from the confidence estimates are used to update only newly added confidence network parameters and not the shared parameters.\n\nAll these points have already been mentioned in the paper (spread across sections 3.2, 3.4 and 3.5). However, we hope this summarization picks out the relevant points and provides some clarity.\n\n2) The variation in the reward ​is an \"irreducible\" component, and as such would not play a role in the variation of the confidence scores. The variation in the path taken is also irreducible in some sense, but can lead to larger variation in the estimates. But even this would average out in the long run. What would happen is that states that have highly uncertain trajectories after that would always have lower confidence scores. 
There is an irreducible level of uncertainty in the environment and we believe in the long run this will divert the agent into regions of the state where the irreducible variance or \"risk\" is lower. We agree that these points however require further analysis. Our current attempts at understanding the confidence values are however presented in Sections 4.2, 4.3 and 4.4 and we believe that it definitely provides some preliminary insights. \n\n3) Figure: https://imgur.com/a/ZjHE1\nOur goal behind the plot was to use A3C as a baseline and show the improvements achieved by the CARA3C and LRA3C. With that in mind, showing improvement % seemed to be more appropriate. But please refer to the figure above where we plotted the relative value in log-scale. Though this version of the plot does also highlight games where A3C performs better, our claim that CARA3C and LRA3C does significantly better on an overall is still clearly visible.\n\n4) We have introduced G_t first in Section 2.1 before using it in Page 3.\n\n5) This is a typo. We have fixed it in the revised version.\n\n6) We didn’t realise this would lead to a confusion. We have fixed it in the revised version.\n\n\n" ]
[ 5, 6, 5, -1, -1, -1 ]
[ 3, 4, 4, -1, -1, -1 ]
[ "iclr_2018_HkpRBFxRb", "iclr_2018_HkpRBFxRb", "iclr_2018_HkpRBFxRb", "ryLFZHGgG", "rkVjnRYef", "BkWlPOFlM" ]
iclr_2018_HyunpgbR-
Structured Exploration via Hierarchical Variational Policy Networks
Reinforcement learning in environments with large state-action spaces is challenging, as exploration can be highly inefficient. Even if the dynamics are simple, the optimal policy can be combinatorially hard to discover. In this work, we propose a hierarchical approach to structured exploration to improve the sample efficiency of on-policy exploration in large state-action spaces. The key idea is to model a stochastic policy as a hierarchical latent variable model, which can learn low-dimensional structure in the state-action space, and to define exploration by sampling from the low-dimensional latent space. This approach enables lower sample complexity, while preserving policy expressivity. In order to make learning tractable, we derive a joint learning and exploration strategy by combining hierarchical variational inference with actor-critic learning. The benefits of our learning approach are that 1) it is principled, 2) simple to implement, 3) easily scalable to settings with many actions and 4) easily composable with existing deep learning approaches. We demonstrate the effectiveness of our approach on learning a deep centralized multi-agent policy, as multi-agent environments naturally have an exponentially large state-action space. In this setting, the latent hierarchy implements a form of multi-agent coordination during exploration and execution (MACE). We demonstrate empirically that MACE can more efficiently learn optimal policies in challenging multi-agent games with a large number (~20) of agents, compared to conventional baselines. Moreover, we show that our hierarchical structure leads to meaningful agent coordination.
rejected-papers
The reviewers feel there are two issues that make this paper fall short of acceptance: first, the lack of a clear emphasis and focus (evidenced by the significant revisions) and second, a lack of comparison to similar, existing methods for multi-agent reinforcement learning.
train
[ "Hk4yYjNef", "SJmr_aFgf", "B1hxXS9xM", "HkULJrp7M", "r1WspD_7f", "BywLPDO7M", "rJiQSDumf", "SJHUqM2MM", "rJ_4ESsff", "Bkz6yknZf", "HJITw-LWf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "public", "public", "public", "public" ]
[ "This paper proposes an approach to improve exploration in multiagent reinforcement learning by allowing the policies of the individual agents to be conditioned on an external coordination signal \\lambda. In order to find such parametrized policies, the approach combines deep RL with a variational inference approach (ELBO optimization). The paper presents an empirical evaluation, which seems encouraging, but that is also somewhat difficult to interpret given the lack of comparison to other state-of-the-art methods.\n\nOverall, the paper seems interesting, but (in addition to the not completely convincing empirical evaluation), it has two main weaknesses: lack of clarity and grounding in related literature.\n\n=Issues with clarity=\n\n-\"This problem has two equivalent solutions\". \nThis is not so clear; depending on the movement of the preys it might well be that the optimal solution will switch to the other prey in certain cases?\n\n-It is not clear what is really meant with the term \"structured exploration\". It just seems to mean 'improved'?\n\n-It is not clear that the improvements are due to exploration; my feeling is that is is due to improved statistical strength on a more abstract state feature (which is learned), not unlike:\nGeramifard, Alborz, et al. \"Online discovery of feature dependencies.\" Proceedings of the 28th International Conference on Machine Learning (ICML-11). 2011.\nHowever, there is no clear indication that there is an improved exploration policy.\n\n-The problem setting is not quite clear:\nThe paper first introduces \"multi-agent RL\", which seems to correspond to a \"stochastic game\" (also \"Markov game\"), but then moves on to restrict to the \"fully cooperative setting\" (which would make it a \"Multiagent MDP\", Boutilier '96).\n\nIt subsequently says it deals only with deterministic problems (which would reduce the problem further to a learning version of a multiagent classical planning problem), but in the experiments do consider stochastically moving preys.\n\n-The paper says the problem is fully observable, but fails to make explicit if this is *individually* fully observable, or jointly. I am assuming the former, but is it not clear how the agents observe this full state in the experimental evaluation.\n\nThis is actually a crucial confusion, as it completely changes the interpretation of what the approach does: in the individually observable case, the approach is adding a redundant source of information which is more abstract and thus seems to facilitate faster learning. In the latter case, where agents would have individual observations, it is actually providing the agents with more information.\n\nAs such, I would really encourage the authors to better define the task they are considering. E.g., by building on the taxonomies of problems that researchers have developed in the community focusing on decentralized POMDPs, such as:\nGoldman, Claudia V., and Shlomo Zilberstein. \"Decentralized control of cooperative systems: Categorization and complexity analysis.\" (2004).\n\n-\"Compared to the single-agent RL setting, multi-agent RL poses unique difficulties. 
A central issue\nis the exploration-exploitation trade-off\"\nThat now in particular happens to be a central issue in single agent RL too.\n\n-\"Finding the true posteriors P(λ_t|s_t) ∝ P(s_t|λ_t)P(λ_t) is intractable in general\"\nThe paper did not explain how this inference task is required to solve the RL problem.\n\n-In general, I found the technical description impossible to follow, even after carefully looking at the appendix. For instance, (also) there the term P(λ|s) is suddenly introduced without explaining what the term exactly is? Why is the term P(a|λ) not popping up here? That also needs to be optimized, right? I suppose \\phi is the parameter vector of the variational approximation, but it is never really stated. The various shorthand notations introduced for clarity do not help at all, but only make the formulas very cryptic.\n\n-The main text is not readable since definitions, e.g., L(Q_r,\\theta,\\phi), that are in the appendix are now missing.\n\n-It is not clear to me how the second term of (10) is now estimated?\n\n-\"Shared (shared actor-critic): agents share a deterministic hidden layer,\"\nWhat kind of layer is this exactly? How does it relate to \\lambda ?\n\n-\"The key difference is that this model does not sample from the shared hidden layer\"\nWhy would sampling help? Given that we are dealing with a fully observable multiagent MDP, there is no inherent need to randomize at all? (there should be an optimal deterministic joint policy?)\n\n-\"There is shared information between the agents\"\nWhat information is referred to exactly? \nAlso: It is not quite clear if for these domains cloned would be better than completely independent learners (without shared weights)?\n\n-I can't seem to find anywhere what is the actual shape (or type? I am assuming a vector of reals) of the used \\lambda.\n\n-in figure 5, rhs, what is being shown exactly? What do the colors mean? Why does there seem to be a \\lambda *per* agent now?\n\n\n\n=Related work=\n\nI think the paper could/should be hugely improved in this respect. \n\nThe idea of casting MARL as inference has also been considered by:\n\nLearning for Decentralized Control of Multiagent Systems in Large, Partially-Observable Stochastic Environments.\nM Liu, C Amato, EP Anesta, JD Griffith, JP How - AAAI, 2016\n\nStick-breaking policy learning in Dec-POMDPs\nM Liu, C Amato, X Liao, L Carin, JP How\nInternational Joint Conference on Artificial Intelligence (IJCAI) 2015\n\nWu, F.; Zilberstein, S.; and Jennings, N. R. 2013. Monte-carlo\nexpectation maximization for decentralized POMDPs. In Proc.\nof the 23rd Int’l Joint Conf. on Artificial Intelligence (IJCAI-\n13).\n\nI do not think that these explicitly make use of a mechanism to coordinate the policies, since they address the true Dec-POMDP setting where each agent only gets its own observations, but in the Dec-POMDP literature, there also is the notion of 'correlation device', which is an additional controller (say corresponding to a dummy agent), of which the states can be observed by other agents and used to condition their actions on:\n\nBernstein DS, Hansen EA, Zilberstein S. Bounded policy iteration for decentralized POMDPs. In Proceedings of the nineteenth international joint conference on artificial intelligence (IJCAI) 2005 Jun 6 (pp. 52-57).\n\n(and clearly this could be directly included in the aforementioned learning approaches). 
\n\n\nThis notion of a correlation device also highlights to potential relation to methods to learn/compute correlated equilibria. E.g.,:\n\nGreenwald A, Hall K, Serrano R. Correlated Q-learning. In ICML 2003 Aug 21 (Vol. 3, pp. 242-249).\n\n\nA different connection between MARL and inference can be found in:\n\nZhang, Xinhua and Aberdeen, Douglas and Vishwanathan, S. V. N., \"Conditional Random Fields for Multi-agent Reinforcement Learning\", in (New York, NY, USA: ACM, 2007), pp. 1143--1150.\n\n\nThe idea of doing something hierarchical of course makes sense, but also here there are a number of related papers:\n\n-putting \"hierarchical multiagent\" in google scholar finds works by Ghavamzadeh et al., Saira & Mahadevan, etc.\n\n-Victor Lesser has pursued coordination for better exploration with a number of students.\n\nI suppose that Guestrin et al.'s classical paper:\nGuestrin, Carlos, Michail Lagoudakis, and Ronald Parr. \"Coordinated reinforcement learning.\" ICML. Vol. 2. 2002.\nwould deserve a citation, and the MARL field is moving ahead fast, an explanation of the differences with COMA:\nCounterfactual Multi-Agent Policy Gradients\nJ Foerster, G Farquhar, T Afouras, N Nardelli, S Whiteson\nAAAI 2018\nis probably also warranted.\n\n\n\n\n\n\n\n\n", "The paper proposes a method to coordinate agent behaviour by using policies that have a shared latent structure. The authors derive a variational policy optimisation method to optimise the coordinated policies. The approach is investigated empirically on 2 predator prey type games.\n\nThe method presented in the paper seems quite novel. The authors present a derivation of their variational, hierarchical update. Not all steps in this derivation are equally well explained, especially the introduction of the variational posterior could be more detailed. The appendix also offers very little extra information compared to the main text, most paragraphs concerning the derivations are identical. The comparison to existing approaches using variational inference is quite brief. It would be nice to have a more detailed explanation of the novel steps in this approach.\n\n It also seems that by assuming a shared model, shared global state and a fully cooperative problem, the authors remove many of the complexities of a multi-agent system. This also brings the derivations closer to the single agent case.\n\nA related potential criticism is the feasibility of using this approach in a multi-agent system. The authors are essentially creating a (partially) centralised learner. The cooperative rewards and shared structure assumptions structures mentioned above seem limiting in a multi-agent system. Even giving each agent local state observations is known to potentially create coordination problems. The predator prey games where agents with agents physically distributed over the environment are probably not the best motivational examples.\n\nOther remarks: \n\nEmpirical result show a clear advantage for this method over the baselines. The evaluation domains are relatively simple, but it was nice to see that the authors also make an attempt to investigate the qualitative behaviour of their method.\n\nThe overview of related work was relatively brief and focused mostly on recent deep MARL approaches. There is a very large body on coordination in multi-agent RL. It would be nice to situate the research somewhat better within this field (or at least refer to an overview such as Busoniu et al, 2010).\n\nIt seems like a completely factorised approach (i.e. 
independent agents) would make a nice baseline for the experiments, in addition to the shared architecture approaches.\n\n", "This paper suggests an interesting algorithmic innovation, consisting of hierarchical latent variables for coordinated exploration in multi-agent settings. \n\nMain concern: This work heavily relies on the multi-agent aspect for novelty : \n\"Houthooft et al. (2016) learned exploration policies via information gain using variational methods. However, these only consider 1 agent\". However, in the current form of the paper this is a questionable claim. As the problems investigated combine fully observable states, purely cooperative payouts and global latent variables, they reduce to single agent problems with a large action space. Effectively the 'different agents' are nothing but a parameterized action space of a central controller. \nUsing hierarchical latent variables for large action spaces is like a good idea, but placing the work into multi-agent seems like a red herring. \n\nGiven that this is a centralized controller, it would be really helpful to compare quantitatively to other approaches for structured exploration, eg [3] and [4].\n\nDetailed comments:\n-\"we restrict to fully cooperative MDPs that are fully observable, deterministic and episodic.\" Since the rewards are also positive, a very relevant baseline (from a MARL point of view) is distributed Q-learning [1].\n-Figure 3: Showing total cumulative terminal rewards is difficult to interpret. I would be interested in seeing standard 'training curves' which show the average return per episode after a given amount of training episodes. Currently it is difficult to judge whether training has converged on not.\n-Related work is missing a lot of relevant research. Apart from references below, please see [2] for a relevant, if dated, overview.\n-\"Table 1: Total terminal reward (averaged over 5 best runs)\" - how does the mean and median compare across methods for all runs rather than just the top 5? \n\n\nReferences:\n[1] Lauer, M., Riedmiller, M.: An algorithm for distributed reinforcement learning in cooperative\nmulti-agent systems. In: Proceedings 17th International Conference on Machine Learning\n(ICML-00), pp. 535–542. Stanford University, US (2000)\n[2] L. Bus¸oniu, R. Babuska, and B. De Schutter: Multi-Agent Reinforcement Learning: An Overview\n[3] Nicolas Usunier, Gabriel Synnaeve, Zeming Lin, Soumith Chintala: Episodic Exploration for Deep Deterministic Policies: An Application to StarCraft Micromanagement Tasks\n[4] Matthias Plappert, Rein Houthooft, Prafulla Dhariwal, Szymon Sidor, Richard Y. Chen, Xi Chen, Tamim Asfour, Pieter Abbeel, Marcin Andrychowicz: Parameter Space Noise for Exploration\n\n\n\n", "We'd like to notify reviewers that based on the reviews, we have uploaded a revision of our paper that addresses their comments and suggestions. Revised sections: \n\n1. Title\n2. Abstract\n3. Introduction\n4. Theory \n5. Related work\n6. Appendix", "Thank you for your interest and response! \n\nWe think the partial observable setting is very interesting to study wrt modeling policies as deep graphical models. In the partial observable case, one can choose a multi-agent learning problem definition to operate under. This has implications for how the joint / individual policies factorize / are shared. 
However, our method of modeling the hidden structure of the joint / individual policies can largely be generalized to these settings and are complementary to other works that operate under partial observability.\n\n1. For instance, one can provide only partial observations to the individual agent's policies, but give the model for lambda access to the full state. Then, the model for lambda can be interpreted as a sort of \"correlation device\". In this case, our variational approach and policy factorization can still be applied.\n\n2. Agent policies can be fully decoupled, where each agent now maintains their own coordination model (e.g. Q(lambda | observation_i). In this case, each agent can e.g. predict the observations / actions of other agents as well (see e.g. \"Multi-Agent Actor-Critic for Mixed Cooperative-Competitive Environments\", Lowe et al.). This could give the individual agents a way to learn an implied coordination model from their imputations of other agents' state. Here, the latent variables could compactly encode distributions that are efficient for state/action imputation for many agents simultaneously.\n\nNote that in our paper, the policy support centralized learning and decentralized execution, where the agents only have to share a random seed to sample lambdas during execution. Similarly, in the fully decoupled case, agents could still agree on a random seed during training and execution. \n\n\n\n\n", "Thanks for your interest in our paper! \n\n\"We think this paper is not a clear paper. First, the definition of symbols in the paper is not clear. Many symbols used in the paper is not defined.\"\n- We've uploaded a paper revision that clarifies the writing and definitions. Please let us know if there are further details that are unclear, so we can clarify further if needed.\n\n\"Second, the decomposition of distribution of latent variables in the paper is not clear and looks less reasonable.\"\n- We model the policy distribution P(a_t|s_t) = \\int dlambda P(a_t, lambda_t | s_t) using a latent variable lambda_t for each time-step t. The distribution of the latent variables lambda_t is learned by Q(lambda_t | s_t), which gives a principled variational lower bound on the log-likelihood of the policy distribution (and hence the expected reward). This is (one of) the simplest latent structure that one can assume, and that can capture a wide range of complex policy distributions P(a|s), given powerful neural network models for Q(lambda_t | s_t) and the \"decoder\" P(a|lambda, s).\n\n\"Third, it is not clear how the agents observe this full states of the environment, etc. The paper states that the architecture of their approach helps the coordination of agents. But we think it is nothing but a centralized controller and we do not know the advantage of it over centralized controller method.\"\n\n- The centralized controller observes the positions of all agents (predators and prey). The coordination of the predators' actions is correlated through the latent variable lambda. \n\nIt is unclear to us what is meant by \"centralized controller method\". We invite the commenter to clarify this. \n\n", "Thank you for your interest! We're working on cleaning the code and making it easy to use. We will let you know as soon as possible when we have a version that can be shared. ", "Really enjoyed reading the paper, one of better papers at ICLR in our opinion. It seems to us that you should be able to extend this to partially observable scenarios as well, and not just fully observable. 
Could you comment where things break down when assuming partially observability? ", "We think this paper is not a clear paper. First, the definition of symbols in the paper is not clear. Many symbols used in the paper is not defined. Second, the decomposition of distribution of latent variables in the paper is not clear and looks less reasonable. Third, it is not clear how the agents observe this full states of the environment, etc. The paper states that the architecture of their approach helps the coordination of agents. But we think it is nothing but a centralized controller and we do not know the advantage of it over centralized controller method. To sum up, we think this paper is not in a high quality.\n", "zhanzhao@umich.edu", "I am a master student at Umich. We are now taking part in ICLR 2018 Reproducibility Challenge. So I wonder if I can get your code for the reproduce of the results in your paper. It is encouraged to get the code in this challenge and do some further investigations. So it will be great if you could offer us the code. My Email address is zhanzhao@umich.edu. Thank you so much!" ]
[ 4, 7, 5, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 5, 3, 3, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_HyunpgbR-", "iclr_2018_HyunpgbR-", "iclr_2018_HyunpgbR-", "iclr_2018_HyunpgbR-", "SJHUqM2MM", "rJ_4ESsff", "HJITw-LWf", "iclr_2018_HyunpgbR-", "iclr_2018_HyunpgbR-", "HJITw-LWf", "iclr_2018_HyunpgbR-" ]
iclr_2018_BJvWjcgAZ
Sample-Efficient Deep Reinforcement Learning via Episodic Backward Update
We propose Episodic Backward Update - a new algorithm to boost the performance of a deep reinforcement learning agent by fast reward propagation. In contrast to the conventional use of the replay memory with uniform random sampling, our agent samples a whole episode and successively propagates the value of a state into its previous states. Our computationally efficient recursive algorithm allows sparse and delayed rewards to propagate effectively throughout the sampled episode. We evaluate our algorithm on 2D MNIST Maze Environment and 49 games of the Atari 2600 Environment and show that our agent improves sample efficiency with a competitive computational cost.
rejected-papers
The reviewers agree the proposed idea is relatively incremental, and the paper itself does not do an exemplary job in other areas to make up for this.
train
[ "SJ3y_pYxM", "H1gBrkcgM", "HyxmggJbM", "S1CtiJ4EM", "BJ1nRs3Xf", "Hync2bzMz", "B1__SU0Zz", "B11nkS0-M", "r1lBnW0Zf", "Hyp9C2pbf", "rkbujh2-M", "B1Q8wv2ZG", "Bk-FxwW-G", "HJIuwJGZz", "HygPguZ-M", "B1fm4OW-G", "ByT_zd-Wz", "H1GqLfb-f" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "public", "author", "public", "author", "public", "author", "public", "author", "public", "author", "author", "author", "public" ]
[ "This paper proposes a new variant of DQN where the DQN targets are computed on a full episode by a « backward » update (i.e. from end to start of episode). The targets’ update rule is similar to a regular tabular Q-learning update with high learning rate beta: this allows faster propagation of rewards obtained at the end of the episode (while beta=0 corresponds to regular DQN with no such reward propagation). This mechanism is shown to improve on Q-learning in a toy 2D maze environment (with MNIST-based pixel states providing cell coordinates) with beta=1, and on DQN and its optimality tightening variant on Atari games with beta=0.5.\n\nThe intuition behind the algorithm (that one should try to speed up the propagation of rewards across multiple steps) is not new, in fact it has inspired other approaches like n-step Q-learning, eligibility traces or more recently Retrace(lambda) in deep RL. Actually the idea of replaying experiences in backward order can be traced back to the origins of experience replay (« Programming Robots Using Reinforcement Learning and Teaching », Lin, 1991), something that is not mentioned here. That being said, to the best of my knowledge the specific algorithm proposed in this submission (Alg. 2) is novel, even if Alg. 1 is not (Alg. 1 can be seen as a specific instance of Lin’s algorithm with a very high learning rate, and clearly only makes sense in toy deterministic environments).\n\nIn the absence of any theoretical analysis of the proposed approach, I would have expected an in-depth empirical validation. Unfortunately this is not the case here. In the toy environment (4.1) I am surprised by the really poor quality of the results (paths 5-10 times longer than the shortest path on average): have algorithms been run for a long enough time? Or maybe the average is a bad performance measure due to outliers? I would have also appreciated a comparison to Retrace(lambda), which is a more principled way to use multi-step rewards than n-step Q-learning (which is technically an on-policy method). Similar remarks can be made on the Atari experiments (4.2), where 10M frames is really low (the original DQN paper had results on 50M frames, and Rainbow reports 200M frames in only ~2x the training time reported here). The comparison also should have included prioritized experience replay, which has been shown to provide a significant boost in DQN, but may be tricky to combine with the proposed algorithm. Overall comparing only to vanilla DQN and its optimality tightening variant is too limited when there have been so many other meaningful improvements over DQN. This makes it really hard to tell whether the proposed algorithm would actually help when combined with a state-of-the-art method like Rainbow for instance.\n\nA few additional small remarks and questions:\n- « Second, there is no point in updating a one-step transition unless the future transitions have not been updated yet. »: should « unless » be replaced by « if »?\n- In 4.1 is there a maximum number of steps per episode and can you please confirm that training is done independently for each maze?\n- Typo in eq. 
3: the - in the max should be a comma\n- There is a good amount of typos and grammar errors, though they do not harm the readability of the paper\n- Citations for « Deep Reinforcement Learning with Double Q-learning » and « Dueling Network Architectures for Deep Reinforcement Learning » could refer to their conference versions\n- « epsilon starts from 1 and is annealed to 0 at 200,000 steps in a quadratic manner »: please specify the exact formula\n- Fig. 7 is really confusing, there seem to be typos and it is not clear why the beta updates appear in these specific cells, please revise it if you want to keep it", "The authors propose a simple modification to the DQN algorithm they call Episodic Backward Update. The algorithm selects transitions in a backward order fashion from end of episode to be more effective in propagating learning of new rewards. This issue of fast propagation of updates is a common theme in RL (cf eligibility traces, prioritised sweeping, and more recently DQN with prioritised replay etc.). Here the proposed update applies the max Bellman operator recursively on a trajectory (unsure whether this is novel), with some decay to prevent accumulating errors with the nested max.\n\nThe paper is written in a clear way. The proposed approach seems reasonable, but I would have guessed that prioritized replay would also naturally sample transitions in roughly that order - given that TD-errors would at first be higher towards the end of an episode and progress backwards from there. I think this should have been one of the baselines to compare to for that reason.\n\nThe experimental results seem promising in the illustrative MNIST domain. Atari results seem decent, especially given that experiments are limited to 10M frames, though the advantage compared to the related approach of optimality tightening is not obvious. \n", "This paper proposes a new way of sampling data for updates in deep-Q networks. The basic principle is to update Q values starting from the end of the episode in order to facility quick propagation of rewards back along the episode.\n\nThe paper is interesting, but it lacks the proper comparisons to previously published techniques.\n\nThe results presented by this paper shows improvement over the baseline. But the Atari results is still significantly worse than the current SOTA.\n\nIn the non-tabular case, the authors have actually moved away from Q learning and defined an objective that is both on and off-policy. Some (theoretical) analysis would be nice. It is hard to judge whether the objective defined in the non-tabular defines a contraction operator at all in the tabular case.\n\nThere has been a number of highly relevant papers. Prioritized replay, for example, could have a very similar effect to proposed approach in the tabular case.\n\nIn the non-tabular case, the Retrace algorithm, tree backup, Watkin's Q learning all bear significant resemblance to the proposed method. Although the proposed algorithm is different from all 3, the authors should still have compared to at least one of them as a baseline. The Retrace algorithm specifically has also been shown to help significantly in the Atari case, and it defines a convergent update rule.", "Thanks for the reply, and for the revisions to the manuscript. It's great that you added more material, in particular more experiments and a theoretical analysis. 
Unfortunately I'm afraid it's a bit too much for a paper revision, as it would require a re-review, thus I am reluctant to improve my score without re-reading the whole thing carefully (which I lack time for).\n\nI'm still quite concerned by the small number of steps (10M) in experiments. I guess you are limited by your single GPU, which is sad, but I don't think one can draw meaningful conclusions on such small-scale experiments. I'm worried that prioritized experience replay doesn't seem to work much better than Vanilla DQN (no improvement on Median score for instance), while previous work suggests it is an important ingredient (ex: Dueling networks, Rainbow). Assuming this is not an implementation issue, the small number of steps could be the culprit.", "We have uploaded our revised paper. Below is the list of major revisions we made.\n\n1. Added Lin, 1991\n\nOriginal idea of replaying backward is described in the introduction and the related work sections.\nClarified that alg.1 is a special case of Lin's algorithm.\n\n2. MNIST DQN\n\nFigure 3 plotted until 200,000 steps.\nTable 1 reports both mean and median scores at 100,000 steps.\nMore detailed explanations on epsilon scheduling and the time step limit in Appendix C.\n\n3. Arcade Learning Environment\n\nAdded Prioritized ER and Retrace algorithm as baselines.\nFigure 5: changed the set of games from (Atlantis, Breakout, Gopher and Pong) to (Assault, Breakout, Gopher and Video Pinball) and also included results of new baselines.\nAppendix A and Appendix B: results from new baselines added. Appendix A no longer includes standard deviation information due to the margin.\nAppendix C contains specifications of baselines.\n\n4. Supplementary figure of Appendix D\n\nChanged the notations: capital 'A' means the realizations of the sampled episode, lowercase 'a' means the possible action index of the environment.\nDescirbed the update step by step: the first and second iterations.\n\n5. Theoretical Guarantees \n\nAdded a theorem in section 3 that episodic backward update with a diffusion coefficient beta in (0,1) defines a contraction operator and converges to the optimal value in finite and deterministic MDPs.\nStated the proof in Appendix E.", "As part of our efforts to reproduce the results of the original paper, we exhaustively researched about DQN, Optimality Tightening and other algorithms in the context of the original paper, and gained an understanding of the proposed EBU algorithm and setup the environment required to implement the algorithm. We attempted to use the code provided at https://github.com/ShibiHe/\nQ-Optimality-Tightening, the outline in the paper and the authors’ assistance to reproduce a subset of their results. \n\nObstacles to reproducibility:\nThe major obstacles we faced when attempting to reproduce the planned subset of the results were:\nDifficulty in translating the conceptual changes pointed out by the authors in their implementation and Q-Optimality Tightening (OT) codebases.\nComputational costs, and inefficiencies in the original OT code which meant that even on a Titan XP, only about 90 steps a second and 30% GPU utilization could be achieved(1). 
\nOutputs generated from the OT code were the only ones native to the code, therefore core evaluations (assessed and compared in the paper) couldn’t be run.\nLack of original EBU implementation code coupled with a lack of prior exhaustive theoretical knowledge meant that despite the modest number of changes, implementation of EBU was very challenging.\n\nReproduced results: We were able to establish two kinds of baselines, first being the average reward per episode over each of the 40 epochs for 3 games in the Arcade Learning Environment (Breakout, Video Pinball and Pong) and secondly we were able to reproduce and confirm the run time for the OT baseline for all the 3 aforementioned games through extrapolation, noticing signs of decay as epoch progressed for Pong, when running on GTX970, and a few spikes when running Breakout and Video Pinball on Titan XP. We did this using the original hyperparameters of the paper (which were almost the same as the OT code), as specified by the authors. Due to the significant computational cost of each run, we were not able to attempt a wide variety of alternative parameters. We were however, unable to definitively reproduce the author’s original results. The attempt to incorporate EBU was hampered by the size and complexity of the _do_training function and a lack of comments or documentation for the original OT code, which made correlating the pseudo code to the actual code difficult.\n\n\n\n(1). The limitation of GPU occupation saturating at 30% is also mentioned in the Readme of Q-Optimality Tightening implementation on Github.\n\n\nTyler Kolody, Yacine Mahdid and Rajat Bhateja", "I guess your source of confusion is that the same number 30 is used twice.\n\n30 no-op evaluation method does not mean that we test the agent for 30 episodes. But means that each episode starts with at most 30 no-op actions. It is already implemented in the \"_init_episode\" function of \"ale_experiment.py\". And we take the average score of 30 (just by chance, has nothing to do with the \"30\" in 30 no ops evaluation) episodes generated by 30 no-op evaluation method.\n\nSo we modified the \"run_epoch\" function and \"run\" function of \"ale_experiment.py\". When the \"testing\" parameter is true, we ran the \"run_episode\" function 30 times and saved the average score. So STEPS_PER_TEST = 125000 is not used. \n\nRefer to page 7 of « Learning to Play in a Day: Faster Deep Reinforcement Learning by Optimality Tightening», He et al., 2017 :\n\"\"We strictly follow the evaluation procedure in (Mnih et al., 2015) which is often referred to as ‘30\nno-op evaluation.’ During both training and testing, at the start of the episode, the agent always\nperforms a random number of at most 30 no-op actions. During evaluation, our agent plays each\ngame 30 times for up to 5 minutes, and the obtained score is averaged over these 30 runs. An \u000f-\ngreedy policy with \u000f = 0:05 is used. Specifically, for each run, the game episode starts with at most\n30 no-op steps, and ends with ‘death’ or after a maximum of 5 minute game-play, which corresponds\nto 18000 frames.\"\"\n\n\n", "I think my confusion stems from the fact that the only test I see is one that is dictated by steps, default=125000, rather than episodes, and explicitly is commented as \"runtime evaluation, not 30 no-op evaluation\" and I'm wondering how to use the 30 op evaluation. I have been unable to track down any way to get scores, but instead only get the standard 3 csv files, based on the training. \n\nThank you", "1. 
The max_steps variable should be 4500 steps. For a fair comparison, we set the same parameters for EBU and other baselines.\n\n2. There are some sources of randomness in the algorithm. \n 1) epsilon greedy exploration\n 2) sampling episode\n 3) number of steps for no-ops\nTo test the robustness of the algorithm, we used 8 different random seeds for the randomness. For each seed, at the end of every epoch we test the agent for 30 episodes with epsilon = 0.05. Since one epoch is 250,000 frames and we train for 1,000,000 frames in total, we have 40 test results for an agent with single random seed. Since there are oscillations in the test score, as mentioned in the paper, we take the best result out of 40 test scores as the result of the agent with that random seed. (following common practice (van Hasselt et al., 2015; Mnih et al., 2015)). Since we have 8 agents with different random seeds, we have 8 such results and we take mean of them to output the raw score.\n\nexample) suppose we have 10 epochs and 2 random seeds.\n\nepoch | 1 2 3 4 5 6 7 8 9 10\n\nseed 1 test score | 10 20 30 40 50 60 40 20 50 50 --> seed 1 result = 60\n\nseed 2 test score | 5 10 20 30 50 40 30 40 50 40 --> seed 2 result = 50 \n\nWe output mean of the results from all random seeds: (60+50)/2 = 55 as the result\n\n", "For the final step size, we found the max_steps variable and were wondering if it should be set to 18000 (the number of frames), or if the variable refers to steps and it should be set to 18000/4 frames per step = 4500? Was this change made for the OT baseline, or just the EBU code?\n\nRegarding outputs, I apologize if i've just overlooked something obvious in the code, but how do you get the raw scores for Appendix A? I see the 30 no-ops evaluation, but am not sure where to look/what I'm looking for regarding the output. \n\nUPDATE: The author of the original code mentioned they wrote something separate for those evaluations: was that the case for you as well?\n\nThanks as always", "Thank you for your efforts.\n\n1. What we mean by the final time step is not the parameter freeze_interval. But the maximum number of frames that each episode can last. This is a conventional way taken by all other algorithms on the Atari domain since some games may not terminate if the agent takes no significant actions. Refer to \"run_episode\" function of \"ale_experiment.py\". \n\n2. As mentioned in the paper, we trained the agent for 40 epochs (we set 1 epoch = 62,500 steps = 250,000 frames). So that makes a total of 10M frames of training.\n\n3. If you mean the Nature DQN, you may want to use one of the following codes:\n 1) The original Lua code by Deepmind ( https://sites.google.com/a/deepmind.com/dqn/)\n 2) Theano based Deep-Q RL code (https://github.com/spragunr/deep_q_rl)\n\n4. We do not really have any tweaks to accelerate the learning. But the Theano version of the DQN code tends to be faster than the original Lua code. For its simplicity, \"Pong\" takes the least amount of training time out of the 49 games we tried. \n\nBest of luck.\n", "Hi, as part of our efforts to reproduce the experiments suggested in your paper, we wanted to ask a few more questions:\n\n1. When you mention the changing the final time step to 18000 frames, did you mean the parameter freeze_interval in q_network.py and did you notice degrading steps per second when you were training it. \n\n2. When running a baseline for nature DQN, how many epochs did you run it for\n\n3. 
Is there a way to test the DQN without using Optimally Tightening, and finally\n\n4. Did you make any changes or tweaks to accelerate the learning to get the training times you mentioned and if you remember, which game took the least amount of time to train.\n\nOnce again, thanks for your previous response and hoping to hear from you soon.\n", "We are planning to upload our code after the revision process since we cannot reveal our identity before the final decision. \n\nBut as described in the paper, our code is built upon the codes of the paper (« Learning to Play in a Day: Faster Deep Reinforcement Learning by Optimality Tightening», He et al., 2017) https://github.com/ShibiHe/Q-Optimality-Tightening\nAll the hyperparameters and network structures are the same as those of above, except that we applied the final time step of 18000 frames (5 mins) for each episode. \n\nThe two major differences between our code and that of Optimality Tightening are the followings.\n1. To implement our backward target generation, we modified the \"_do_training\" function of \"ale_agents.py\".\n2. To sample a random episode, we defined \"random_batch\" function in \"ale_data_set.py\". This function is run only after all steps of previously sampled episode are updated.\n\nThank you.\n", "We really appreciate the quick response and details. \n\nBest of luck", "Thank you for your detailed feedback and questions.\nI'd like to answer some of questions and share our plan to revise the paper with regards to your feedback.\n\n1. Limited comparison\nWe strongly agree that we need more baseline algorithms to show the effectiveness of our algorithm. As other reviewers have suggested, we will include the performance of prioritized experience replay and retrace algorithm in the revised version.\n\n2. Idea of replaying experiences in backward\nThank you for the reference, we will mention the relationship between Lin's idea and our methods in the revised version.\n\n3. Poor performance in MNIST DQN\nLearning curve tends to converge so fast for all algorithms when we used simple 2D maze, so it was difficult to compare different algorithms. So we used MNIST images as the state representation to make the learning process of general state transitions harder. We trained the agents for 200,000 steps, and all three algorithms (backward DQN, vanilla DQN, n-step DQN) converge to 1. In the paper, we showed the plots over 100,000 steps to show the effectiveness of our method in the early stages of training. To avoid any confusion, we will show the results until 200,000 steps in the revised version. Note that the vanilla DQN is trained for 50M steps (200M frames) in the Atari domain. Since the MNIST DQN environment is much simple, it is reasonable that the training is done for 0.2M steps. \n\n4. A few more comments on MNIST DQN:\nWe terminated the episode when the agent stays in the maze for more than 1000 time steps.\nWe trained 50 different independent agents each in a different random maze and reported the mean score. But as you suggested, mean may be a bad measure due to outliers. So we will show both mean and median of 50 agents' scores as the result in the revised version.\n\n5. Running time compared to RAINBOW\nRunning time may vary a lot depending on which device and distributed method you use. We used a single GPU to train an agent. As reported in the paper, it took 152 hours to train 490M frames (49 games x 10M frames). RAINBOW takes 10 days to train 200M frames. 
We will mention that the training time is not the 'mean' training time of 49 games but the 'sum' of training time in the revised version.\n\n6. The last figure\nWe apologize for the confusion. The first column and fourth rows of initialization and recursive updates part should be changed as \"s_1\" -> \"s_2\". The beta is applied only for the positions where the actions were taken in the replay memory, as the update is done from right to left. a_T = A_2, a_(T-1) = A_1 in the example. We will make this clear in the revised version.\n\n7. Typos and Citations\nWe will correct the typos and citations as your suggestions.\n\nThank you so much for your ideas and suggestions.\nAny further comments are appreciated.\n\n\n\n\n", "Thank you for your time and suggestions.\n\nAs you mentioned, we guess there may be some relation between prioritized experience replay and our method. As all the reviewers have mentioned, we will add prioritized experience replay and retrace algorithm as the baseline to compare in the revised version.\n\nAny further suggestions are appreciated.", "Thank you for your time and suggestions.\n\nAs you and other reviewers have mentioned, we strongly agree that we lack the comparisons to other related methods. We will try to compare our results and those of prioritized experience replay and retrace algorithm in the revised version. Also we will try to add some theoretical analysis to compare our algorithm to others.\n\nAny further comments and thoughts are appreciated.", "I'm taking part in the reproducibility challenge put forward by Prof. Joelle Pineau and was wondering if we could have access to your code and any other information that might be pertinent when recreating your experiment. Any information such as hyperparameter values not mentioned in the paper and library versions would be extremely useful. \n\nThank you" ]
[ 4, 6, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_BJvWjcgAZ", "iclr_2018_BJvWjcgAZ", "iclr_2018_BJvWjcgAZ", "HygPguZ-M", "iclr_2018_BJvWjcgAZ", "iclr_2018_BJvWjcgAZ", "B11nkS0-M", "r1lBnW0Zf", "Hyp9C2pbf", "rkbujh2-M", "B1Q8wv2ZG", "Bk-FxwW-G", "H1GqLfb-f", "Bk-FxwW-G", "SJ3y_pYxM", "H1gBrkcgM", "HyxmggJbM", "iclr_2018_BJvWjcgAZ" ]
iclr_2018_Sy_MK3lAZ
PARAMETRIZED DEEP Q-NETWORKS LEARNING: PLAYING ONLINE BATTLE ARENA WITH DISCRETE-CONTINUOUS HYBRID ACTION SPACE
Most existing deep reinforcement learning (DRL) frameworks consider action spaces that are either discrete or continuous space. Motivated by the project of design Game AI for King of Glory (KOG), one the world’s most popular mobile game, we consider the scenario with the discrete-continuous hybrid action space. To directly apply existing DLR frameworks, existing approaches either approximate the hybrid space by a discrete set or relaxing it into a continuous set, which is usually less efficient and robust. In this paper, we propose a parametrized deep Q-network (P-DQN) for the hybrid action space without approximation or relaxation. Our algorithm combines DQN and DDPG and can be viewed as an extension of the DQN to hybrid actions. The empirical study on the game KOG validates the efficiency and effectiveness of our method.
rejected-papers
The idea studied here is fairly incremental and the empirical evaluation could be improved.
train
[ "SkQBXUfJG", "r1j4GjKeM", "BkO0bWclz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper examines a modified NN architecture and algorithm (P-DQN) for learning in hybrid discrete/continuous action spaces. The authors come up with a clever way of modifying the architecture of parameterized-action-space DDPG (as in Hausknecht & Stone 16) in such a way that the actor only outputs values for the continuous actions and the critic outputs values for all discrete actions, parameterized by the actor’s choice of continuous actions. Overall, I think this is an interesting and valid modification to the DDPG architecture, with results to show improved sample complexity. However, there is no quantitative analysis of why the new architecture works better, insufficient understanding of the new domain and learning task, and overall rough presentation.\n\nClarity: The writing clarity is rough, but understandable, with numerous minor grammar mistakes. The paper is overly long and could be improved by a more compact presentation of background, algorithms, and results.\n\nOriginality: The paper builds on DDPG and explores a novel modification to the architecture. \n\nSignificance: It’s hard to evaluate the significance of this result because of the lack of videos + information on the Moba environment. The proposed P-DQN architecture is interesting and, if the results on the Moba environment are general, could be of use in future hybrid-discrete-continuous action space domains. \n\nPros:\n•\tThe modification to DDPG is genuinely interesting and does result in an algorithm that is a hybrid between DQN and DDPG.\n•\tThe learning curves show evidence of faster learning using the P-DQN architecture.\n\nCons:\n•\tIt’s difficult to confidently evaluate the merits of P-DQN vs DDPG based only on learning curves from a single, new domain. It would be nice to have explored results on additional domains such as Robot Soccer (HFO), where algorithm could have been compared directly to DDPG.\n•\tThere is very little analysis of why P-DQN exhibits better sample complexity. The authors claim the difference stems from explicit computation over the discrete actions, but this is never analyzed.\n•\tVery difficult to read the axes on the plots in Fig 3.\n•\tNot much detail is given about the domain – who or what is the agent playing against? Is the agent playing against a bot or just learning to kill creeps? Would be great to have a video of the learned policy (or evaluation against human / scripted opponent) so that others can understand the quality of the learned policy.\n", "In this paper, the authors investigate RL agents whose action space contains discrete dimensions, and some continuous dimensions. They approach the problem by tackling the continuous dimensions with DDPG, max-marginalizing out the continuous actions, and tackling the remaining dimensions with classical Q-learning. They apply their method to a MOBA-game, King of Glory.\nMethodologically, the method is a somewhat straightforward combination of DDPG and Q-learning; experimentally, they demonstrate improved performance (2-3x sample efficiency) compared to a modified DDPG algorithm from Hausknecht and Stone. Overall, methodologically, the paper is on the incremental side; experimentally, the authors attack a hard problem, and obtain moderate improvements. 
The most interesting part of the paper in my mind is the challenging domain of application; maybe trying their algorithm on slightly more difficult settings (different 'heroes', higher AI level) would have made the benefits of their method more evident.\n\nMinor:\n- Paper is significantly over the page limit; in many places, writing could be improved, many typos in paper\n(in the first page: \"project of design\"-> \"project of designing\"; \"farmework\"->\"framework\"; \"we consider the scenario\" \"problems that are once\" are clumsy).\nThe used of 'parameters' for what is effectively a continuous action is a bit confusing; I realize this is borrowed from Hausknecht and Stone, but the use of the term deserves a bit more clarification (they are effectively continuous actions, but in this particular game, they parametrize a particular discrete action).\n- Equation 2.2: Note that the term (r_t+ \\gamma ...) is not differentiated even though it appears in the loss, various paper use different notations to denote this. As it is, the loss is slightly incorrect; same issue with the last equation on page 3.\n- just after equation 2.3, the multiplier of \\grad \\log p_\\theta for REINFORCE is not the reward r_t but the return R_t.\n", "This paper presents a new reinforcement learning approach to handle environments with a mix of discrete and\ncontinuous action spaces. The authors propose a parameterized deep Q-network (P-DQN) and leverage learning\nschemes from existing algorithms such as DQN and DDPG to train the network. The proposed loss function and\nalternating optimization of the parameters are pretty intuitive and easy to follow. My main concern is\nwith lack of sufficient depth in empirical evaluation and analysis of the method.\n\nPros:\n1. The setup is an interesting and practically useful one to investigate. Many real-world environments require individual actions\n that are further parameterized over a continuous space.\n2. The proposed method is simple and intuitive.\n\nCons:\n1. The evaluation is performed only on a single environment in a restricted fashion. I understand the authors are restricted in the choice of environments which require a hybrid action space. However,\n even domains like Atari could be used in a setting where the continuous parameter x_k refers to the number of\n repetitions for action k. This is similar to the work of Lakshminarayanan et al. (2017). Could you test your algorithm in such a setting?\n2. Comparison of the algorithm is performed only against DDPG. Have you tried other options like PPO (Schulman et al., 2017)?\n Also, considering that the action space is simplified in the experimental setup (\"we use the default parameters of skills provided by the game environment, usually pointing to\nthe opponent hero's location\"), with only the move(\\alpha) action being a hybrid, one could imagine discretizing the move\ndirection \\alpha and training a DQN (or any other algorithms over discrete action spaces) as another baseline.\n3. The reward structure seems to be highly engineered. With so many components in the reward, it is not clear\nwhat the individual contributions are and what policies are actually learned.\n4. The authors don't provide any analysis of the empirical results. Do the P-DQN and DDPG converge to the same policy?\nWhat factor(s) contribute most to the faster learning of P-DQN? Do the values of \\alpha and \\beta for the two-timescale\nupdates affect the results considerably?\n5. 
(minor) The writing contains a lot of grammatical errors which makes this draft below par for an ICLR paper.\n\n\nOther Questions:\n1. In eq. 5.3, the loss over \\theta is defined as the sum of Q values over different k. Did you try other formulations of\nthe loss? (say, product of the Q values for instance) One potential issue with the sum could be that if some values of k dominate this sum, Q(s, k, x_k; w) might not be maximized for all k.\n2. Some terms of the reward function seem to be overly dependent on historic actions (ex. difference in gold and hitpoints). This could swamp the\ninfluence of the other terms which are more dependent on the current action a_t, which might be an issue, especially with the Markovian assumption?\n\nReferences:\n Lakshminarayanan et al, 2017; Dynamic Action Repetition for Deep Reinforcement Learning; AAAI\n Schulman et al., 2017; Proximal Policy Optimization Algorithms; Arxiv\n" ]
[ 5, 5, 4 ]
[ 4, 3, 4 ]
[ "iclr_2018_Sy_MK3lAZ", "iclr_2018_Sy_MK3lAZ", "iclr_2018_Sy_MK3lAZ" ]
iclr_2018_S1GDXzb0b
Model-based imitation learning from state trajectories
Imitation learning from demonstrations usually relies on learning a policy from trajectories of optimal states and actions. However, in real life expert demonstrations, often the action information is missing and only state trajectories are available. We present a model-based imitation learning method that can learn environment-specific optimal actions only from expert state trajectories. Our proposed method starts with a model-free reinforcement learning algorithm with a heuristic reward signal to sample environment dynamics, which is then used to train the state-transition probability. Subsequently, we learn the optimal actions from expert state trajectories by supervised learning, while back-propagating the error gradients through the modeled environment dynamics. Experimental evaluations show that our proposed method successfully achieves performance similar to (state, action) trajectory-based traditional imitation learning methods even in the absence of action information, with much fewer iterations compared to conventional model-free reinforcement learning methods. We also demonstrate that our method can learn to act from only video demonstrations of expert agent for simple games and can learn to achieve desired performance in less number of iterations.
rejected-papers
The paper is hard to follow at times. The heuristic reward has little justification -- not clear how this would extend to other domains. Lack of empirical comparisons (see e.g. Hester et al., Deep Q-Learning from Demonstrations, 2017).
train
[ "rJynoO9zM", "SJlNadqfM", "ry8Bn_5Mz", "rJfWQ9OeG", "SymLN__gM", "HJO3Kl0ef", "B1V5vHb1M" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "public" ]
[ "Thank you for the overall encouraging review. We address some of the concerns in the following,\n\nQ : Not clear that method converges on all problems. \nA: Yes it does not converge on all dynamics models. Currently, the main drawback of the method is that it cannot model complex dynamics models like raw video transitions as mentioned in the anonymous comment also.\n\nQ : Not clear that the method is able to extract the state from video — authors had to extract position manually\nA: Learning the useful state representations from raw video is a challenging problem. In literature, Pathak. et al. ICML 2017 proposes to use a feature extractor \\phi, which learns to predict the action given the current and next state. However, in our case, we simplify the assumption by manually specifying parts of the state that depends on the actions. This is a limitation of the proposed method but we hope to address this issue in the future versions using methods in literature, such as Pathak. et al. ICML 2017.\n\nThe overall approach and algorithms are described fairly clearly. Some minor typos here and there.\nA : We changed the typos and reordered the figures.\n", "Thank you for illustrating an alternative model-based method. We believe this is a useful baseline method that we should compare our method against. We agree that since our experiments are simple in nature, this proposed alternative method might perform equally well compared to the proposed method. \n\nHowever, the main difference between the suggested method and the proposed method, is that, the suggested method attempts to learn the inverse dynamics of the system (eg. given two locations (a,b) in space of the end-effector, find the torque value for moving a robotic arm from a to b) which might be difficult to learn in a general setting. Our proposed method learns the forward dynamics (eg. given current locations 'a' of the end-effector and torque values, find the next location), which might have a well-defined equation in mechanics in most general cases. However, we do agree that for the simple experimental evaluations that we have performed both, inverse and forward dynamics might be of equal difficulty.\n\nNo, in our case we manually specify the location of the flappy-bird as \\phi(s_t). It is challenging to directly learn dynamics model between raw video streams as has been already pointed out in the comment, and it is a limitation of the current proposed method. One method can be to automatically learn \\phi(s_t) in case of high dimensional inputs, using action prediction from consecutive states, using ideas in prior methods, like Pathak. et al. ICML 2017.\n", "We thank the reviewer for the overall constructive comments.\n\nQ : It does not cite or discuss a very important piece of related work: \"Imitation from Observation: Learning to Imitate Behaviors from Raw Video via Context Translation\" (Liu et al., 2017)\nA: Thank you for pointing out the relevant prior work using observations only. We added citation to this work in the introduction section of the paper. Our main contribution in this work is to show that the proposed method uses a combination of model-based and model-free methods for acceleration in imitation learning from observations alone. 
Although the mentioned prior work is similar to the proposed method, it's main focus is on transferring learned tasks on expert observations in a source domain to a novel target domain.\n\nQ: The empirical results are unconvincing - it seems like in all problems they use there is a straightforward mapping from state feature differences to actions, as pointed out in an anonymous comment.\nA : We agree that our experiments are simple in nature, with easy to learn dynamics model, which is a drawback of the current evaluation scheme. However, the main contribution of this paper is to present the novel idea that combination of proposed model-based and model-free training has the advantage of accelerated training for imitation learning from observation alone, which can be illustrated by these simple setups. In the future, we plan to build upon the current idea on complex dynamics model setup as well for future work.", "Model-Based Imitation Learning from State Trajectories\n\nSIGNIFICANCE AND ORIGINALITY:\n\nThe authors propose a model-based method for accelerating the learning of a policy\nby observing only the state transitions of an expert trace.\nThis is an important problem in many fields such as robotics where\nfinding a feasible policy is hard using pure RL methods.\n\nThe authors propose a unique two step method to find a high-quality model-based policy.\n\nFirst: To create the environment model for the model-based learner, \n they need a source of state transitions with actions ( St, At,xa St+1 ).\nTo generate these samples, they first employ a model-free algorithm.\nThe model-free algorithm is trained to try to duplicate the expert state at each trajectory.\nIn continuous domains, the state is not unique … so they build a soft next state predictor\nthat gives a probability over next states favoring those demonstrated by the expert.\nSince the transitions were generated by the agent acting in the environment,\nthese transitions have both states and actions ( St, At, St+1 ).\nThese are added to a pool.\n\nThe authors argue that the policy found by this model-free learner is\nnot highly accurate or guaranteed to converge, but presumably is good at\ngenerating transitions relevant to the expert’s policy.\n(Perhaps slowly reducing the \\sigma in the reward would improve accuracy?)\nI guess if expert trace data is sparse, the model-free learner can generate a lot \nof transitions which enable it to create accurate dynamics models which in turn\nallow it to extract more information out of sparse expert traces?\n\nSecond: They then train a model based agent using the collected transitions ( St, At, St+1 ).\nThey formulate the problem as a maximum likelihood problem with two terms: \nan action dynamics model which is learned from local exploration using the learner’s own actions and outcomes\nand expert policy model in terms of the actions learned above \nthat maximizes the probability of the observed expert’s trajectory.\nThis is a nice clean formulation that integrates the two processes.\nI thought the comparison to an encoder - decoder network was interesting.\n\nThe authors do a good job of positioning the work in the context of recent work in IML.\n\nIt looks like the authors extract position information from flappy bird frames, \nso the algorithm is only using images for obstacle reasoning?\n\n\nQUALITY\n\nThe propose model is described fairly completely and evaluated on \na “reaching\" problem and the \"flappy bird” game domain.\nThe evaluation framework is described in enough detail to 
replicate the results.\n\nInterestingly, the assisted method starts off much higher in the “reacher” task.\nPresumably this task is easy to observe the correct actions.\n\nThe flappy bird test shows off the difference between unassisted learning (DQN),\nmodel free learning with the heuristic reward (DQN+reward prediction) \nand model based learning. \n\nInterestingly, DQN + heuristic reward approaches expert performance\nwhile behavioral cloning never achieves expert performance level even though it has actions.\n\nWhy does the model-based method only run to 600 steps and stopped before convergence??\nDoes it not converge to expert level?? If so, this would be useful to know.\n\nThere are minor grammatical mistakes that can be corrected.\n\nAfter equation 5, the authors suggest categorical loss for discrete problems, \nbut cross-entropy loss might work better. Maybe this is what they meant.\n\n\nCLARITY\n\nThe overall approach and algorithms are described fairly clearly. Some minor typos here and there.\n\nAlgorithm 1 does not make clear the relationship between the model learned in step 2 and the algorithms in steps 4 to 6.\n\nI would reverse the order of a few things to align with a right to left ordering principle. \nIn Figure 1, put the model free transition generator on the left and the model-based sample consumer on the right.\nIn Figure 3, put the “reacher” test on the left and the “flappy bird” on the right.\n\n\nPROS AND CONS\n\nInteresting idea for learning quickly from small numbers of samples of expert state trajectories. \n\nNot clear that method converges on all problems. \n\nNot clear that the method is able to extract the state from video — authors had to extract position manually\n(this point is more about their deep architecture than the imitation framework they describe -\nthough perhaps a key argument for the authors is the ability to work with small numbers of \nexpert samples and still be able to train deep methods ) ??\n\n\nPOST REVIEW SUBMISSION:\n\nThe authors make a number of clarifying comments to improve the text and add the reference suggested by another reviewer. ", "The problem addressed here is imitation learning when no action information is available, which is an important problem in robotics for instance. The main idea of the proposed method is to produce a policy that matches the states observed in the expert trajectories, and this is achieved via a somewhat complex mix of model-free and model-based learning.\n\nMy main issues with the paper are:\n- It does not cite or discuss a very important piece of related work: \"Imitation from Observation: Learning to Imitate Behaviors from Raw Video via Context Translation\" (Liu et al., 2017)\n- The empirical results are unconvincing - it seems like in all problems they use there is a straightforward mapping from state feature differences to actions, as pointed out in an anonymous comment.\n\nAdditionally, it would have been nice to show empirically how helpful the model-based component of their approach is.\n", "The paper presents a model-based imitation learning framework which learns the state transition distribution of the expert. A model-based policy is learned that should matches the expert transition dynamics. The approach can be used for imitation learning when the actions of the expert are not observed, but only the state transitions (which is an important special case). 
\n\nPros:\n- The paper concentrates on an interesting special case of imitation learning\n\nCons:\n- The paper is written very confusingly and hard to understand. The algorithm needs to be better motivated and explained and the paper needs proof reading.\n- The algorithm is based on many heuristics that are not well motivated. \n- The algorithm is only optimizing the one step error function for imitation learning but not the long term behavior. It heavily relies on the learned transition dynamics of the expert p(s_t+1|s_t). This transition model will be wrong if we go away from the expert's trajectories. Hence, I do not see why we should use p(s_t+1|s_t) to define the reward function. It does not prevent the single step \nerrors of the policy to accumulate (which is the main goal of inverse reinforcement learning)\n- The results are not convincing\n- Other algorithms (such as GAIL) could be used in the same setup (no action observations). Comparisons to other imitation learning approaches are needed.\n\nIn summary, this is a poorly written paper that seems to rely on a lot of heuristics that are not well motivated. Also the results are not convincing. Clear reject.\n\n\nMore detailed comments\n- It is unclear why a model-based and model-free policy need to be used. Is the model-based policy used at any time in the algorithm? If it is just used as final result, why train it iteratively? Why can we not just also use the model-based policy for data collection?\n- It is unclear why the heuristic reward function makes sense. First of all, the defined reward is stochastic as \\hat{s}_t+1 is a sample from the next state from the expert's transition model. Why do not we use the mean of the transition model here, then it would not be stochastic any more. Second, a much simpler reward could be used that essentially does the same thing. Instead of requiring a learned dynamics model f_E for predicting the next state, we can just use the experienced next state s_t+1. Note that the reward function for time step t can depend on s_t+1 in an MDP. \n- The objective that is optimized (Eq. 4) is not well defined. A function is not an objective function if we can only optimize part of it for theta while keeping theta fixed for the other part. It is unclear which objective the real algorithm optimizes\n- There are quite a few confusions in terms of notation. Sometimes, a stochastic transition model p(s_t+1|s_t, a_t) is used and sometimes a deterministic model f_E(s,a). It is unclear how they relate. \n- Many other imitation learning techniques could be used in this setup including max-entropy inverse RL [1], IRL by distribution matching [2] and the approach given in [3] and GAIL. A comparison to at least a subset of these methods is needed\n\n[1] B. Ziebart et al, Maximum Entropy Inverse Reinforcement Learning, AAAI 2008\n[2] Arenz, O.; Abdulsamad, H.; Neumann, G. (2016). Optimal Control and Inverse Optimal Control by Distribution Matching, Proceedings of the International Conference on Intelligent Robots and Systems (IROS)\n[3] P Englert, A Paraschos, J Peters, MP Deisenroth, Model-based Imitation Learning by Probabilistic Trajectory Matching, IEEE International Conference on Robotics and Automation", "I think there is an important baseline missing, namely:\n- Train a model to predict the action a_t using (phi(s_t), phi(s_{t+1})) as input, using samples obtained from the model-free policy. \n- Use this model to predict actions performed in the expert trajectories. 
\n- Train a standard imitation learner using the expert state trajectories together with the predicted actions as targets.\n\nThis should perform similarly to behavior cloning using true actions if the learned action predictor is accurate, which I think should be the case due to the way states are represented. For example, if I understand correctly in the 2D Obstacle Avoider task the action is simply a_t = phi(s_{t+1}) - phi(s_t). In Flappy Bird, the action is 1 if phi(s_{t+1})-phi(s_t) > 0 and -1 otherwise. It should be possible to learn both of these action predictors using very few samples. \n\nThis baseline would have more trouble if the inputs were videos and the hardcoded phi(s_t) was not provided, because the classifier would receive a high-dimensional input and would need more samples with known actions to fit its parameters. Did you try your method on Flappy Bird using only video, without providing the phi(s_t) as input?\n" ]
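To make the baseline suggested in the comment above concrete, here is a minimal sketch. It is only an illustration, not code from the paper: the helper and argument names are hypothetical, `phi` stands for the hand-coded state featurizer mentioned in the comment (applied batch-wise), and scikit-learn classifiers stand in for whatever function approximator would actually be used.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def inverse_model_baseline(agent_states, agent_next_states, agent_actions,
                           expert_states, expert_next_states, phi):
    # 1) Fit an inverse model a_t ~ f(phi(s_t), phi(s_{t+1})) on the agent's own
    #    transitions, where the executed actions are known.
    x_agent = np.hstack([phi(agent_states), phi(agent_next_states)])
    inverse_model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)
    inverse_model.fit(x_agent, agent_actions)

    # 2) Label the action-free expert trajectories with predicted actions.
    x_expert = np.hstack([phi(expert_states), phi(expert_next_states)])
    pseudo_actions = inverse_model.predict(x_expert)

    # 3) Behavior-clone a policy pi(a | s) on the pseudo-labelled expert data.
    policy = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500)
    policy.fit(expert_states, pseudo_actions)
    return policy
```

If the learned action predictor is accurate, this reduces the problem to standard behavior cloning, which is the point of the suggested comparison.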
[ -1, -1, -1, 7, 4, 3, -1 ]
[ -1, -1, -1, 3, 4, 5, -1 ]
[ "rJfWQ9OeG", "B1V5vHb1M", "SymLN__gM", "iclr_2018_S1GDXzb0b", "iclr_2018_S1GDXzb0b", "iclr_2018_S1GDXzb0b", "iclr_2018_S1GDXzb0b" ]
iclr_2018_HyDAQl-AW
Time Limits in Reinforcement Learning
In reinforcement learning, it is common to let an agent interact with its environment for a fixed amount of time before resetting the environment and repeating the process in a series of episodes. The task that the agent has to learn can either be to maximize its performance over (i) that fixed amount of time, or (ii) an indefinite period where the time limit is only used during training. In this paper, we investigate theoretically how time limits could effectively be handled in each of the two cases. In the first one, we argue that the terminations due to time limits are in fact part of the environment, and propose to include a notion of the remaining time as part of the agent's input. In the second case, the time limits are not part of the environment and are only used to facilitate learning. We argue that such terminations should not be treated as environmental ones and propose a method, specific to value-based algorithms, that incorporates this insight by continuing to bootstrap at the end of each partial episode. To illustrate the significance of our proposals, we perform several experiments on a range of environments from simple few-state transition graphs to complex control tasks, including novel and standard benchmark domains. Our results show that the proposed methods improve the performance and stability of existing reinforcement learning algorithms.
rejected-papers
The reviewers agree that this paper suffers from a lack of novelty and does not make sufficient contributions to warrant acceptance.
train
[ "S1nWk0YgG", "ryeKvcAeM", "rkUhtNy-M", "BJA7nJp7z", "H122t1a7f", "Sk0rt1amM", "HyVrdyT7z", "ry-v6YFff", "SJB4mKFGM", "H1XFBYKMf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "public", "public", "official_reviewer" ]
[ "Summary: This paper explores how to handle two practical issues in reinforcement learning. The first is including time remaining in the state, for domains where episodes are cut-off before a terminal state is reached in the usual way. The second idea is to allow bootstrapping at episode boundaries, but cutting off episodes to facilitate exploration. The ideas are illustrated through several well-worked micro-world experiments.\n\nOverall the paper is well written and polished. They slowly worked through a simple set of ideas trying to convey a better understanding to the reader, with a focus on performance of RL in practice.\n\nMy main issue with the paper is that these two topics are actually not new and are well covered by the existing RL formalisms. That is not to say that an empirical exploration of the practical implications is not of value, but that the paper would be much stronger if it was better positioned in the literature that exists.\n\nThe first idea of the paper is to include time-remaining in the state. This is of course always possible in the MDP formalism. If it was not done, as in your examples, the state would not be Markov and thus it would not be an MDP at all. In addition, the technical term for this is finite horizon MDPs (in many cases the horizon is taken to be a constant, H). It is not surprising that algorithms that take this into account do better, as your examples and experiments illustrate. The paper should make this connection to the literature more clear and discuss what is missing in our existing understanding of this case, to motivate your work. See Dynamic Programming and Optimal Control and references too it.\n\nThe second idea is that episodes may terminate due to time out, but we should include the discounted value of the time-out termination state in the return. I could not tell from the text but I assume, the next transition to the start state is fully discounted to zero, otherwise the value function would link the values of S_T and the next state, which I assume you do not want. The impact of this choice is S_T is no longer a termination state, and there is a direct fully discounted transition to the start states. This is in my view is how implementations of episodic tasks with a timeout should be done and is implemented this way is classic RL frameworks (e.g., RL glue). If we treat the value of S_T as zero or consider gamma on the transition into the time-out state as zero, then in cost to goal problems the agent will learn that these states are good and will seek them out leading to suboptimal behavior. The literature might not be totally clear about this, but it is very well discussed in a recent ICML paper: White 2017 [1]\n\nAnother way to pose and think about this problem is using the off-policy learning setting---perhaps best described in the Horde paper [2]. In this setting the behavior policy can have terminations and episodes in the classic sense (perhaps due to time outs). However, the agent's continuation function (gamma : S -> [0,1]) can specify weightings on states representing complex terminations (or not), completely independent of the behavior policy or actual state transition dynamics of the underlying MDP. To clearly establish your contributions, the authors must do a better job of relating their work to [1] and [2].\n\n[1] White. Unifying task specification in reinforcement learning. Martha White. International Conference on Machine Learning (ICML), 2017.\n\n[2] Sutton, R. S., Modayil, J., Delp, M., Degris, T., Pilarski, P. 
M., White, A., & Precup, D. (2011). Horde: A scalable real-time architecture for learning knowledge from unsupervised sensorimotor interaction. In The 10th International Conference on Autonomous Agents and Multiagent Systems: 2, 761--768. \n\nSmall comments that did not impact paper scoring:\n1) eq 1 we usually don't use the superscript \\gamma\n2) eq2, usually we talk about truncated n-step returns include the value of the last state to correct the return. You should mention this\n3) Last paragraph of page 2 should not be in the intro\n4) in section 2.2 why is the behavior policy random instead of epsilon greedy?\n5) It would be useful to discuss the average reward setting and how it relates to your work.\n6) Fig 5. What does good performance look like in this domain. I have no reference point to understand these graphs\n7) page 9, second par outlines alternative approaches but they are not presented as such. Confusing ", "The majority of the paper is focused on the observation that (1) making policies that condition on the time step is important in finite horizon problems, and a much smaller component on that (2) if episodes are terminated early during learning (say to restart and promote exploration) that the values should be bootstrapped to reflect that there will be additional rewards received in the true infinite-horizon setting.\n\n1 is true and is well known. This is typically described as finite horizon MDP planning and learning and the optimal policy is well known to be nonstationary and depend on the number of remaining time steps. There are a number of papers focusing on this for both planning and learning though these are not cited in the current draft. \n\nI don’t immediately know of work that suggests bootstrapping if an episode is terminated early artificially during training but it seems a very reasonable and straightforward thing to do. \n\n", "This paper considers the problem of Reinforcement Learning in time-limited domains. It begins by observing that in time-limited domains, an agent unaware of the remaining time can experience state-aliasing. To combat this problem, the authors suggest modifying the state representation of the policy to include an indicator of the amount of remaining time. The time-aware agent shows improved performance in a time-limited gridworld and several control domains. Next, the authors consider the problem of learning a time-unlimited policy from time-limited episodes. They show that by bootstrapping from the final state of the time-limited domain, they are able to learn better policies for the time-unlimited case.\n\nPros:\nThe paper is well-written and clear, if a bit verbose. \nThe paper has extensive experiments in a variety of domains.\n\nCons:\nIn my opinion, the substance of the contribution is not enough to warrant a full paper and the problem of time-limited learning is not well motivated: \n\n1) It's not clear how frequently RL agents will encounter time-limited domains of interest. Currently most domains are terminated by failure/success conditions rather than time. The author's choice of tasks seem somewhat artificial in that they impose time limits on otherwise unlimited domains in order to demonstrate experimental improvement. Is there good reason to think RL agents will need to contend with time-limited domains in the future? \n\n2) The inclusion of remaining-time as a part of the agent's observations and resulting improvement in time-limited domains is somewhat obvious. 
It's well accepted that in any partially observed domain, inclusion of the latent variable(s) as a part of the agent's observation will result in a fully observed domain, less state-aliasing, more accurate value estimates, and better performance. The author's inclusion of the latent time variable as a part of the agent's observations reconfirms this well-known fact, but doesn't tell us anything new.\n\n3) I have the same questions about Partial Episode bootstrapping: Is there a task in which we find our RL agents learning in time-limited settings and then evaluated in unlimited ones? The experiments in this direction again feel somewhat contrived by imposing time limits and then removing them. The proposed solution of bootstrapping from the value of the terminal state v(S_T) clearly works, and I suspect that any RL-practitioner faced with training time-limited policies that are evaluated in time-unlimited settings might come up with the same solution. While the experiments are well done, I don't think the substance of the algorithmic improvement is enough.\n\nI think this paper would improve by demonstrating how time-aware policies can help in domains of interest (which are usually not time-limited). I could imagine a line of experiments that investigate the idea of selectively stopping episodes when the agent is no longer experiencing useful transitions, and then showing that the partial episode bootstrapping can save on overall sample complexity compared to an agent that must experience the entirety of every episode. ", "We are grateful for the valuable feedback. Below is our response:\n\nWe agree with the fact that the paper should be better positioned in the existing literature. Considering the first part of the paper on time-awareness, literature in dynamic programming and optimal control [3] generally suggests either a model-based backward induction method or to learn value functions for each time step. We feel that considering time as part of the MDP states and including it in the agent observations in the RL setting has been largely overlooked. This oversight has affected the design of the current benchmarks and we show that accounting for it can significantly improve the performance. We have also shown the specific effects time-based state-aliasing has on the learned value functions and policies of the agents and how discounting helps to somewhat mitigate this issue.\n\nThe provided reference [1] introduces a way to consider episodic tasks as a continuing one with a variable discount factor to account for terminations in episodic tasks. The paper then uses the framework to introduce soft-terminations, where the terminal state of an episode retains part of the value of the starting state of the next episode by using a non-zero discount factor at the terminal state. Such an approach is different to ours in several ways:\n- Their framework is mainly valuable when episodes can be identified inside of a continuing task e.g. in the Taxi task, pick up and drop off passengers, similar to pushing cubes to targets in our InfiniteCubePusher task, however we show that our proposed partial-episode bootstrapping also makes sense in tasks that do not have an underlying episodic structure such as for Hopper.\n- Their framework do not permit environmental reset for soft-terminations, as the state after soft-termination may not be related. 
Our approach does not update the value of the last state of partial episodes, thus the transition between this one and the starting state of the new episode does not exist in the view of the agent.\n- Soft-terminations use a different discount factor to the rest of the updates. We bootstrap with the same discount factor, showing in the Two-Goal Gridworld task that a correct indefinite-time value function can be learned from short episodes, just as it would be from indefinite time episodes.\nAn advantage of the method proposed in the reference [1] and illustrated with the Taxi domain is that it encourages termination in a state that is good to start the next episode. If we take our InfiniteCubePusher task, this means, e.g. to not lose control of the cube once moved to the target by pushing it too hard and waste time for the next episode. With our approach, we too achieve this objective, as can be seen in a video linked in the paper (https://www.youtube.com/watch?v=ckgVLgFi-sc).\n\nAs proposed in the Discussion section, partial-episode bootstrapping could be extended to some more sophisticated early-termination reasons, e.g. when an already well-known state is encountered. Furthermore, the proposed approach is compatible with on- or off-policy algorithms as the termination is not decided by the agent.\n\nBelow is our response to some additional comments:\n\n4) Since we are using an off-policy method (Q-learning), having a fully exploratory behavior policy does not prevent the agent from learning the optimal policy.\n\n6) The video linked in the paper (https://www.youtube.com/watch?v=ckgVLgFi-sc) shows a learned good policy that manages to push the cube to several targets. For further information on the average number of targets reached during evaluation, you may refer to Section B.2. Since the rewards are 1 for reaching a target and 0 otherwise, thus the sums of rewards correspond to the average number of targets reached (approximately 17 during 1000 time steps for the proposed agent). \n\n[3] Bertsekas, Dimitri P., Dynamic programming and optimal control. Belmont, MA: Athena scientific, 2017.", "We are grateful for the valuable feedback. Below is our response:\n\nWe agree with the fact that the paper should be better positioned in the existing literature. Considering the first part of the paper on time-awareness, literature in dynamic programming and optimal control [1] generally suggests either a model-based backward induction method or to learn value functions for each time step. We feel that considering time as part of the MDP states and including it in the agent observations in the RL setting has been largely overlooked. This oversight has affected the design of the current benchmarks and we show that accounting for it can significantly improve the performance. We have also shown the specific effects time-based state-aliasing has on the learned value functions and policies of the agents and how discounting helps to somewhat mitigate this issue.\n\n[1] Bertsekas, Dimitri P., Dynamic programming and optimal control. Belmont, MA: Athena scientific, 2017.", "We are grateful for the valuable feedback. Below is our response:\n\n1) We disagree with the assertion that most domains are terminated by failure/success with no time limit. Many tasks can consist of maximizing a score within a time limit; e.g. robotic vacuum cleaning, taking an exam, or manufacturing where appropriate schedule is of essence. 
In OpenAI Gym, currently featuring the most popular set of benchmark environments, all the tasks are time-limited. In fact, many of the environments never terminate before the time limit, such as Reacher, Pusher, Striker, Thrower, Swimmer, HumanoidStandup, and Pendulum. Furthermore, even in environments that can terminate by failure, such as InvertedPendulum or Hopper, we show that time-awareness is valuable.\n\n2) Indeed, time-awareness seems somewhat obvious and solves the state-aliasing due to time, but since it has been largely overlooked in the RL literature, we believe it was necessary to produce a comprehensive account. Moreover, the role of state-aliasing merely as a consequence of not observing a notion of the remaining time has not been explicitly discussed previously. We elaborate on this scenario and the specific issues it causes. We believe that the RL community could indeed benefit from such clarification.\n\n3) Early termination of episodes can be helpful, or even necessary, to an RL algorithm; specifically, if an agent gets stuck in some parts of the state space because of traps in the environment or poor exploration. We have encountered this issue for the InfiniteCubePusher task with PPO. The agent would very commonly learn to permanently push against the wall if the environment was never reset. However, if early terminations are used, we show that it is possible to learn a good long-term policy by continuing to bootstrap at early termination. This approach, however intuitive, is not discussed explicitly in any literature to the best of our knowledge. In fact, as we show in the paper, available implementations of the state-of-the-art algorithms do not take this into consideration. Therefore, we believe such an account, bringing clarity to the field, is valuable to the community.\n\n4) Indeed, early termination based on known states instead of time limits is very interesting and is proposed in the discussion section. However, to make a coherent paper solving a specific and very recurrent problem in the RL literature, we decided to focus on time limits.", "Thank you kindly for your work reproducing the first part of the paper. We believe this is a very important initiative for research in the field, and we indeed did our best to make the paper reproducible.\n\nAs you mention in the report, you used Roboschool while our results were obtained using the more popular Gym MuJoCo environments. The differences between the environments could certainly explain the contrast in our results.\n\nHowever, in order to illustrate that our figures can be easily reproduced, we created a very simple notebook that shows similar results to the ones you tried to replicate. For simplicity, the notebook uses the average of the last 100 rewards collected during training, which is different from the way we evaluated the performance in the paper (one complete episode every five training cycles) and averaged it (sliding window and 40 seeds). The notebook can be found here:\nhttps://gist.github.com/anonymous/dfd23c90a3bb69b650d76f690d6cd501", "You are right, thank you for the correction.", "As part of the ICLR Reproducibility Challenge, we have made an attempt to reproduce some of the experiments found in this paper. \n\nWe started from the OpenAI baseline implementation of PPO, as did the authors of this paper, and we modified the environment wrappers to allow for the time remaining in an episode to be used as a feature in the observation space.
We tested this time-aware agent on Hopper, InvertedPendulum and Reacher.\n\nThe authors of this paper made strong efforts to make their work reproducible. They started their work building off of a publicly available code base and several of their experiments were on common MuJoCo robot environments. All hyperparameters, the training time, the random seeds and the specifics of smoothing for graphs were reported. \n\nWe did our best to make the same changes to the OpenAI environment wrapper to allow for time remaining to be a feature of the observation space and thus used as a feature in the neural network for prediction.\n\nWe successfully ran experiments with the time-aware agent on the Hopper, Reacher and InvertedPendulum environments. \n\nOn Hopper with gamma = 0.99, our results closely matched those reported in the paper. There was a clear benefit to using time as a feature. However, for undiscounted rewards with gamma set to 1, our result showed the same advantage for time awareness as with gamma = 0.99. The original authors reported a sharp decline in rewards for the time-unaware agent. \n\nOn InvertedPendulum, within the bounds of uncertainty, our results seem to confirm the results reported in this paper for both the time-aware and time-unaware agents with gamma set to both 0.99 and to 1. \n\nOn Reacher, the authors reported an advantage for time awareness at early stages in training for epsilon = 0.99 and a very strong advantage for time awareness at all stages when gamma = 1. Our results did not reflect this. Our experiment indicated roughly equal performance for time-aware and time-unaware agents for both gamma = 0.99 and gamma = 1. \n\nIn conclusion, there are clear signs of advantages of time awareness on some environments. However, our implementation failed to fully realize those advantages on all environments. \n\nOur full report may be read here: https://drive.google.com/file/d/1wiVVj_zSg4t-w8x6LHBYhHkxF5Est2SX/view?usp=sharing\n\nEdited: epsilon -> gamma" ]
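For reference, the wrapper change described in the report above can be sketched roughly as follows. This is an illustrative sketch only, not the authors' or the reproducers' code; it assumes the pre-0.26 Gym step/reset API and a fixed, known episode length `max_steps`, and appends the normalized remaining time to each observation.

```python
import numpy as np
import gym

class TimeAwareWrapper(gym.Wrapper):
    def __init__(self, env, max_steps):
        super().__init__(env)
        self.max_steps = max_steps
        self._t = 0

    def _augment(self, obs):
        # Fraction of the episode's time budget that is still remaining.
        remaining = (self.max_steps - self._t) / float(self.max_steps)
        return np.append(obs, remaining)

    def reset(self, **kwargs):
        self._t = 0
        return self._augment(self.env.reset(**kwargs))

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        self._t += 1
        return self._augment(obs), reward, done, info
```

The observation gains one dimension, so the wrapper's `observation_space` would also need to be extended for learning code that reads it.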
[ 5, 4, 4, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 5, 4, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_HyDAQl-AW", "iclr_2018_HyDAQl-AW", "iclr_2018_HyDAQl-AW", "S1nWk0YgG", "ryeKvcAeM", "rkUhtNy-M", "SJB4mKFGM", "H1XFBYKMf", "iclr_2018_HyDAQl-AW", "SJB4mKFGM" ]
iclr_2018_rkc_hGb0Z
A dynamic game approach to training robust deep policies
We present a method for evaluating the sensitivity of deep reinforcement learning (RL) policies. We also formulate a zero-sum dynamic game for designing robust deep reinforcement learning policies. Our approach mitigates the brittleness of policies when agents are trained in a simulated environment and are later exposed to the real world where it is hazardous to employ RL policies. This framework for training deep RL policies involve a zero-sum dynamic game against an adversarial agent, where the goal is to drive the system dynamics to a saddle region. Using a variant of the guided policy search algorithm, our agent learns to adopt robust policies that require less samples for learning the dynamics and performs better than the GPS algorithm. Without loss of generality, we demonstrate that deep RL policies trained in this fashion will be maximally robust to a ``worst" possible adversarial disturbances.
rejected-papers
The reviewers are unanimous that the paper is not sufficiently clear and could be improved with better empirical results.
test
[ "rJAdpODef", "ryha-G5lM", "HJ_WDu5xM", "SyVqfP7Wz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author" ]
[ "The authors propose to incorporate elements of robust control into guided policy search, in order to devise a method that is resilient to perturbations and (presumably) model mismatch.\n\nThe idea behind the method and the discussion in the introduction and related work is interesting and worthwhile, and I think that combining elements from robust control and reinforcement learning is a very promising direction to explore. However, in its present state, the paper is very hard to evaluate, perhaps because the submission was a bit rushed. It may be that the authors can clarify some of these issues in the response period.\n\nFirst, the authors repeatedly state that perturbations are applied to the policy parameters. This seems very strange to me, as typically robust control considers perturbations to the state or control. And reading the actual method, I can't actually figure out how perturbations are applied to the parameters -- as near as I can tell, the perturbations are indeed applied to the controls. So which is it?\n\nThere is quite a lot of math in the derivation, and it's unclear which parts relate to the standard guided policy search algorithm, and which parts are new. After reading the technical sections several times, my best guess is that the method corresponds to using an adversarial trajectory optimization setup to generate supervision for training a policy. So only the trajectory optimization phase is actually different. Is that true? Or are there other modifications? Some sort of summary of the overall method would have been appreciated, or else a clearer separation of new and old components.\n\nThe evaluation also leaves a lot to be desired. What kind of perturbations are actually being considered? Are they all adversarial perturbations? Do the authors actually test model mismatch or other more natural conditions where robustness would be beneficial? In the end, I was unable to really interpret what the experiments are trying to get across, which makes it hard for me to tell if the method actually works or improves on anything.\n\nIn its present state, the paper is very hard to parse, and the evaluation appears too rushed for me to be able to deduce how well the method works. Hopefully the authors can clarify some of these issues in the response period.", "The paper presents a method for evaluating the sensitivity and robustness of deep RL policies, and proposes a dynamic game approach for learning robust policies.\n\nThe paper oversells the approach in many ways. The authors claim that \"experiments confirm that state-of-the-art reinforcement learning algorithms fail in the presence of additive disturbances, making them brittle when used in situations that call for robustness\". However, their methods and experiments are only applied to Guided Policy Search (GPS), which seems like a specialized RL algorithm. Conclusions drawn from empirically running GPS on a problem cannot be generalized to all \"state-of-the-art RL algorithms\".\n\nIn Fig 3, the authors conclude that \"our algorithm uses lesser number for the GMMs and requires fewer samples to generalize to the real-world\". I'm not sure how this can be concluded from Fig 3 [LEFT]. The two line graphs for different values of gamma almost overlay each other, and the cost seems to go up and down, even with number of samples on a log scale. 
If this shows the variance in the procedure, then the authors should run enough repeats of the experiment to smooth out the variance and show the true signal (with error bars if possible). All related conclusions with regard to the dynamic game achieving higher sample efficiency for GMM dynamics fitting need to be backed up with better experimental data (or perhaps clearer presentation, if such data already exists).\n\nFigures 2 and 3 talk about optimal adversarial costs. The precise mathematical definition of this term should be clarified somewhere, since there are several cost functions described in the paper, and it's unclear which terms are actually being plotted here.\n\nThe structure of the global policies used in the experiments should be mentioned somewhere.\n\nNote about anonymity: Citation [21] breaks anonymity, since it's referred to in the text as \"our abstract\". The link to the YouTube video breaks author anonymity. Further, the link to a shared dropbox folder breaks reviewer anonymity, hence I have not watched those videos.", "There are two anonymity violations in the paper. The first is in the sentence \"The 7-DoF robot result presented shortly previously appeared in our abstract that introduced robust GPS [21]\". The second is in the first linked video, which links to a non-anonymized YouTube video. The second linked video, a dropbox link, does not have the correct permissions set, and thus cannot be viewed. Also, the citation style does not seem to follow the ICLR style guidelines.\n\nDisregarding the anonymity and style violations, I will review the paper. I do not have background in H_inf control theory, but I will review the paper to the best of my ability.\n\nThis paper proposes a guided policy search method for training deep neural network policies that are robust to worst-case additive disturbances. To my knowledge, the approach presented in the paper is novel, though some relevant references are missing. The experimental results demonstrate the method on two simulated experimental domains, demonstrating robustness to adversarial perturbations. The paper is generally well-written, but has some bugs and typos. The paper is substantially longer than the strongly suggested page limit. There are parts of the paper that should be moved to an appendix to accommodate the page limit.\n\nRegarding the experiments:\nMy main concerns are with regard to the completeness of the experiments. First, the experimental results report performance in terms of cost/reward, which is extremely difficult to interpret. It would be helpful to also provide success rate for all experiments, where the authors can define success as, e.g. getting the peg in the hole or being within a certain threshold of the goal.\nSecond, the paper should provide a comparison of policy robustness between the proposed approach and (1) a policy trained with standard GPS, (2) a policy trained with GPS and random perturbations, and ideally, (3) prior approaches to robustness, e.g. Pinto et al., Mandlekar et al. [1], or Rajeswaran et al. [2].\n\nRegarding related work and clarity:\nThere are a few papers that consider the problem of building deep neural network policies that are robust [1,2] that should be discussed and ideally compared to.\nRecent deep reinforcement learning work has studied the problem of robustness to adversarial perturbations in the observation space, e.g. [3,4,5,6].
As such, it would be helpful to clarify in the introduction that this paper is considering additive perturbations in the action space.\nThe paper switches between using rewards and costs. It would be helpful to pick one term and stick with it for the entire paper, rather than switching. Further, it seems like there are errors due to the switching, e.g. on page 3, \ell is defined as the expected reward and in equation 3, it seems like the protagonist policy is trying to minimize \ell, contradicting the earlier definition.\nLastly, section 5.1 is currently rather difficult to follow. It would help to see more top-down direction in the derivation and for more details in sections 5.1, 5.2, and 5.3 to be moved to an appendix.\n\nRegarding correctness:\nThere seem to be some issues in the math and/or notation:\nThe most major issue is in Algorithm 2, which is probably the most important part of the paper to be correct, given that it provides a complete picture of the algorithm. I believe that steps 3 and 4 are incorrect and/or incomplete. Step 4 should be referring to the local policy p rather than the global policy pi (assuming the notation in sections 5.1 and 5.3). Also, the local policy p(v|x) appears nowhere in the algorithm, and probably should appear in step 4. In step 5, how is this regression different from step 7?\nIn equation 1 and elsewhere in section 4, there is a mix of notation, using u and z interchangeably (e.g. in equation 1 and the following equation, I believe the u should be switched to z or the z should be switched to u).\n\nMinor feedback:\n> \"requires fewer samples to generalize to the real world\"\nNone of these experiments are in the real world, so the term \"real world\" should not be used.\n> \"algoruithm\" -> \"algorithm\"\n> \"when unexpected such as when there exists\" - bad grammar / typo\n> \"ertwhile sensy-sensitivity parameter\" - typo\n- reference 1 is missing authors\n\nIn summary, I think the paper would be significantly improved with further experimental comparisons, further discussion of related work, and clarifications/corrections on the notation, equations, and algorithms. In its current form, I don't think the paper is fit for publication. My rating is between a 4 and a 5.\n\n[1] http://vision.stanford.edu/pdf/mandlekar2017iros.pdf\n[2] https://arxiv.org/abs/1610.01283\n[3] https://arxiv.org/abs/1701.04143\n[4] https://arxiv.org/abs/1702.02284\n[5] https://www.ijcai.org/proceedings/2017/0525.pdf\n[6] https://arxiv.org/abs/1705.06452", "Hi, \n\nThank you for your time and evaluation.\n\nI now respond to your queries in the order that they were presented.\n\n>1. First, the authors repeatedly state that perturbations are applied to the policy parameters. This seems very strange to me, as typically robust control considers perturbations to the state or control. And reading the actual method, I can't actually figure out how perturbations are applied to the parameters -- as near as I can tell, the perturbations are indeed applied to the controls. So which is it?\n\nSorry for the confusion. The perturbations are applied to the local controls, p(u|x). Since the policy is trained on all the possible local controllers, the global neural network policies that are learnt through supervised learning are perturbed. We will amend the text to express this more clearly.\n\n> 2.1 There is quite a lot of math in the derivation, and it's unclear which parts relate to the standard guided policy search algorithm, and which parts are new.
\n\n\nThe math in the derivations (mostly appendices I - II) relates to the trajectory optimization phase of the algorithm. The recursions for the value function and Q functions are now slightly more complicated due to the presence of the adversarial perturbation term. Most of this math is buried in the appendix. \n\n> 2.2 After reading the technical sections several times, my best guess is that the method corresponds to using an adversarial trajectory optimization setup to generate supervision for training a policy. So only the trajectory optimization phase is actually different. Is that true? Or are there other modifications? Some sort of summary of the overall method would have been appreciated, or else a clearer separation of new and old components.\n\nYes, you are absolutely correct. We reformulate the trajectory optimization phase of the GPS algorithm as a two-player Markov decision process (with adversarial components to the Q-function expansion in equation 7). We then cast the optimization as an alternating best-response supervised learning update of the global control and adversary policies to obtain convergence to a saddle-point equilibrium. The major difference is the trajectory optimization update, but if you look at Algorithm II, the C-Step involves a min-max over the augmented cost function \ell(x_t, u_t, v_t) and not just the cost function min_{pi \in \Pi} max_{\mu \in M} \ell_t(x_t, u_t), so that we end up with a joint stage cost function \ell_t(x_t, u_t, v_t) = c(x_t, u_t) - \gamma \alpha(v_t), where \alpha is a 2-norm in our implementation.\n\n\n> 3. The evaluation also leaves a lot to be desired. What kind of perturbations are actually being considered? Are they all adversarial perturbations? Do the authors actually test model mismatch or other more natural conditions where robustness would be beneficial? In the end, I was unable to really interpret what the experiments are trying to get across, which makes it hard for me to tell if the method actually works or improves on anything.\n\nThanks for asking. For our evaluations, we considered additive adversarial perturbation terms introduced by the gamma disturbance parameter on the joint stage cost (eq. 4). The results for testing various adversarial perturbations are in figure 2 for the peg insertion task.\n \t* as γ --> 1, the optimal adversary policy does nothing\n \t* as γ decreases, adversary actions have a larger effect on the closed-loop system\n \t* the smallest γ where the adversary's policy causes unacceptable performance provides a measure of robustness of the control policy π\n\t* any existing (deep) RL method can be used to train the adversary policy
[ 5, 3, 5, -1 ]
[ 4, 3, 2, -1 ]
[ "iclr_2018_rkc_hGb0Z", "iclr_2018_rkc_hGb0Z", "iclr_2018_rkc_hGb0Z", "rJAdpODef" ]
iclr_2018_rkvDssyRb
Multi-Advisor Reinforcement Learning
We consider tackling a single-agent RL problem by distributing it to n learners. These learners, called advisors, endeavour to solve the problem from a different focus. Their advice, taking the form of action values, is then communicated to an aggregator, which is in control of the system. We show that the local planning method for the advisors is critical and that none of the ones found in the literature is flawless: the \textit{egocentric} planning overestimates values of states where the other advisors disagree, and the \textit{agnostic} planning is inefficient around danger zones. We introduce a novel approach called \textit{empathic} and discuss its theoretical aspects. We empirically examine and validate our theoretical findings on a fruit collection task.
rejected-papers
The reviewers agree this is an interesting paper with interesting ideas, but is not ready for publication in its current shape. In particular, there is a need for strong empirical results.
train
[ "rJbjUB3JM", "B1m1clFlM", "H18ZJWAgG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper presents Multi-Advisor RL (MAd-RL), a formalized view of many forms of performing RL by training multiple learners, then aggregating their results into a single decision-making agent. Previous work and citations are plentiful and complete, and the field of study is a promising approach to RL. Through MAd-RL, the authors analyze the effects of egocentric, agnostic, and empathic planning at the sub-learner level on the resulting applied aggregated policy. After this theoretical discussion, the different types of sub-learners are used on a Pac-Man problem.\n\nI believe an interesting paper lies within this, and were this a journal, would recommend edits and resubmission. However, in its current state, the paper is too disorganized and unclear to merit publication. It took quite a bit of time for me to understand what the authors wanted me to focus on - the paper needs a clearer statement early summarizing its intended contributions. In addition, more care to language usage is needed - for example, \"an attractor\" refers to an MDP in Figure 3, a state in Theorem 2, and a set in the Theorem 2 discussion. Additionally, the theoretical portion focuses on the effects of the three different sub-learner types, but the experiments are \"intend[ed] to show that the value function is easier to learn with the MAd-RL architecture,\" which is an entirely different goal.\n\nI recommend the authors decide what to focus on, rethink how paper space is allocated, and take care to more clearly drive home their intended point.", "This paper presents MAd-RL, a method for decomposition of a single-agent RL problem into a simple sub-problems, and aggregating them back together. Specifically, the authors propose a novel local planner - emphatic, and analyze the newly proposed local planner along of two existing ones - egocentric and agnostic. The MAd-RL, and theoretical analysis, is evaluated on the Pac-Boy task, and compared to DQN and Q-learning with function approximation.\n\nPros:\n1. The paper is well written, and well-motivated.\n2. The authors did an extraordinary job in building the intuition for the theoretical work, and giving appropriate examples where needed.\n3. The theoretical analysis of the paper is extremely interesting. The observation that a linearly weighted reward, implies linearly weighted Q function, analysis of different policies, and local minima that result is the strongest and the most interesting points of this paper.\n\nCons:\n1. The paper is too long. 14 pages total - 4 extra pages (in appendix) over the 8 page limit, and 1 extra page of references. That is 50% overrun in the context, and 100% overrun in the references. The most interesting parts and the most of the contributions are in the Appendix, which makes it hard to assess the contributions of the paper. There are two options: \n 1.1 If the paper is to be considered as a whole, the excessive overrun gives this paper unfair advantage over other ICLR papers. The flavor and scope and quality of the problems that can be tackled with 50% more space is substantially different from what can be addressed within the set limit. If the extra space is necessary, perhaps this paper is better suited for another publication? \n 1.2 If the paper is assessed only based on the main part without Appendix, then the only novelty is emphatic planner, and the theoretical claims with no proofs. The results are interesting, but are lacking implementation details. Overall, a substandard paper.\n2. 
Experiments are disjoint from the method’s section. For example:\n 2.1 Section 5.1 is completely unrelated with the material presented in Section 4.\n 2.2 The noise evaluation in Section 5.3 is nice, but not related with the Section 4. This is problematic because, it is not clear if the focus of the paper is on evaluating MAd-RL and performance on the Ms.PacMan task, or experimentally demonstrating claims in Section 4.\n\nRecommendations:\n1. Shorten the paper to be within (or close to the recommended length) including Appendix.\n2. Focus paper on the analysis of the advisors, and Section 5. on demonstrating the claims.\n3. Be more explicit about the contributions.\n4. How does the negative reward influence the behavior the agent? The agent receives negative reward when near ghosts.\n5. Move the short (or all) proofs from Appendix into the main text.\n6. Move implementation details of the experiments (in particular the short ones) into the main text.\n7. Use the standard terminology (greedy and random policies vs. egoistic and agnostic) where possible. The new terms for well-established make the paper needlessly more complex. \n8. Focus the literature review on the most relevant work, and contrast the proposed work with existing peer reviewed methods.\n9. Revise the literature to emphasize more recent peer reviewed references. Only three references are recent (less than 5 years), peer reviewed references, while there are 12 historic references. Try to reduce dependencies on non-peer reviewed references (~10 of them).\n10. Make a pass through the paper, and decouple it from the van Seijen et al., 2017a\n11. Minor: Some claims need references:\n 11.1 Page 5: “egocentric sub-optimality does not come from the actions that are equally good, nor from the determinism of the policy, since adding randomness…” - Wouldn’t adding epsilon-greediness get the agent unstuck?\n 11.2 Page 1. “It is shown on the navigation task ….” - This seems to be shown later in the results, but in the intro it is not clear if some other work, or this one shows it. \n12. Minor:\n 12.1 Mix genders when talking about people. Don’t assume all people that make “complex and important problems”, or who are “consulted for advice”, are male.\n 12.2 Typo: Page 5: a_0 sine die\n 12.3 Page 7 - omit results that are not shown\n 12.4 Make Figures larger - it is difficult, if not impossible to see\n 12.5 What is the difference between Pac-Boy and Ms. Pacman task? And why not use Ms. Packman?\n \n", "Summary\n\nThe paper is well-written but does not make deep technical contributions and does not present a comprehensive evaluation or highly insightful empirical results.\n\nAbstract / Intro\n\nI get the entire focus of the paper is some variant of Pac-Man which has received attention in the RL literature for Atari games, but for the most part the impressive advances of previous Atari/RL papers are in the setting that the raw video is provided as input, which is much different than solving the underlying clean mathematically abstracted problem (as a grid world with obstacles) as done here and evident in the videos. Further it is honestly hard for me to be strongly motivated about a paper that focuses on the need to decompose Pac-man into sub-agents/advisor value functions.\n\nSection 2\n\nAnother historically well-cited paper for MDP decomposition:\n\n Flexible Decomposition Algorithms for Weakly Coupled Markov Decision Problems, Ronald Parr. 
UAI 98.\n https://dslpitt.org/uai/papers/98/p422-parr.pdf\n\nSection 3\n\nIs the additive reward decomposition a required part of the problem specification? It seems so, i.e., there is no obvious method for automatically decomposing a monolithic reward function over advisors.\n\nSection 4\n\n* Egocentric:\n\nDefinition 1: Sure, the problem will have local optima (attractors) when decomposed suboptimally -- I'm not sure what new insight we've gained from this analysis... it is a general problem with any function approximation scheme that does not guarantee that the rank ordering of actions for a state is preserved.\n\n* Agnostic\n\nOther than approximating some type of myopic rollout, I really don't see why this approach would be reasonable? I am surprised it works at all though my guess is that this could simply be an artifact of evaluating on a single domain with a specific structure.\n\n* Empathic\n\nThis appears to be the key contribution though related work certainly infringes on its novelty. Is this paper then an empirical evaluation of previous methods in a single Pac-man grid world variant?\n\nI wonder if the theory of DEC-MDPs would have any relevance for novel analysis here?\n\nSection 5\n\nI'm disappointed that the authors only evaluate on a single domain; presumably the empathic approach has applications beyond Pac-Man?\n\nThe fact that empathic generally performs better is not at all surprising. The fact that a modified discount factor for egocentric can also perform well is not surprising given that lower discount factors have often been shown to improve approximated MDP solutions, e.g.,\n\n Biasing Approximate Dynamic Programming with a Lower Discount Factor\n\n Marek Petrik, Bruno Scherrer (NIPS-08).\t\n http://marek.petrik.us/pub/Petrik2009a.pdf\n\n***\n\nSide note:\n\nThe following part is somewhat orthogonal to the review above in that I would not expect the authors to address this on revision, *but* at the same time I think it provides a connection to the special case of concurrent action decomposition into advisors, which could potentially provide a high impact direction of application for this work (i.e., concurrent problems are hard and show up in numerous operations research problems covering inventory control, logistics, epidemic response).\n\nFor the special case that each advisor is assigned to one action in a factored space of concurrent actions, the egocentric algorithm would be very close to the Hindsight approximation in Section 6 of this paper (including an additive decomposition of rewards):\n\n Planning in Factored Action Spaces with Symbolic Dynamic Programming\n Aswin Nadamuni Raghavan, Alan Fern, Prasad Tadepalli, Roni Khardon, and Saket Joshi (AAAI-12).\n https://www.aaai.org/ocs/index.php/AAAI/AAAI12/paper/download/5012/5336\n\nThis simple algorithm is hard to beat for the following reason that connects some details of your egocentric and empathic settings: rather than decomposing a concurrent MDP into independent problems per concurrent action, the optimization of each action (by each advisor) is done in sequence (advisors are ordered) and gets to condition on the previously selected advisor actions. So it provides an alternate paradigm where advisors actually get to see and condition their policy on what other advisors are doing. In my own work comparing optimal concurrent solutions to this approach, I have found this approach to be near-optimal and much more efficient to solve since it exploits decomposition.\n\nWhy is this relevant to this work? 
Because (a) it suggests another variant of the advisor decomposition that at least makes sense in the case of concurrent actions (and perhaps shared actions though this would require some extension) and (b) it suggests there are more options than just the full egocentric and empathic settings in this important class of concurrent action problems that are necessarily solved in practice for large action spaces by some form of decomposition. This could be an interesting direction for future exploration of the ideas in this work, where there might be additional technical novelty and more space for empirical contributions and observations." ]
[ 4, 4, 4 ]
[ 4, 4, 5 ]
[ "iclr_2018_rkvDssyRb", "iclr_2018_rkvDssyRb", "iclr_2018_rkvDssyRb" ]
iclr_2018_rJIgf7bAZ
An inference-based policy gradient method for learning options
In the pursuit of increasingly intelligent learning systems, abstraction plays a vital role in enabling sophisticated decisions to be made in complex environments. The options framework provides formalism for such abstraction over sequences of decisions. However most models require that options be given a priori, presumably specified by hand, which is neither efficient, nor scalable. Indeed, it is preferable to learn options directly from interaction with the environment. Despite several efforts, this remains a difficult problem: many approaches require access to a model of the environmental dynamics, and inferred options are often not interpretable, which limits our ability to explain the system behavior for verification or debugging purposes. In this work we develop a novel policy gradient method for the automatic learning of policies with options. This algorithm uses inference methods to simultaneously improve all of the options available to an agent, and thus can be employed in an off-policy manner, without observing option labels. Experimental results show that the options learned can be interpreted. Further, we find that the method presented here is more sample efficient than existing methods, leading to faster and more stable learning of policies with options.
rejected-papers
The reviewers are unanimous that this is an interesting paper, but that ultimately the empirical results are not sufficiently promising to warrant the added complexity.
train
[ "B1LVBmqgf", "B1F3lO2lG", "ByV--6Xbz", "SyDFSOp7z", "H1uwNUJMz", "BkGEN81fz", "BkV07I1fM", "H1ci0S1zG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author" ]
[ "This paper treats option discovery as being analogous to discovering useful latent variables. The proposed formulation assumes there is a policy over options, which invokes an option’s policy to select actions at each timestep until the option’s termination function is activated. A contribution of this paper is to learn all possible options that might have caused an observed trajectory, and to update parameters for all these pertinent option-policies with backprop. The proposed method, IOPG, is compared to A3C and the option-critic (OC) on four continuous control tasks in Mujoco, and IOPG has the best performance on one of the four domains.\n\nThe primary weakness of this paper is the absence of performance or conceptual improvements in exchange for the additional complexity of using options. The only domain where IOPG outperforms both A3C and OC is the Walker2D-v1 domain, and the reported performance on that domain (~800) is far below the performance of other methods (shown on OpenAI’s Gym site or in the PPO paper). Also, there is not much analysis on what kind of options are learned with this approach, beyond noting that the options seem clustered on tSNE plots. Given the close match between the A3C agent and the IOPG agent on the other three domains, I expect that the system is mostly relying on the base A3C components with limited contributions from the extensions introduced in the network for options. \n\nThe clarity of the paper’s contributions could be improved. The contribution of options might be made more clearly in smaller domains or in more detailed experiments. How is the termination beta provided from the network? How frequently did the policy over options switch between them? How was the number of options selected, and what happens when the number of possible options is varied from 1 to 4 or beyond 4? To what extent was there overlap in the learned policies to realize the proposed algorithmic benefit of learning multiple option-policies from the same transitions? The results in this paper do not provide strong support for using the proposed method.\n\n", "The paper presents a new policy gradient technique for learning options. The option index is treated as latent variable and, in order to compute the policy gradient, the option distribution for the current sample is computed by using a forward pass. Hence, a single sample can be used to update all options and not just the option that has been used for this sample.\n \nThe idea of the paper is good but the novelty is limited. As noted by the authors, the idea of using inference for option discovery has already been presented in Daniel2016. Note that the option discovery process is Daniel2016 is not limited to linear sub-policies, only the policy update strategy is. So the main contribution is to use a new policy update strategy, i.e., policy gradients, for inference based option discovery. Thats fine but should be stated more clearly in the paper. The paper is also written very well and the topic is relevant for the ICLR conference. \n\nHowever, the paper has two main problems:\n- The results are not convincing. In most domains, the performance is similar to the A3C algorithm (which does not use inference based option discovery), so the impact of this paper seems limited. \n\n- One of the main assumptions of the algorithm is wrong. 
The assumption is that rewards from the past are not correlated with actions in the future conditioned on the state s_t (otherwise we would always have a correlation) ,which is needed to use the policy gradient theorem. The assumption is only true for MDPs. However, using the option index as latent variable yields a PoMDP. There, this assumption does not hold any more. Example: Reward at time step t-1 depends on the action, which again depends on the option o_t-1. Action at time step t depends on o_t. Hence, there is a strong correlation between reward r_t-1 and action a_t+1 as o_t and o_t+1 are strongly correlated. o_t is not a conditional variable of the policy as it is not part of the state, thats why this assumption does not work any more.\n\nSummary: The paper is well written and presents a good extension of inference based option discovery. However, the results are not convincing and there is a crucial issue in the assumptions of the algorithm. \n", "This paper proposes what is essentially an off-policy method for learning options in complex continuous problems. The idea is to use policy gradient style algorithms to update a suite of options using relatively \n\nOn the positive side, I like the core idea of this paper. The idea of updating multiple options at once is a good one. I think the authors should definitely continue to investigate this line of work. I also appreciated that the authors took the time to try and visualize what was learned. The paper is generally well-written and easy to read.\n\nOn the negative side: ultimately, the algorithm doesn't seem to work all that well. Empirically, the method doesn't seem to perform substantially better than other algorithms, although there seems to be some slight advantage. A clearly missing comparison would be something like TRPO or DDPG.\n\nFigure 1 was helpful in understanding marginalization and the forward algorithm. Thanks.\n\nWas there really only 4 options that were learned? How would this scale to more?\n", "We have uploaded an updated version of the paper, which addresses some of the concerns the reviewers had, as well as providing additional information on the nature of the options learned.", "We thank the reviewer for their time and insight. Individual points are addressed below.\n\n\n> Empirically, the method doesn't seem to perform substantially better than other algorithms, although there seems to be some slight advantage. A clearly missing comparison would be something like TRPO or DDPG.\n\nWe do not expect IOPG to outperform A3C significantly in a single task, but expect benefits in interpretability and transferability. Please see our official comment for our more detailed response to this. While TRPO has been shown to outperform A3C in certain situations, we feel that the policy update strategy is largely independent of the option learning method presented here. That is, it should not be too difficult to write an algorithm that uses trust region updates with option learning. We compare to A3C so that the value of our contribution in isolation is more clear. We could also add a comparison to an inferred-option extension of a more powerful policy search algorithm such as PPO, TRPO, or DDPG.\n\n\n> Was there really only 4 options that were learned? How would this scale to more?\n\nThe number of options learned is prespecified as a hyperparameter, as is the case in several option learning methods. The computational complexity is quadratic in the number of options, with linear memory complexity. 
We will add an experiment comparing the number of options in the next version of the paper.\n", "Thank you to the reviewer for their insightful comments. Individual points are addressed below.\n\n\n> The only domain where IOPG outperforms both A3C and OC is the Walker2D-v1 domain, and the reported performance on that domain (~800) is far below the performance of other methods (shown on OpenAI’s Gym site or in the PPO paper).\n\nWe do not expect IOPG to outperform A3C significantly in a single task, but expect benefits in interpretability and transferability. Please see our comment for our more detailed response to this. We would like to add here that we view the option learning strategy contributed here to be largely independent to the method used for policy optimization. This is to say that it should be easy to write a PPO-IOPG algorithm with the benefits of both. We compare to A3C so that the value of our contribution in isolation is more clear. We could also add a comparison to an inferred-option extension to PPO.\n\n\n> How is the termination beta provided from the network? \n\nWe apologize for forgetting to include this. Termination is sampled for the currently active option from a linear sigmoid layer on top of the policy network, as an additional head. We will clarify this in the updated version of the paper.\n\n\n> How frequently did the policy over options switch between them? \n\nWe will add this information to the appendix in the next version of the paper.\n\n\n> How was the number of options selected, and what happens when the number of possible options is varied from 1 to 4 or beyond 4?\n\nMost existing option learning methods require specification of the number of options as a hyperparameter. In general this is optimized according to the task at hand. Here, however, we did no optimization over this parameter, but we'll be happy to add an experiment to the next version of the paper.\n\n\n> To what extent was there overlap in the learned policies to realize the proposed algorithmic benefit of learning multiple option-policies from the same transitions?\n\nAt the start of learning, the policies tend to overlap highly due to random initialization. Because of this, early training benefits from the simultaneous update, as all options are implicated in every action. As training progresses, the t-SNE experiments demonstrate that is little overlap between final policies. Each policy appears to be active in a different region of state space. This is likely due to the fact that the most likely option is most updated, rather than a single option being updated improperly in the event of an unlikely action.\n", "We thank the reviewer for taking the time to evaluate our paper. Individual points are addressed below.\n\n\n> The assumption [that rewards are independent of future actions, conditioned on the current state] is only true for MDPs. However, using the option index as latent variable yields a PoMDP. There, this assumption does not hold any more.\n\nUnder the standard set of assumptions this would be correct. As shown in the line before Eqn. 3, the conditional assumption that we make is slightly different. It is true that a_k and r_j are not independent in general. However, they are conditionally independent given s_k and s_j, and a_j. We are conditioning on all of the observed states and observed actions since the start of the trajectory. 
Since the reward only depends on these observed variables, no information is passed to future actions.\n\n\n> As noted by the authors, the idea of using inference for option discovery has already been presented in Daniel2016. Note that the option discovery process is Daniel2016 is not limited to linear sub-policies, only the policy update strategy is. So the main contribution is to use a new policy update strategy, i.e., policy gradients, for inference based option discovery\n\nWe agree that the graphical model employed here is the same as that used in Daniel2016. However, the option inference step is not the same, since they employ the use of backward information, while we only require forwards information. This means that our algorithm can be employed online, while the one presented in Daniel2016 can only be applied in the episodic case, where updates are made only after the episode is terminated.\n\n\n> The results are not convincing. In most domains, the performance is similar to the A3C algorithm (which does not use inference based option discovery), so the impact of this paper seems limited.\n\nWe do not expect IOPG to outperform A3C significantly in a single task, but expect benefits in interpretability and transferability. Please see our official comment for our more detailed response to this. ", "We would like to thank the reviewers for their insightful comments. Here, we focus on the issue that all three reviewers raised: that A3C does as well as IOPG in most environments.\n\nIt is our opinion that A3C ought to perform roughly as well as IOPG. The optimization performed is nearly identical between the two algorithms, where IOPG is parameterized in a particular manner such that options can be learned. We developed IOPG as a data-efficient method to optimize several options simultaneously. We present it in a general form, without any sort of regularization on the structure of the options. Even without such regularization, the options learned by IOPG express some worthwhile characteristics, which several existing option learning algorithms cannot produce: namely temporal extension, and spatial separation. Without additional problem-specific regularization on the structure of those options, there is no reason to expect performance improvements in the single-task setting.\n\nThis said, we feel that the extra structure learned by IOPG yields several benefits. Options can be useful for the interpretation of agent behaviours, as our t-SNE experiments (Fig. 3) show. Further there is strong evidence to suggest that learned options can be useful for transfer learning (OptionGAN: Henderson et al. 2018, Option-Critic: Bacon et al. 2017, Subgoal Discovery: McGovern and Barto 2001). We feel that these benefits make IOPG a worthwhile algorithm, especially since it comes at no cost to data efficiency, variance, or asymptotic learning compared to A3C. We are currently working on experiments that better quantify such upsides." ]
[ 3, 3, 4, -1, -1, -1, -1, -1 ]
[ 4, 5, 4, -1, -1, -1, -1, -1 ]
[ "iclr_2018_rJIgf7bAZ", "iclr_2018_rJIgf7bAZ", "iclr_2018_rJIgf7bAZ", "iclr_2018_rJIgf7bAZ", "ByV--6Xbz", "B1LVBmqgf", "B1F3lO2lG", "iclr_2018_rJIgf7bAZ" ]
iclr_2018_Bk-ofQZRb
TD Learning with Constrained Gradients
Temporal Difference Learning with function approximation is known to be unstable. Previous work like \citet{sutton2009fast} and \citet{sutton2009convergent} has presented alternative objectives that are stable to minimize. However, in practice, TD-learning with neural networks requires various tricks like using a target network that updates slowly \citep{mnih2015human}. In this work we propose a constraint on the TD update that minimizes change to the target values. This constraint can be applied to the gradients of any TD objective, and can be easily applied to nonlinear function approximation. We validate this update by applying our technique to deep Q-learning, and training without a target network. We also show that adding this constraint on Baird's counterexample keeps Q-learning from diverging.
rejected-papers
The reviewers agree this paper is not yet ready for publication.
train
[ "H1JLMpYlM", "rk0cH1cgM", "rJHtH-clM", "SJjLLwp7z", "SkQRzPJzz", "S1HLi7-bG", "HJmlAOcxM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "public", "public", "public" ]
[ "Summary: This paper tackles the issue of combining TD learning methods with function approximation. The proposed algorithm constrains the gradient update to deal with the fact that canonical TD with function approximation ignores the impact of changing the weights on the target of the TD learning rule. Results with linear and non-linear function approximation highlight the attributes of the method.\n\nQuality: The quality of the writing, notation, motivation, and results analysis is low. I will give a few examples to highlight the point. The paper motivates that TD is divergent with function approximation, and then goes on to discuss MSPBE methods that have strong convergence results, without addressing why a new approach is needed. There are many missing references: ETD, HTD, mirror-prox methods, retrace, ABQ. Q-sigma. This is a very active area of research and the paper needs to justify their approach. The paper has straightforward technical errors and naive statements: e.g. the equation for the loss of TD takes the norm of a scalar. The paper claims that it is not well-known that TD with function approximation ignores part of the gradient of the MSVE. There are many others.\n\nThe experiments have serious issues. Exp1 seems to indicate that the new method does not converge to the correct solution. The grid world experiment is not conclusive as important details like the number of episodes and how parameters were chosen was not discussed. Again exp3 provides little information about the experimental setup.\n\nClarity: The clarity of the text is fine, though errors make things difficult sometimes. For example The Bhatnagar 2009 reference should be Maei.\n \nOriginality: As mentioned above this is a very active research area, and the paper makes little effort to explain why the multitude of existing algorithms are not suitable. \n\nSignificance: Because of all the things outlined above, the significance is below the bar for this round. ", "This paper proposes adding a constraint to the temporal difference update to minimize the effect of the update on the next state’s value. The constraint is added by projecting the original gradient to the orthogonal of the maximal direction of change of the next state’s value. It is shown empirically that the constrained update does not diverge on Baird’s counter example and improves performance in a grid world domain and cart pole over DQN.\n\nThis paper is reasonably readable. The derivation for the constraint is easy to understand and seems to be an interesting line of inquiry that might show potential.\n\nThe key issue is that the justification for the constrained gradients is lacking. What is the effect, in terms of convergence, in modifying the gradient in this way? It seems highly problematic to simply remove a whole part of the gradient, to reduce effect on the next state. For example, if we are minimizing the changes our update will make to the value of the next state, what would happen if the next state is equivalent to the current state (or equivalent in our feature space)? In general, when we project our update to be orthogonal to the maximal change of the next states value, how do we know it is a valid direction in which to update? \n\nI would have liked some analysis of the convergence results for TD learning with this constraint, or some better intuition in how this effects learning. At the very least a mention of how the convergence proof would follow other common proofs in RL. 
This is particularly important, since GTD provides convergent TD updates under nonlinear function approximation; the role for a heuristic constrained TD algorithm given convergent alternatives is not clear. \n \nFor the experiments, other baselines should be included, particularly just regular Q-learning. The primary motivation comes from the use of a separate target network in DQN, which seems to be needed in Atari (though I am not aware of any clear result that demonstrates why, rather just from informal discussions). Since you are not running experiments on Atari here, it is invalid to simply assume that such a second network is needed. A baseline of regular Q-learning should be included for these simpler domains. \n\nThe results in Baird’s counter example are discouraging for the new constraints. Because we already have algorithms which better solve this domain, why is your method advantageous? The point of showing your algorithm not solve Baird’s counter example is unclear.\n\nThere are also quite a few correctness errors in the paper, and the polish of the plots and language needs work, as outlined below. \n\nThere are several mistakes in the notation and background section. \n1. “If we consider TD-learning using function approximation, the loss that is minimized is the squared TD error.“ This is not true; rather, TD minimizes the mean-squared project Bellman error. Further, L_TD is strangely defined: why a squared norm, for a scalar value? \n2. The definition of v and delta_TD w.r.t. to v seems unnecessary, since you only use Q. As an additional (somewhat unimportant) point, the TD-error is usually defined as the negative of what you have. \n3. In the function approximation case the value function and q functions parameterized by \\theta are only approximations of the expected return.\n4. Defining the loss w.r.t. the state, and taking the derivative of the state w.r.t. to theta is a bit odd. Likely what you meant is the q function, at state s_t? Also, are ignoring the gradient of the value at the next step? If so, this further means that this is not a true gradient. \n\nThere is a lot of white space around the plots, which could be used for larger more clear figures. The lack of labels on the plots makes them hard to understand at a glance, and the overlapping lines make finding certain algorithm’s performance much more difficult. I would recommend combining the plots into one figure with a drawing program so you have more control over the size and position of the plots.\n\nExamples of odd language choices:\n\t-\t“The idea also does not immediately scale to nonlinear function approximation. Bhatnagar et al. (2009) propose a solution by projecting the error on the tangent plane to the function at the point at which it is evaluated. “ - The paper you give exactly solves for the nonlinear function approximation case. What do you mean does not scale to nonlinear function approximation? Also Maei is the first author on this paper.\n\t-\t“Though they do not point out this insight as we have” - This seems to be a bit overreaching.\n- “the gradient at s_{t+1} that will change the value the most” - This is too colloquial. I think you simply mean the gradient of the value function, for the given s_t, but its not clear. ", "This is an interesting idea, and written clearly. The experiments with Baird's and CartPole were both convincing as preliminary evidence that this could be effective. However, it is very hard to generalize from these toy problems. 
First, we really need a more thorough analysis of what this does to the learning dynamics itself. Baring theoretical results, you could analyze the changes to the value function at the current and next state with and without the constraint to illustrate the effects more directly. I think ideally, I would want to see this on Atari or some of the continuous control domains often used. If this allows the removing of the target network for instance, in those more difficult tasks, then this would be a huge deal.\n\nAdditionally, I do not think the current gridworld task adds anything to the experiments, I would rather actually see this on a more interesting linear function approximation on some other simple task like Mountain Car than a neural network on gridworld. The reason this might be interesting is that when the parameter space is lower dimensional (not an issue for neural nets, but could be problematic for linear FA) the constraint might be too much leading to significantly poorer performance. I suspect this is the actual cause for it not converging to zero for Baird's, although please correct me if I'm wrong on that.\n\nAs is, I cannot recommend acceptance given the current experiments and lack of theoretical results. But I do think this is a very interesting direction and hope to see more thorough experiments or analysis to support it.\n\nPros:\nSimple, interesting idea\nWorks well on toy problems, and able to prevent divergence in Baird's counter-example\n\nCons:\nLacking in theoretical analysis or significant experimental results\n", "I would like to thank the commenter for their in-depth study of our algorithm.\nIt is definitely very helpful in analyzing our approach and to guide our further inquiries.\nWe apologize for not being very clear in our methodology or hyper-parameters.\nWe will definitely make a more concerted effort to maintain reproducibility and also make sure to report all training conditions and hyper-parameters in the future.\n", "In efforts to ensure that published results are reliable and reproducible in Machine Learning research, we investigated the reproducibility of empirical results of this paper. We tried to reproduce the experimental results shown in the paper. However, there were some difficulties we faced.\n\n(1) The notation and equations shown in the paper lack of clarity. For example, the authors did not mathematically define the variable g_v(s_{t+1}), they described it as the gradient at s_{t+1} that will change the value most. After some research, we found out that the authors have replaced the g_v(s_{t+1}) with gTD(s_{t+1}) in their revised version of this paper that was submitted to the Deep Reinforcement Learning Symposium at NIPS. Only after this discovery were we able to proceed with the implementation.\n(2) There are no clear mention of how they are calculating g_{TD}(s_{t+1}) which seems to be the gradient of the TD error with respect to the next state. However, after communicating with the authors we found that g_{TD}(s_{t+1}) is the gradient of the same TD error with respect to the target and all the experiments are done by taking a single step within the environment. We had difficulties in figuring out how this translates to Q learning because the target is a max operator applied to the next state action pair. Under such circumstances we will not be able to differentiate with respect to the target.\n(3) There were no code available for the experiments by the time we finished this report. 
We discussed with the authors regarding the availability of the code base and was assured that it is going to be released soon.\n(4) The authors did not report all the hyper-parameters they used in their experiments. \n\nAlthough we were not able to fully duplicate the experiments due to the above reasons, we would like to share what we did and our findings. You can check out the full report at https://www.overleaf.com/read/tdzyfmjzkhyj\n(1) Cartpole : we used the exact set of hyper-parameters reported by the authors. In addition, we used the default open-AI batch size as it is not mentioned by the authors. The baseline we got is quite different than the one of the authors'. However, interestingly, a model with single hidden layer of 64 units got us a baseline result that is as good as the results claimed by the authors.\n(2) GridWorld : We have run DQN in the 10x10 Grid World environment as proposed by the authors. Since the authors did not mention the starting point they used, we set it to be (0,0). For DQN we used two hidden layers, units of size 32 per hidden layer. We executed a soft max policy and feed the (x,y) coordinates of the agent in the network. As the authors did not mention the total episodes they ran. we therefore ran them over 1000 episodes and took an average over 10 independent runs as before. We computed the Q values for DQN with that of the value function obtained by running policy evaluation in this domain, and obtained a mean squared error around 0.38. Note that we only verified the DQN baseline, we did not verify the proposed algorithm in the DQN setup. \n(3) Baird's counterexample : We ran both TD and constrained TD on the Baird-6-state setup for 2000 steps each run and we made 10 independent runs. We set the discount factor to be 0.99, the learning rate to be 0.01. In addition we extracted the feature values from the graph shown in the paper. We initialized the weights the same way Sutton did in his book for the Baird's counterexample section.\nWe observed the diverging behavior when running regular TD. We obtained a similar baseline to that mentioned in the paper. Nevertheless, we found that our constrained TD produced a quite different shape, a bell shape, heavy tail curve compared to a converging straight line after iteration 100 reported by the authors. \n(4) linear function approximation : The authors claimed that the constraint can be applied to the gradients of any TD objective. Thus we also tried some experiments with linear function approximation in open-AI mountain car environment (max size of step =200 ). We made 10 independent runs and took the average. We ran both Q learning and SARSA and tried to implement the Constrained SARSA and Constrained Q learning. In terms of Q learning the target is simply a max function over the next state action pair, and in such cases this max function is not differentiable. Further more we tried SARSA as the target is differentiable. For our implementation we use RBF kernel for approximating value functions. We have incorporated a learning rate of 0.01 and a discount factor of 0.99 for our implementation. It is important to note that we observed no signs of learning for Constrained SARSA in this environment.\nIn conclusion, we hope that our above findings are helpful to the authors as well as people who are interested in this paper. We encourage the authors to publish their code and provide more details about hyper-parameters for their work. 
\n\n", "The algorithm does not generalize to anything more complicated than toy environments like Cartpole, as multiple reviewers have pointed out. I'm happy to be proven wrong, but I strongly doubt it would help your real task. ", "Hi I am trying to repeoduce your results. Is it possible to share the codes for this work? " ]
[ 2, 3, 4, -1, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, -1 ]
[ "iclr_2018_Bk-ofQZRb", "iclr_2018_Bk-ofQZRb", "iclr_2018_Bk-ofQZRb", "SkQRzPJzz", "iclr_2018_Bk-ofQZRb", "HJmlAOcxM", "iclr_2018_Bk-ofQZRb" ]
iclr_2018_SyF7Erp6W
Learning to play slot cars and Atari 2600 games in just minutes
Machine learning algorithms for controlling devices will need to learn quickly, with few trials. Such a goal can be attained with concepts borrowed from continental philosophy and formalized using tools from the mathematical theory of categories. Illustrations of this approach are presented on a cyberphysical system: the slot car game, and also on Atari 2600 games.
rejected-papers
This paper does not seem completely appropriate for ICLR.
val
[ "SyI9nuuez", "SJyMuKclM", "B1_Afzy-f", "SJxoffIbM", "Hk51IWIZG", "H1nySWL-G", "Sy_hfWU-z" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "The authors argue that many machine learning systems need a large amount of data and long training times. To mend those shortcomings their proposed algorithm takes the novel approach of combining mathematical category theory and continental philosophy. Instead of computation units, the concept of entities and a 'me' is introduced to solve reinforcement learning tasks on a cyber-physical system as well as the Atari environment. This allows for an AI that is understandable for humans at every step of the computation in comparison to the 'black box learning of an neural network. \n \n \nPositives:\n\t•\tNovel approach towards more explainable and shorter training times/ less data\n\t•\tSolid mathematical description in part 3.3\n\t•\tSetup well explained\n \n \nNegatives:\n\t•\tUse of colloquial language (the first sentence of the abstract alone contains the word 'very' twice)\n\t•\tSome paragraphs are strangely structured\n\t•\tIncoherent abstract \n\t•\tOnly brief and shallow motivation given (No evidence to support the claim)\n\t•. Brief and therefore confusing mention of methods \n\t•\tNo mention of results\n\t•\tVery short in general\n\t•\tMany grammatical errors (wrong tense use, misuse of a/an,... )\n\t•\tRelated Work is either Background or an explanation of the two test systems. While related approaches in those systems are also provided, the section is mainly used to introduce the test beds \n\t•\tNo direct comparison between algorithm and existing methods is given. It is stated that some extra measures from other measures such as sensors are not used and that it learns to rank with a human in under a minute. However, many questions remain unanswered: But how good is this? How long do other systems need? Is this a valid point to raise? What score functions do other papers use?\n\t•\t2.2: Title choice could have been more descriptive of the subsection. 'Video Games' indicates a broader analysis of RL in any game but the section mainly restricts itself to the Atari Environment\n\t•\tWhile many methods are mentioned they are not set in context but only enumerated. Many concepts are only named without explanation or how they fit into the picture the authors are trying to paint.\n \n\t•\tA clear statement of the hypothesis and reason/motivation behind pursuing this approach is missing. Information is indirectly given in the third section where the point is raised that the approach was chosen in contrast to 'black box NNs'. This seems to be a very crucial point that could have been highlighted more. The achieved result are by no means comparable to the NN approaches but they are faster and explainable for a human. \n\t•\tDreyfus' criticism of AI is presented as the key initiator for this idea. Ideas by other authors that utilise this criticism as their foundation are conceptually similar, they could have therefore been mentioned in the related work section.\n\t•\tThe paper fails to mention the current movement in the AI community to make AI more explainable. One of their two key advantages seems to be that they develop a more intuitive explainable system. However, this movement is completely ignored and not given a single mention. The paper, therefore, does not set their approach in context and is not able to acknowledge related work in this area. 
\n\t•\tThe section about continental based philosophy is rather confusing\n\t•\tInstead of explaining the philosophy, analytical philosophy is described in details and continental philosophy is only described as not following analytical patterns. A clear introduction to this topic is missing.\n\t•\tWhen described, it is stated that it's a mix of different German and French doctrines that are name dropped but not explained\\ and leave the reader confused.\n\t•\tResult section not well structured and results lack credibility:\n\t•\tLong sections in the result section describe the actual algorithm. This should have been discussed before the results.\n\t•\tResults for slot car are not convincing:\n\t•\tTable 1 only shows the first the last and the best lap (and in most of them the human is better) \n\t•\tNot even an average measure is given only samples. This is very suspicious.\n\t•\tWhy the comparison with DQN and only DQN? How was this comparison initialised? Which parameters were used? Neither is the term DQN resolved as Deep Q-Network nor is any explanation given. There are many methods/method classes performing RL on the Atari Environment. The mention of only one comparison leaves reasonable doubt about the claim that the system learns faster. \n \nSUMMARY: Reject. Even though the idea presented is a novel contribution and has potential the paper itself is highly unstructured and confusing and lacks a proper grammar check. No clear hypothesis is formed until section 3. The concept of Explainable AI which could have been a good motivation does not find any mentioning. Key concepts such as continental philosophy are not explained in a coherent way. The results are presented in a questionable way. As the idea is promising it is recommended to the authors to restructure the paper and conduct more experiments to be able to get accepted. \n", "In this paper the authors address the very important challenges of current deep learning approaches, which is that these algorithms typically need an extraordinarily large number of training rounds to learn their strategies. The authors note that in real life, this type of training will outstrip both the training and time budget of most real world problems. The solution they propose is to take a high level approach and to learn more like humans do by creating strategies that involve relationships between entities rather than trying to build up strategies from pixels. \nThe authors credit their reframing of their approach to AI to the “continental philosophers” (e.g. Heidegger) in opposition to the “analytical philosophers” such as Wittgenstein. The authors associate current machine learning approaches with the analytic philosophers, based on propositions that are either provably true or untrue and their own approach as in opposition to these, however from my reading of this paper what the authors are saying is that if you start learning with higher level concepts (relationships between entities) rather than doing analysis on low level information such as pixels. Starting with low level concepts makes learning very difficult at first and leads to a path where many trials are required. 
Staring from higher level concepts such as relationships between entities allows learning to happen quickly and in a manner much more similar in nature to what humans actually do.\nWhile the authors bring up many valid points, and in essence I believe that they may be correct, the flaw in this paper is that they do not provide methods for teaching computers to learn these higher level concepts. The algorithms they present all require human knowledge to be encoded in the algorithms to identify the higher level concepts. The true power of the deep learning approach is that it can actually learn from low level data, without humans hand crafting the higher level entities on their behalf.\n\nWhile I agree with Dreyfus that understanding what is important and interesting given a situation would be an incredible boon to any AI algorithm, it remains an unsolved problem as to how to teach a computer to understand what is interesting in a scene with the same intuition that a human has. In the first experiment the authors need to pre-define the concepts of a straight road and a curved road and identify them for the algorithm. They also need to tell the algorithm exactly how to count the number of sections that the track has. In the second experiment, to identify the “Me” in the game, the authors instruct the computer to recognize “me” as the things that move when the controller is activated. While in some ways this is clever, mimicking what a child might do to see what moves in the world when it issues a command to move from its own brain and thus learning what “me” is, children take year to develop a sense of “self” and part of that is learning that a “concept of self” is an interesting and useful thing to have. In their work the authors know, from their human intelligence, what are the important concepts in the game (again from a human perspective) and devise simple methods for the computer to learn these. Again the problem here is that the human has to define the important concepts for the computer and define a specific strategy for the computer to learn to identify these important policies. Data intensive deep learning algorithms are able to infer strategies without these concepts being defined for them.\n\nThis reframing does point out a different and perhaps better path for AI, but it is not entirely new and this paper does not present a method for getting from sensed data to higher level concepts. For each of the experiments, the strategies used rely on human intuition to define policies. In the first experiment with slot cars, a human needs to provide n laps of driving to imitate. The authors identify the “shortest lap” and store it for the “AI” to replay. The only “learning” is from an optimization that minimizes the difference between the AI’s lap time and the best lap time (tbest) of the human by scaling that recorded sample of the human driving. This results is a strategy that is essentially just trying to replicate (imitate) what the human is doing will not lead to a generalizable learning strategy that could ever exceed a human example. This is at best a very limited form of imitation learning. 
The learning process for the second example is explained in even less detail.\nOverall, this paper presents a different way of thinking about AI, one in which the amount of training time and training data required for learning is greatly reduced, however what is missing Is a generalizable algorithmic strategy for implementing this framework.m \n\n \n\n\n\n", "For me, this paper is such a combination of unusual (in the combinations of ideas that it presents) and cryptic (in its presentation) that I have found it exceedingly hard to evaluate fairly. For example, Section 4 is very unclear to me. Its relationship to Section 3.3 is also very unclear to me. \n\nBefore that point in the paper, there are many concepts and ideas alluded to, some described with less clarity than others, but the overall focus is unclear to me and the relationship to the actual algorithms and implementation is also unclear to me. That relationship (conceptual motivation --> implementation) is exactly what would be needed in order to fully justify the inclusion (in an ICLR paper) of so much wide-ranging philosophical/conceptual discussion in the paper.\n\nMy educated guess is that other reviewers at ICLR may have related concerns. In its current state, it therefore does not feel like an appropriate paper for this particular conference. If the authors do feel that the content itself is indeed a good fit, then my strong recommendation would be to begin by re-writing it so that it makes complete sense to someone with a \"standard\" machine learning background. The current presentation just makes it very hard to assess, at least for this reviewer.\n\nIf other reviewers are more easily able to read and understand this paper, then I will be glad to defer to their assessments of it and will happily retract my own.", "Thank you for the response. \n\nAlso, I should add that as a reviewer, I do appreciate that the authors respected the page limit.\n\nIn terms of whether it's worthwhile for the authors to invest time in modifying the paper by exceeding the limit in order to explain things more clearly, unfortunately I don't know the answer to that.\n\nI suspect that for this particular conference, it may not be worth the time, but I acknowledge that I may be biased due to not fully understanding the paper in its current form. From what I see from the now-posted other reviews, it may simply make it clearer that the paper does not quite fit this conference.But until the details are provided, I just don't know.\n\nOn the other hand, I believe that expanding the (machine learning) details of the paper will be helpful for publication in any machine-learning venue, so in that sense, if the authors are particularly interested in publishing this work in a machine-learning venue,it is indeed worth providing more such details.\n\nAs one possible guideline, I would suggest that the authors imagine if they were to hand the paper to a competent graduate student, and ask: would the student have enough information, in what is provided in the paper, to implement something close to what the authors have done? \n", "Dear AnonReviewer3,\nThank you for the feedback. We are glad that you understood the target at which we were aiming: the reduction of training time when controlling devices. Below we will try to clarify the points that you mentioned.\n\n- “No direct comparison between algorithm and existing methods is given”. 
For the slot car setup, we only compared our results to published data as mentioned in 2.1 (example: twelve hours to learn to drive a slot car). We did not reproduce the complex hardware setup described in the publications, as our target, described in the introduction, was to find an “alternative approach to teach computers to learn quickly to perform as efficiently as the existing solution with approximately one percent of the training data, time, and computing resources”. Neither the vision-based approach nor the added-sensors and embedded processor solution fit in this framework. The results we obtained with our own algorithm are tabulated in 4.1 and need, as written in our text, up to ten laps to learn. This leads to less than a minute of learning time, as the longest laps are less than 6 seconds (table 1). Even though no lower-bound was given for the lap time (which would indeed give an indication of how good drivers are), the minute versus hours of learning time was in line with our initial claim. “How long do other systems need? Is this a valid point to raise?” is thus answered by figures in our paper, and is going towards our original goal that we described in the introduction “to perform as efficiently as the existing solution with approximately one percent of the training data, time, and computing resources”.\n- We agree that 2.2 could have mislead the reader into thinking that we would go beyond the Atari 2600 games (only a Nintendo game is mentioned in Lee et al., 2014).\n- “A clear statement of the hypothesis and reason/motivation behind pursuing this approach is missing”. The motivation is given in the introduction, along with the targets: “algorithms that control cyberphysical systems to learn with very little data how to operate quickly in a partially-known environment” and the target is “teach computers to learn quickly to perform as efficiently as the existing solution with approximately one percent of the training data, time, and computing resources”.\n- About bibliographical references and Dreyfus: we are willing to add more references, including both explainable AI and other approaches relying on Dreyfus ideas. We only removed them from our submission to comply with the “strong recommendation” of 8+1 pages.\n- Continental based philosophy definition: we followed the classic explanation of continental philosophy as defined with respect to analytical philosophy. It is difficult to provide a definition of continental philosophy that is accepted by everyone in the field, except by opposition to analytical philosophy.\n- “Result section not well structured and results lack credibility”: if allowed to go beyond the 8+1 page, we will be glad to split the description of the experimental setup and the results. As proposed to other reviewers, we will then also spend more time explaining the results that are tabulated, and describing what really occurs while our algorithm is learning. Regarding Table 1, we mostly gave average times plus or minus the standard deviation. Example for circuit 2: 3.08+-0.54s for the human means 3.08s average, and a standard deviation from the mean (std) of 0.54s. The AI reached 3.13s for the mean value, which is worse than 3.08s for the human, however with a std of 0.02s compared to 0.54s for the human. The AI is thus more consistent than a human, while being slightly slower than the human. “Not even an average measure is given only samples. This is very suspicious”: the only samples are the best lap for the human being. 
All other results consist of an average and a standard deviation.\n- “Why the comparison with DQN and only DQN? How was this comparison initialised? Which parameters were used?” We agree on the lack of explanation for the term DQN, this was an oversight on our part. The comparison with the DQN was done following the publication referenced in our paper, reproducing the results using the same setup as explained in 4.2: “The tests are carried out with the settings from Mnih et al. (2015)”. As for why we choose the DQN, the reason is that this publication is one of the most cited in relation to Atari 2600 games, and is the de-facto benchmark to which one must refer. Although we aim to control cyberphysical systems, we needed to validate the versatility of our approach by first testing it on this standard.\n\nAs asked to the other reviewers, please tell us if adding a few pages to clarify the setup and the results could lead to an accepted paper. It would also include minor modifications to remove any ambiguity found by yourself and the other reviewers, including grammatical errors.\n\nSincerely yours,\nThe Authors.", "Dear AnonReviewer1,\nThank you for the feedback. We are glad that you understood the target at which we were aiming: the reduction of the training time when controlling devices, and the concept we borrowed from different sciences such as philosophy and linguistics. We hope to clarify the fact that the algorithm does not need a specific training for each configuration, and that it does not always replay or “scales recorded samples of the human driving” as you mentioned.\n\nWe illustrate two cases with the slot car: the case where the car drives on the same track (bijective case), and the case where the track is unknown (analogy case). While it is true that the algorithm learns from the best lap for the bijective case, as you clearly describe in your review, the analogy case is different. In the analogy case, as written on page 7, section 4.1, the algorithm “transposes knowledge previously acquired for a different track configuration”. As there is no bijection between the two circuit configurations, there is no possibility to replay something that would have been recorded. The algorithm infers in real time, from only current and voltage measurements, that the car is in a configuration that we (humans) call curve or straight. It relies on a classifier (k-nn) with two classes. This number of classes could be expanded to higher values, that would lead to a more complex description in human terms, which would in turn defeat the purpose of this toy problem. The algorithm then chooses the best control signal based on its previous experiences (best in order to reach the goal of decreasing lap time while staying on the track). We do admit that we used the terms “straight” and “curve” in our explanation, but the algorithm simply classifies current and voltage to choose a control signal so as to stay on the track while decreasing the lap time.\n\nThe algorithm uses this past knowledge (the control signal for each class) in a previously unencountered situation. In this way it generalizes its strategy and adapts to a radically different case: circuit 2 differs from circuit 1, and a replay of a recorded strategy learned on one circuit or scaled “recorded samples of the human driving” would fail on the other circuit.\n\nThe only shortcoming we did when applying this theory for the slotcar was to skip the search for the “me”, as there is only one entity with dynamic behavior. 
The algorithm still applies concepts outlined in 3.2 such as the search for enemies (the enemy being a car crash).\n\nHowever, our algorithm does not “count the number of sections that the track has”: please tell us what part of our document could be improved to avoid such misunderstanding. The algorithm has absolutely no way to count such sections, nor the required sensors as far as we can tell: it only measures the voltage and the current.\n\nThe second example (Atari games) does not even rely on the bijective case, because there is no human-provided reference or gameplay. It is thus discovering everything, as in classic reinforcement learning approaches. The only hard-coded concepts are:\n- the fact that there is at least one “me” among the entities (which means, in control theory terms, that there is at least one system responding to a control signal. Its transfer function is unknown).\n- the fact that going towards friends is the first strategy to apply to survive.\nThe rest is inferred by the algorithm: the friends and the enemies are updated based on the evolution of the score function, and the control signal sent is based on this information. This is the reason why it needs a few thousands frames to start increasing the score: in the beginning, it does not properly locate the “me” or, if it does, it does not yet know who is a friend. Once it is inferred (as it is inferred with deep learning or reinforcement learning methods), only the concepts relevant for the task at hand are transferred, even if there is no bijection between the structures and the states (as explained in section 3.3).\n\n\nWe are willing to add more references, including both explainable AI and other approaches relying on Dreyfus’ ideas. We only removed them from our submission to comply with the “strong recommendation” of 8+1 pages. We can also shorten the description of the analog electronics (which we included because of the added constraint of low cost when designing this kind of new solution). We could thus spend more time explaining the results that are tabulated, and what really occurs when our algorithm is learning.\n\nPlease tell us if such modifications are within the scope of what is advised (including adding a few pages for the explanation of the results), and if it could lead to an acceptation of the paper.\n\nSincerely yours,\nThe Authors.", "Dear AnonReviewer2,\nThank you for your feedback. We hope to clarify the points that you mentioned in your review:\n- the motivation is outlined in the introduction (page 1) with the target being “algorithms that control cyberphysical systems”, under the constraint “to learn with very little data how to operate quickly in a partially-known environment”. The quantitative target was given by “This work thus started as an alternative approach to teach computers to learn quickly to perform as efficiently as the existing solution with approximately one percent of the training data, time, and computing resources.”\n- The concepts, borrowed from existing and known results in sciences such as structural linguistics and continental philosophy, are described in section 3 “continental-philosophy-based theoretical approach”. We acknowledge the fact that continental-philosophy is defined in our paper as (in a simplified form) “everything but analytic philosophy” for the lack of better definition that is accepted by everyone in the field.\n- Section 3.3 deals with the case when imitation does not work, i.e. 
when the AI cannot reach the goal by imitating a known, working solution. The results from this section are used in the experimental part 4, for instance for the slot car as written on page 7 in 4.1: “The analogy-based approach transposes knowledge previously acquired for a different track configuration thanks to equation (1). This knowledge – a high safe speed for a given s – is transposed via non-bijective analogies presented in 3.3 with the function h((u, i) s ) evaluated with a k-NN.”\n\nAs proposed to the other reviewers, if the 9-page limit (that was strongly recommended) can be bypassed, we will gladly describe and analyze more deeply the results that we tabulated in our paper, including when imitation is used, and when analogies are used.\n\nPlease tell us if such modifications would make our paper in line with ICLR requirements.\nRegards,\nThe Authors" ]
[ 3, 2, 3, -1, -1, -1, -1 ]
[ 2, 5, 1, -1, -1, -1, -1 ]
[ "iclr_2018_SyF7Erp6W", "iclr_2018_SyF7Erp6W", "iclr_2018_SyF7Erp6W", "Sy_hfWU-z", "SyI9nuuez", "SJyMuKclM", "B1_Afzy-f" ]
iclr_2018_rJ3fy0k0Z
Deterministic Policy Imitation Gradient Algorithm
The goal of imitation learning (IL) is to enable a learner to imitate an expert’s behavior given the expert’s demonstrations. Recently, generative adversarial imitation learning (GAIL) has successfully achieved it even on complex continuous control tasks. However, GAIL requires a huge number of interactions with environment during training. We believe that IL algorithm could be more applicable to the real-world environments if the number of interactions could be reduced. To this end, we propose a model free, off-policy IL algorithm for continuous control. The keys of our algorithm are two folds: 1) adopting deterministic policy that allows us to derive a novel type of policy gradient which we call deterministic policy imitation gradient (DPIG), 2) introducing a function which we call state screening function (SSF) to avoid noisy policy updates with states that are not typical of those appeared on the expert’s demonstrations. Experimental results show that our algorithm can achieve the goal of IL with at least tens of times less interactions than GAIL on a variety of continuous control tasks.
rejected-papers
All of the reviewers found some aspects of the formulation and experiments interesting, but they found the paper hard to read and understand. Some of the components of the technique, such as the state screening function (SSF), seem ad hoc and heuristic without much justification. Please improve the exposition and either remove the unnecessary components of the technique or provide better justification for them.
train
[ "S1_na_OlG", "B1nuCculG", "S1tVQ5Kef", "SypN6BT7M", "SknsnHTQG", "S1WJnrpmz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "This paper proposes to extend the determinist policy gradient algorithm to learn from demonstrations. The method is combined with a type of density estimation of the expert to avoid noisy policy updates. It is tested on Mujoco tasks with expert demonstrations generated with a pre-trained network. \n\nI found the paper a bit hard to read. My interpretation is that the main original contribution of the paper (besides changing a stochastic policy for a deterministic one) is to integrate an automatic estimate of the density of the expert (probability of a state to be visited by the expert policy) so that the policy is not updated by gradient coming from transitions that are unlikely to be generated by the expert policy. \n\nI do think that this part is interesting and I would have liked this trick to be used with other imitation methods. Indeed, the deterministic policy is certainly helpful but it is tested in a deterministic continuous control task. So I'm not sure about how it generalizes to other tasks. Also, the expert demonstration are generated by the pre-trained network so the distribution of the expert is indeed the distribution of the optimal policy. So I'm not sure the experiments tell a lot. But if the density estimation could be combined with other methods and tested on other tasks, I think this could be a good paper. ", "This paper considers the problem of model-free imitation learning. The problem is formulated in the framework of generative adversarial imitation learning (GAIL), wherein we alternate between optimizing reward parameters and learner policy's parameters. The reward parameters are optimized so that the margin between the cost of the learner's policy and the expert's policy is maximized. The learner's policy is optimized (using any model-free RL method) so that the same cost margin is minimized. Previous formulation of GAIL uses a stochastic behavior policy and the RIENFORCE-like algorithms. The authors of this paper propose to use a deterministic policy instead, and apply the deterministic policy gradient DPG (Silver et al., 2014) for optimizing the behavior policy. \nThe authors also briefly discuss the problem of the little overlap between the teacher's covered state space and the learner's. A state screening function (SSF) method is proposed to drive the learner to remain in areas of the state space that have been covered by the teacher. Although, a more detailed discussion and a clearer explanation is needed to clarify what SSF is actually doing, based on the provided formulation.\nExcept from a few typos here and there, the paper is overall well-written. The proposed idea seems new. However, the reviewer finds the main contribution rather incremental in its nature. Replacing a stochastic policy with a deterministic one does not change much the original GAIL algorithm, since the adoption of stochastic policies is often used just to have differentiable parameterized policies, and if the action space is continuous, then there is not much need for it (except for exploration, which is done here through re-initializations anyway). My guess is that if someone would use the GAIL algorithm for real problems (e.g, robotic task), they would significantly reduce the stochasticity of the behavior policy, which would make it virtually similar in term of data efficiency to the proposed method.\nPros:\n- A new GAIL formulation for saving on interaction data. 
\nCons:\n- Incremental improvement over GAIL\n- Experiments only on simulated toy problems \n- No theoretical guarantees for the state screening function (SSF) method", "The paper lists 5 previous very recent papers that combine IRL, adversarial learning, and stochastic policies. The goal of this paper is to do the same thing but with deterministic policies as a way of decreasing the sample complexity. The approach is related to that used in the deterministic policy gradient work. Imitation learning results on the standard control problems appear very encouraging.\n\nDetailed comments:\n\n\"s with environment\" -> \"s with the environment\"?\n\n\"that IL algorithm\" -> \"that IL algorithms\".\n\n\"e to the real-world environments\" -> \"e to real-world environments\".\n\n\" two folds\" -> \" two fold\".\n\n\"adopting deterministic policy\" -> \"adopting a deterministic policy\".\n\n\"those appeared on the expert’s demonstrations\" -> \"those appearing in the expert’s demonstrations\".\n\n\"t tens of times less interactions\" -> \"t tens of times fewer interactions\".\n\nOk, I can't flag all of the examples of disfluency. The examples above come from just the abstract. The text of the paper seems even less well edited. I'd highly recommend getting some help proof reading the work.\n\n\"Thus, the noisy policy updates could frequently be performed in IL and make the learner’s policy poor. From this observation, we assume that preventing the noisy policy updates with states that are not typical of those appeared on the expert’s demonstrations benefits to the imitation.\": The justification for filtering is pretty weak. What is the statistical basis for doing so? Is it a form of a standard variance reduction approach? Is it a novel variance reduction approach? If so, is it more generally applicable?\n\nUnfortunately, the text in Figure 1 is too small. The smallest font size you should use is that of a footnote in the text. As such, it is very difficult to assess the results.\n\nAs best I can tell, the empirical results seem impressive and interesting.\n", "Thank you for your constructive comments and positive evaluations on our paper. We will clarify the role of SSF in the camera-ready version.\n\n> My interpretation is that the main original contribution of the paper (besides changing a stochastic policy for a deterministic one) is to integrate an automatic estimate of the density of the expert (probability of a state to be visited by the expert policy)\n\nThank you for clearly understanding the role of SSF.\n\n> Indeed, the deterministic policy is certainly helpful but it is tested in a deterministic continuous control task. So I'm not sure about how it generalizes to other tasks.\n\nThe expert's policy used in the experimetns is a stochastic one. Hence, the proposed method works not only on a deterministic continuous control tasks but also a stochastic one. We expect that it generalizes well to other tasks.\n", "Thank you for your constructive comments on our paper. We will fix typos and clarify the role of SSF in the camera-ready version.\n\n> The authors also briefly discuss the problem of the little overlap between the teacher's covered state space and the learner's. A state screening function (SSF) method is proposed to drive the learner to remain in areas of the state space that have been covered by the teacher.\n\nThe main purpose of introducing a SSF is not what you mentioned. 
Since we use the Jacobian of reward function to derive PG as opposed to prior IL works, the Jacobian is supposed to have information about how to get close to the expert's behavior for the learner. However, in the IRL objective (4), which is general in (max-margin) IRL literature, the reward function could know how the expert acts just only on the states appearing in the demonstration. In other words, the Jacobian could have information about how to get close to the expert's behavior just only on states appearing in the demonstration. What we claimed in Sec.3.2 is that the Jacobian for states which does not appear in the demonstration is just garbage for the learner since it does not give any information about how to get close to the expert. The main purpose of introducing the SSF is to sweep the garbage as much as possible.\n\n> However, the reviewer finds the main contribution rather incremental in its nature. Replacing a stochastic policy with a deterministic one does not change much the original GAIL algorithm, since the adoption of stochastic policies is often used just to have differentiable parameterized policies, and if the action space is continuous, then there is not much need for it (except for exploration, which is done here through re-initializations anyway)\n\nFigure.1 shows worse performance of Ours \\setminus SSF which just replace a stochastic policy with a deterministic one. If Ours \\setminus SSF worked well, we agree with your opinion that the main contribution is just incremental. However, introducing the SSF besides replacing a stochastic policy with a deterministic one is required to imitate the expert's behavior. Hence, we don't agree that the proposed method is just incremental. \n\n> My guess is that if someone would use the GAIL algorithm for real problems (e.g, robotic task), they would reduce the stochasticity of the behavior policy, which would make it virtually similar in term of data efficiency to the proposed method.\n\nBecause the GAIL algorithm is an on-policy algorithm, it essentially requires much interactions for an update and never uses behavior policy. Hence, it would not make it virtually similar in term of data efficiency to the proposed method which is off-policy algorithm.\n\n> Cons:\n> - Incremental improvement over GAIL\n\nAs mentioned above, we think that the proposed method is not just incremental improvement over GAIL. \n\n> - Experiments only on simulated toy problems \n\nWe wonder why you thought the Mujoco tasks are just \"toy\" problems. Even though those tasks are not real-world problems, they have not been solved until GAIL has been proposed. In addition, the variants of GAIL (Baram et al., 2017; Wang et al., 2017; Hausman et al.) also evaluated their performance using those tasks. Hence, we think that those tasks are enough difficult to solve and can be used as a well-suited benchmark to evaluate whether the proposed method is applicable to the real-world problems in comparison with other IL algorithms.\n", "Thank you for your constructive comments on our paper. We will fix typos and Figure.1. in the camera-ready version. \n\n> The justification for filtering is pretty weak. \n\nSince Figure.1 shows worse performance of Ours \\setminus SSF which does not filter states appearing in the demonstration, we think that the justification is enough.\n\n> What is the statistical basis for doing so?\n\nIntroducing a SSF is a kind of heuristic method, but it works as mentioned above.\n\n> Is it a form of a standard variance reduction approach? 
Is it a novel variance reduction approach? If so, is it more generally applicable?\n\nIntroducing the SSF itself is not a variance reduction approach. We would say that direct use of the Joacobian of (single-step) reward function rather than that of Q-function to derive the PG (8) might reduce the variance because the range of outputs are bounded.\nSince we use the Jacobian of reward function to derive PG as opposed to prior IL works, the Jacobian is supposed to have information about how to get close to the expert's behavior for the learner. However, in the IRL objective (4), which is general in (max-margin) IRL literature, the reward function could know how the expert acts just only on the states appearing in the demonstration. In other words, the Jacobian could have the information about how to get close to the expert's behavior just only on states appearing in the demonstration. What we claimed in Sec.3.2 is that the Jacobian for states which does not appear in the demonstration is just garbage for the learner since it does not give any information about how to get close to the expert. The main purpose of introducing the SSF is to sweep the garbage as much as possible. The prior IL works have never mentioned about the garbage." ]
[ 6, 5, 5, -1, -1, -1 ]
[ 4, 4, 3, -1, -1, -1 ]
[ "iclr_2018_rJ3fy0k0Z", "iclr_2018_rJ3fy0k0Z", "iclr_2018_rJ3fy0k0Z", "S1_na_OlG", "B1nuCculG", "S1tVQ5Kef" ]
iclr_2018_B1mSWUxR-
Softmax Q-Distribution Estimation for Structured Prediction: A Theoretical Interpretation for RAML
Reward augmented maximum likelihood (RAML), a simple and effective learning framework to directly optimize towards the reward function in structured prediction tasks, has led to a number of impressive empirical successes. RAML incorporates task-specific reward by performing maximum-likelihood updates on candidate outputs sampled according to an exponentiated payoff distribution, which gives higher probabilities to candidates that are close to the reference output. While RAML is notable for its simplicity, efficiency, and its impressive empirical successes, the theoretical properties of RAML, especially the behavior of the exponentiated payoff distribution, have not been examined thoroughly. In this work, we introduce softmax Q-distribution estimation, a novel theoretical interpretation of RAML, which reveals the relation between RAML and Bayesian decision theory. The softmax Q-distribution can be regarded as a smooth approximation of the Bayes decision boundary, and the Bayes decision rule is achieved by decoding with this Q-distribution. We further show that RAML is equivalent to approximately estimating the softmax Q-distribution, with the temperature τ controlling approximation error. We perform two experiments, one on synthetic data of multi-class classification and one on real data of image captioning, to demonstrate the relationship between RAML and the proposed softmax Q-distribution estimation, verifying our theoretical analysis. Additional experiments on three structured prediction tasks with rewards defined on sequential (named entity recognition), tree-based (dependency parsing) and irregular (machine translation) structures show notable improvements over maximum likelihood baselines.
rejected-papers
There are some interesting ideas discussed in the paper, but the reviewers expressed difficulty understanding the motivation and the theoretical results. The experiments do not seem convincing in showing that SQDML achieves significant gains. Overall, the paper needs either stronger and clearer theoretical results, or more convincing experiments for publication at ICLR.
train
[ "S1B8Oq7ez", "BJNeA-cgG", "B16z4vAgG", "H12WLWmXM", "r10sQDqZz", "rJQYzP9bz", "H1S5eP9Zf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "This paper dives deeper into understand reward augmented maximum likelihood training. Overall, I feel that the paper is hard to understand and that it would benefit from more clarity, e.g., section 3.3 states that decoding from the softmax q-distribution is similar to the Bayes decision rule. Please elaborate on this.\n\nDid you compare to minimum bayes risk decoding which chooses the output with the lowest expected risk amongst a set of candidates?\n\nSection 4.2.2 says that Ranzato et al. and Bahdanau et al. require sampling from the model distribution. However, the methods analyzed in this paper also require sampling (cf. Appendix D.2.4 where you mention a sample size of 10). Please explain the difference.", "This paper interprets reward augmented maximum likelihood followed by decoding with the most likely output as an approximation to the Bayes decision rule.\n\nI have a few questions on the motivation and the results.\n- In the section \"Open Problems in RAML\", both (i) and (ii) are based on the statement that the globally optimal solution of RAML is the exponential payoff distribution q. This is not true. The globally optimal solution is related to both the underlying data distribution P and q, and not the same as q. It is given by q'(y | x, \\tau) = \\sum_{y'} P(y' | x) q(y | y', \\tau).\n- Both Theorem 1 and Theorem 2 do not directly justify that RAML has similar reward as the Bayes decision rule. Can anything be said about this? Are the KL divergence small enough to guarantee similar predictive rewards?\n- In Theorem 2, when does the exponential tail bound assumption hold?\n- In Table 1, the differences between RAML and SQDML do not seem to support the claim that SQDML is better than RAML. Are the differences actually significant? Are the differences between SQDML/RAML and ML significant? In addition, how should \\tau be chosen in these experiments?\n", "The authors claim three contributions in this paper. (1) They introduce the framework of softmax Q-distribution estimation, through which they are able to interpret the role the payoff distribution plays in RAML. Specifically, the softmax Q-distribution serves as a smooth approximation to the Bayes decision boundary. The RAML approximately estimates the softmax Q-distribution, and thus approximates the Bayes decision rule. (2) Algorithmically, they further propose softmax Q-distribution maximum likelihood (SQDML) which improves RAML by achieving the exact Bayes decision boundary asymptotically. (3) Through one experiment using synthetic data on multi-class classification and one using real data on image captioning, they show that SQDML is consistently as good or better than RAML on the task-specific metrics that is desired to optimize. \n\nI found the first contribution is sound, and it reasonably explains why RAML achieves better performance when measured by a specific metric. Given a reward function, one can define the Bayes decision rule. The softmax Q-distribution (Eqn. 12) is defined to be the softmax approximation of the deterministic Bayes rule. The authors show that the RAML can be explained by moving the expectation out of the nonlinear function and replacing it with empirical expectation (Eqn. 17). Of course, the moving-out is biased but the replacing is unbiased. \n\nThe second contribution is partially valid, although I doubt how much improvement one can get from SQDML. The authors define the empirical Q-distribution by replacing the expectation in Eqn. 12 with empirical expectation (Eqn. 15). 
In fact, this step can result in biased estimation because the replacement is inside the nonlinear function. When x is repeated sufficiently in the data, this bias is small and improvement can be observed, like in the synthetic data example. However, when x is not repeated frequently, both RAML and SQDML are biased. Experiment in section 4.1.2 do not validate significant improvement, either.\n\nThe numerical results are relatively weak. The synthetic experiment verifies the reward-maximizing property of RAML and SQDML. However, from Figure 2, we can see that the result is quite sensitive to the temperature \\tau. Is there any guidelines to choose \\tau? For experiments in Section 4.2, all of them are to show the effectiveness of RAML, which are not very relevant to this paper. These experiment results show very small improvement compared to the ML baselines (see Table 2,3 and 5). These results are also lower than the state of the art performance. \n\nA few questions:\n(1). The author may want to check whether (8) can be called a Bayes decision rule. This is a direct result from definition of conditional probability. No Bayesian elements, like prior or likelihood appears here.\n(2). In the implementation of SQDML, one can sample from (15) without exactly computing the summation in the denominator. Compared with the n-gram replacement used in the paper, which one is better?\n(3). The authors may want to write Eqn. 17 in the same conditional form of Eqn. 12 and Eqn. 14. This will make the comparison much more clear.\n(4). What is Theorem 2 trying to convey? Although \\tau goes to 0, there is still a gap between Q and Q'. This seems to suggest that for small \\tau, Q' is not a good approximation of Q. Are the assumptions in Theorem 2 reasonable? There are several typos in the proof of Theorem 2. \n(5). In section 4.2.2, the authors write \"the rewards we directly optimized in training (token-level accuracy for NER and UAS for dependency parsing) are more stable w.r.t. τ than the evaluation metrics (F1 in NER), illustrating that in practice, choosing a training reward that correlates well with the evaluation metric is important\". Could you explain it in more details?\n\n", "We revised the paragraph of \"Open Problems of RAML\" to make the problems of RAML more clear.\nSpecifically, we merged the first and second problem to a single one.\nWe also added a \"discussion\" paragraph before introducing the open issues of RAML.", "We thank for your detailed comments and the appreciation of our proposed theoretical interpretation of RAML.\n\nLet us first point out that our main contribution in this work is to provide a theoretical interpretation of RAML. Our experiments are designed to verify our theoretical claims. An exhaustive comparison with current state-of-the-art are however outside the scope of this work (due also to the space limit).\n\nFor your concerns about the numerical results of our experiments, from Figure 1 we see that SQDML is quite stable for different values of tau in the region from 0.1 to 3.0. The results in Figure 2 fluctuate when \\tau is pretty large, which verifies our discussion in section 3.5 that the softmax Q-distribution becomes closer to the uniform distribution when \\tau becomes larger, making it less expressive for prediction. In practice, we found that choosing \\tau in the region of (0.5, 1.5) results pretty good performance. 
One may perform fine-tuning of \\tau on the validation sets.\n\nThe experiments in section 4.2, as discussed in the first paragraph of this section, is to further confirm the empirical success of RAML (and SQDML) over ML. The models of NER and dependency parsing are classical feature-based statistical models, whose performance are not as good as state-of-the-art neural network models. But our experiments on machine translation are based on the state-of-the-art attentional neural sequence-to-sequence model, and we obtained better performance than Bahdanau, et al. (2017) which incorporated actor-critic algorithm with sequence-to-sequence, demonstrating the effectiveness of RAML. Moreover, from the results in Table 6 in this section, we observed that RAML outperforms ML on the directly optimized metrics while ML gets better results under exact match accuracy. This is in line with our theoretical analysis.\n\nFor your questions about details:\n(1) Bayes decision rule is a commonly used terminology in statistical decision theory.\n(2) You said that one can sample from (15) without exactly computing the summation in the denominator. We do not fully understand it. Did you say importance sampling? We have experimental results of importance sampling for machine translation, which is a little bit worse than the n-gram replacement used in this paper. Please see Appendix D 2.4 for details.\n(3) We sincerely appreciate your comment for improving the notation and will revise it in the next version.\n(4) Theorem 1 proved that when \\tau becomes larger, the approximation error tends to be zero. At the same time, however, the softmax Q-distribution becomes closer to the uniform distribution, providing less information for prediction. Thus, in practice, we cannot choose a large \\tau in order to achieve small approximation error. Theorem 2 is trying to say that under some assumptions, small approximation error is also able to be achieved even with small \\tau. The exponential tail bound assumption in Theorem 2 holds when the conditional distribution P(Y|X=x) is close to a deterministic distribution w.r.t a 'ground-truth' y*. \n(5) In the experiments of NER, to efficiently compute the objective function, we use token-level accuracy as the reward which is not exactly the same as the evaluation metric F1 score. For dependency parsing, we use the UAS as the reward which is just the official evaluation metric for this task. According to the results in Table 2 and 3, RAML is quite stable on token-level accuracy and UAS w.r.t \\tau, but less stable on F1 score.", "We thank for your time and insightful comments.\n\nFor your first quesiton about our statment of the globally optimal solution of RAML, we concede that the globally optimal solution of RAML is P_{RAML}{y|X=x) = E_{P(Y|X=x)}[q(y|Y)], which is just the Q' distribution in Eq (16) in section 3.4. \nTo post the questions in the paragraph \"Open Problems in RAML\", we want to point out that from the original form of RAML in Eq (4) and (6), it is not straight-forward to understand the behavior of the pay-off q distribution in Eq (5), neither the globally optimal solution of RAML. Moreover, as pointed out in quesiton (iii), there is no rigorous theorectical evidence showing that RAML provides a better prediction function. 
From our Softmax Q-distribution estimation framework in section 3, we derived that the globally optimal solution of RAML is actually Q' in Eq (16), and linked the Q' distribution with our softmax Q distribution in Eq (12) by providing two KL-based bounds in Theorem 1 and 2, demonstrating that Q' is approximating to Q. We really appreciate your comment about the confusion of this part and will definitely revise this paragraph to make the problems of RAML more clear.\n\nFor your question about Theorem 1 and 2, both of them are trying to characterizing the approximating error from Q' to Q by upper-bounding the KL divergence between them. Since the Q and Q' distributions are the \"target\" distributions that SQDML and RAML are learning, respectively, and the prediction functions of SQDML and RAML are directly generated by decoding from these two distributions, we can say that SQDML and RAML should have similar predictive rewards if Q and Q' are close. And decoding from Q distribution delivers Bayes decision rule, thus Theorem 1 and 2 guarantee that RAML would have similar predictive rewards when the assumptions hold. Further, our experiment on syntactic data empirically demonstrate that RAML is able to achieve similar predictive rewards with SQDML which asymptotically achieves Bayes decision rule.\n\nThe exponential tail bound assumption holds when the conditional distribution P(Y|X=x) is close to a deterministic distribution w.r.t a 'ground-truth' y*. \n\nResults in Table 1 illustrate that SQDML achieves better performance towards the oprimized metric. We conceded in the paper that the improvements of SQDML over RAML are not significant, and gave a possible explanation that the reference captions for each image are largely different, making it highly non-trivial for the model to predict a \"consensus\" caption that agrees with multiple references. Examples are given in Figure 3. \n\nThe selection of \\tau in practice depends on the task and the properties of reward funciton r(y, y*). In our experiments on different NLP structured prediction tasks, we found that choosing \\tau in the region of (0.5, 1.5) results pretty good performance. One may perform fine-tuning of \\tau on the validation sets.", "We thank for your time and insightful comments.\n\nIn section 3.3 we claim that decoding from the softmax Q-distribution delivers the Bayes decision rule, because via decoding from the softmax Q-distribution in Eq (12), we directly get the prediction function h(x) = \\argmax_{y \\in Y} Q(y|X=x; \\tau) = \\argmax_{y \\in Y} E_{P(Y|X=x)}[r(y, Y)], which is just the Bayes decision rule. Eq (13) gives the details of the derivation.\n\nWe discussed the relation and difference between minimum risk decoding and our method. Both RAML and SQDML are trying to learn distributions, decoding from which (approximately) provide the Bayes decision rule, while minimum risk decoding, on the other hand, attempts to (approximately) estimate the Bayes decision rule directly by compution the expectation w.r.t the learned distribution.\nFor experiments, we did not compare our methods with it.\n\nThe main difference of sampling between SQDML (or RAML) and RL-based approaches (Ranzato et al, 2016, Bahdanau et al, 2017) is that RL-based approaches require to sample from the learned model distribution which keeps updating during the training procedure. Thus, these approaches suffers from high variance, and usually require a pre-trained ML baseline to initialize model. 
SQDML (or RAML), however, does sampling from a fixed distribution (the pay-off distribution for RAML or the empirical softmax Q-distribution for SQDML), making the training procedure more stable and requring no pre-training initialization." ]
[ 5, 5, 6, -1, -1, -1, -1 ]
[ 2, 4, 3, -1, -1, -1, -1 ]
[ "iclr_2018_B1mSWUxR-", "iclr_2018_B1mSWUxR-", "iclr_2018_B1mSWUxR-", "iclr_2018_B1mSWUxR-", "B16z4vAgG", "BJNeA-cgG", "S1B8Oq7ez" ]
iclr_2018_rk3b2qxCW
Policy Gradient For Multidimensional Action Spaces: Action Sampling and Entropy Bonus
In recent years deep reinforcement learning has been shown to be adept at solving sequential decision processes with high-dimensional state spaces such as in the Atari games. Many reinforcement learning problems, however, involve high-dimensional discrete action spaces as well as high-dimensional state spaces. In this paper, we develop a novel policy gradient methodology for the case of large multidimensional discrete action spaces. We propose two approaches for creating parameterized policies: LSTM parameterization and a Modified MDP (MMDP) giving rise to Feed-Forward Network (FFN) parameterization. Both of these approaches provide expressive models to which backpropagation can be applied for training. We then consider entropy bonus, which is typically added to the reward function to enhance exploration. In the case of high-dimensional action spaces, calculating the entropy and the gradient of the entropy requires enumerating all the actions in the action space and running forward and backpropagation for each action, which may be computationally infeasible. We develop several novel unbiased estimators for the entropy bonus and its gradient. Finally, we test our algorithms on two environments: a multi-hunter multi-rabbit grid game and a multi-agent multi-arm bandit problem.
rejected-papers
The paper has some interesting ideas around auto-regressive policies and estimating their entropy for exploration. The use of autoregressive policies in RL is not particularly novel, and the estimate of entropy for such models is straightforward. Finally, the experiments focus on very simple tasks.
train
[ "HJs1WiFlM", "ry9X12Fgz", "Bk4yQ1Alz", "H1fJ6l2QM", "B1K3hg3mG", "S1kwnx2Qz", "rJWe3ehmz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "In this paper, the authors suggest introducing dependencies between actions in RL settings with multi-dimensional action spaces by way of two mechanisms (using an RNN and making partial action specification as part of the state); they then introduce entropy pseudo-rewards whose maximization corresponding to joint entropy maximization.\n\nIn general, the multidimensional action methods either seem incremental or non novel to me. The combined use of the chain rule and RNNs (LSTM or not) to induce correlations in multi-dimensional outputs is well know (sequence-to-sequence networks, pixelRNN, etc.) and the extension to RL presents no difficulties, if it is not already known. Note very related work in https://arxiv.org/pdf/1607.07086.pdf and https://www.media.mit.edu/projects/improving-rnn-sequence-generation-with-rl/overview/ .\n\nAs for the MMDP technique, I believe it is folklore (it can for instance be found as a problem in a problem set - http://stellar.mit.edu/S/course/2/sp04/2.997/courseMaterial/topics/topic2/readings/problemset4/problemset4.pdf). Note that both approaches could be combined; the first idea is essentially a policy method, the second, a value method. The second method could be used to provide stronger, partial action-conditional baselines (or even critics) to the first method.\n\nThe entropy derivation are more interesting - and the smoothed entropy technique is as far as I know, novel. The experiments are well done, though on simple toy environments.\n\nMinor:\n- In section 3.2, one should in principle tweak the discount factor of the modified MDP to recover behavior identical to the original one with large action space. This should be noted (alternatively, the discount between non-environment transitions should be set to 1).\n\n- From the description at the end of 3.2, and figure 1.b, it seems actions fed to the MMDP feed-forward network are not one-\nhot; I thought this was pretty surprising as it would almost certainly affect performance? Note also that the collection of feed-forward network which collectively output the joint vector can be thought of as an RNN with non-learned state transition.\n\n- Since the function optimized can be written as an expectation of reward+pseudo-reward, the proof of theorem 4 can be simplified by using generic score-function optimization arguments (see Stochastic Computation Graphs, Schulman et al).\n", "The authors present two autoregressive models for sampling action probabilities from a factorized discrete action space. On a multi-agent gridworld task and a multi-agent multi-armed bandit task, the proposed method seems to benefit from their lower-variance entropy estimator for exploration bonus. A few key citations were missing - notably the LSTM model they propose is a clear instance of an autoregressive density estimator, as in PixelCNN, WaveNet and other recently popular deep architectures. In that context, this work can be viewed as applying deep autoregressive density estimators to policy gradient methods. At least one of those papers ought to be cited. It also seems like a simple, obvious baseline is missing from their experiments - simply independently outputting D independent softmaxes from the policy network. Without that baseline it's not clear that any actual benefit is gained by modeling the joint distribution between actions, especially since the optimal policy for an MDP is provably deterministic anyway. 
The method could even be made to capture dependencies between different actions by adding a latent probabilistic layer in the middle of the policy network, inducing marginal dependencies between different actions. A direct comparison against one of the related methods in the discussion section would help better contextualize the paper as well. A final point on clarity of presentation - in keeping with the convention in the field, the readability of the tables could be improved by putting the top-performing models in bold, and Table 2 should almost certainly be replaced by a boxplot.", "Clarity and quality:\n\nThe paper is well written and the ideas are motivated clearly both in writing and with block diagram panels. Also the fact that the paper considers different variants of the idea adds to the quality of the paper. May main concern is with the quality of results which is limited to some toy/synthetic problems. Also the comparison with the previous work is missing.The paper would benefit from a more in depth numerical analysis of this approach both by applying it to more challenging/standard domains such as Mujoco and also by comparing the results with prior approaches such as A3C, DDPG and TRPO.\n\nOriginality, novelty and Significance:\n\nThe paper claims that the approach is novel in the context of policy gradient and Deep RL. I am not sure this is entirely the case since there is a recent work from Google Brain (https://arxiv.org/pdf/1705.05035.pdf ) which consider almost the identical idea with the same variation in the context of DQN and policy gradient (they call their policy gradient approach Prob SDQN). The Brain paper also makes a much more convincing case with their numerical analysis, applied to more challenging domains such as control suite. The paper under review should acknowledge this prior work and discuss the similarities and the differences. Also since the main idea and the algorithms are quiet similar to the Brain paper I believe the novelty of this work is at best marginal.", "Thank you for your helpful pointers to the relevant LSTM and MMDP literature. In light of your review, we have rewritten our paper to focus on the novel entropy estimates and properly acknowledge relevant previous works. We believe this new emphasis has led to a substantially improved paper. \n\nAs you requested, in the revised paper, we noted that for Modified MDP, the discount between non-environment transitions should be set to 1 to match the original MDP (which is what we did in our experiments). \n\nAs you requested, we also tried representing the actions fed to the Modified MDP feed-forward network as one-hot vectors. We noted in Appendix D that the one-hot vectors did not bring substantial improvement. \n\nFinally, we took a close look at the Schulman et al paper for proving our result for the smoothed gradient entropy estimator. Although with this approach the \"proof\" would just be a couple lines, to justify using the proof rigorously would require a lengthy explanation on how to fit our model into the model of Schulman et al. We have, however, indicated that the result could be alternatively proven using Theorem 1 of Schulman et al and provided a reference.", "In our revision, we acknowledge that the LSTM policy parameterization is not entirely new and can, in fact, be seen as an adaption of auto-regressive techniques in supervised sequence modeling to reinforcement learning (Sections 4.3 and 6). 
We have reorganized our paper to focus on the novel entropy estimates.\n\nAs you requested, we added experimental results for the baseline for which the policy is a FFN with multiple heads. We refer to this as Independent Sampling (IS). We ran experiments for IS with and without (estimates of) the entropy bonus for both the rabbit and bandit environments. \n\nYou also suggested that we compare our results with one of the approaches in the literature. For this, we choose the paper “Learning Multiagent Communication with Backpropagation”. Please find the results in Section 5. As you requested, we also put the top-performing models in bold and turned Table 2 into a boxplot (Table 2 is now Figure 2).\n", "We have revised our paper to acknowledge the Google Brain paper in the formulation of the MMDP and the LSTM policy parameterizations (Sections 4 and 6). We have also acknowledged other relevant previous work on autoregressive models for policy gradient.\n\nOur paper differs from the Google Brain paper with regards to exploration strategies. Whereas the Brain paper injects noise into the action space to encourage exploration, the focus of our paper is to develop novel unbiased estimates for the entropy bonus and its gradient. \n\nWe put great effort into trying to apply our approach to the Mujoco domain. However, we faced technical challenges and thus could not complete it in time. For example, the OpenAI Mujoco interface, which uses Mujoco 1.3.1, is incompatible with our workstations, which are Macs with NVMe disks. For more info on the issue, please have a look at the links below:\n\nhttps://github.com/openai/mujoco-py/issues/36\nhttp://www.mujoco.org/forum/index.php?threads/error-could-not-open-disk.3441/\n\nWe also had issues compiling Mujoco and its dependencies on our HPC, such as the Mesa 3D Graphics Library. Although we were not able to run experiments in the more complex Mujoco environments, we believe that the simplicity of the environments used in our paper help to highlight critical issues related to entropy bonus. \n\nThank you for your pointer to A3C, DDPG and TRPO. Our entropy estimators are orthogonal to these approaches and thus they potentially can be combined with them. We may explore the benefits of our entropy estimates for these approaches in future work.\n", "We would like to thank all three reviewers for your detailed reviews and useful insights. Your comments have led to a greatly improved revised paper, without substantially changing the content of the original paper. \n\nOne consensus among the reviewers seems to be that although the material on entropy estimates is novel and interesting, the material on autoregressive models (MMDP and LSTM) is less novel since this material was already known in the folklore or presented in recent papers. You also requested that in addition to the autoregressive models, we examine other baseline policies.\n\nResponding to these concerns, we have re-organized the material to place more emphasis on the entropy estimates and less emphasis on the autoregressive policy models. In doing so, we have cited earlier work on using MMDP and LSTM for policy gradient, including the recent Google Brain paper. We also made a major effort to generate results for two baseline policies: (1) a single feed-forward network with multiple heads; and (2) CommNet. For both of these baselines, we examined our various entropy estimates. \n\nPlease note that with the exception of new experimental results, there is no new material presented in our revised paper. 
However, the paper has undergone a major reorganization. The list of changes is below:\n\n- The title of the paper is changed to reflect the new emphasis on the entropy estimates.\n\n- The abstract and introduction are rewritten to focus on the entropy estimates and removed the claim that the autoregressive policies are novel.\n\n- The entropy estimates section is moved to before the policy parameterization section.\n\n- The smoothed mode estimator is moved from the appendix to the entropy estimates section (now subsection 3.3).\n\n- In the policy parameterization section, a new subsection 4.1 is added to explain the new baseline policy (1) above.\n\n- The MMDP subsection is shortened to present the minimal explanation and the details are moved to Appendix D. The MMDP subsection acknowledges prior work by the Google Brain paper.\n\n- The LSTM subsection acknowledges relevant prior work.\n\n- In the experimental results section, we introduce CommNet and added results for baseline policies (1), (2) for the hunter-rabbit environment and added results for baseline policy (1) for multi-arm bandits.\n\n- The hunter-rabbit result analysis is rewritten to place more emphasis on the role of the entropy estimates across different models for the policy.\n\n- In Table 1, best performing models are bolded and horizontal lines are added to improve readability.\n\n- Table 2 is turned into a boxplot and is now Figure 2.\n\n- In Related Work section, we first discussed relevant work with regards to the entropy estimates before the policy parameterizations.\n\n- The Conclusion is shortened so the paper stays within the recommended 8-page length.\n\n- The hyperparameters for the two baseline policies are added to Appendix A.\n\n- At the end of the proof of theorem 4 in Appendix C, we note that the theorem can also be proved by material introduced by Stochastic Computational Graph of Schulman et al and provided a reference.\n\n- Appendix D is added to explain the details of MMDP. We noted that the discount between non-environment transitions should be set to 1 to match the original MDP and that we tried representing the actions fed to the Modified MDP feed-forward network as one-hot vectors.\n\n- Appendix E is added to explain the state representation for baseline policy (2).\n\n- Other minor changes to improve readability.\n" ]
[ 6, 5, 5, -1, -1, -1, -1 ]
[ 3, 4, 5, -1, -1, -1, -1 ]
[ "iclr_2018_rk3b2qxCW", "iclr_2018_rk3b2qxCW", "iclr_2018_rk3b2qxCW", "HJs1WiFlM", "ry9X12Fgz", "Bk4yQ1Alz", "iclr_2018_rk3b2qxCW" ]
iclr_2018_SyPMT6gAb
Variance Regularized Counterfactual Risk Minimization via Variational Divergence Minimization
Off-policy learning, the task of evaluating and improving policies using historic data collected from a logging policy, is important because on-policy evaluation is usually expensive and has adverse impacts. One of the major challenges of off-policy learning is to derive counterfactual estimators that also have low variance and thus low generalization error. In this work, inspired by learning bounds for importance sampling problems, we present a new counterfactual learning principle for off-policy learning with bandit feedbacks. Our method regularizes the generalization error by minimizing the distribution divergence between the logging policy and the new policy, and removes the need for iterating through all training samples to compute sample variance regularization in prior work. With neural network policies, our end-to-end training algorithms using variational divergence minimization showed significant improvement over conventional baseline algorithms and are also consistent with our theoretical results.
rejected-papers
The reviewers agree that the paper studies an interesting problem with an interesting approach. The reviewers raised some concerns regarding the theoretical and empirical results. The authors have made changes to the paper, but given the theoretical nature of the paper and the extent of changes, another review is needed before publication.
train
[ "H1rSYwTQG", "BJMTTZjNM", "r18QCeLNz", "r1Sed5uez", "BJ3VB6_xG", "r1gHCrFlM", "Bk-ucO6Xf", "HJvH6w67f", "HJkIbD6mG" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "Dear reviewer,\n\nThanks a lot for the inspiring comments and below are our point-by-point correspondence and hope the revision can address these concerns and make the paper more solid.\n\n- (Citations formatting) We have fixed the missing parenthesis for end-of-sentence citations. We apologize for the inconvenience because mistaking the hyper-ref box for actual parenthesis.\n\n- ( Z and loss L(z)) You are absolutely right the loss needs to be bounded in [0,1] for the theorem from Cortes et al. to be valid. In this work, because we didn't perform loss scaling to [0,1] but instead assume the loss [0,L]. We revised our theorems to reflect such bounded loss condition. \n\nAs for experiments, since the loss is only scaling factor, the minimization problem and its solution is essentially the same ( \\sum loss vs. \\sum loss/L). \n\n- (typos for +1 instead of -1) fixed\n\n- (why lower bound works) We apologize for the confusion here. \n\nthe lower bound exists because we are restricting the family of discriminators from all functions to all neural networks. This leads to the lower bound, however, since neural networks are essentially universal function approximators [1], the equality condition can be satisfied here in theory. \n\nFor the first and second inequality, the first one is an application of Fenchel duality and we also found the equality condition can be satisfied here. The swapping of sup T and E_x works because T is actually only a function of y so can be left out for integration over x.\n\nWe have updated our proofs and discussion and hope we can make things clearer.\n\n- (loop condition check) We use the estimator of divergence function obtained from empirical distributions as an approximation and check the value against the threshold.\n\nFurthermore, we have also established through proof that, the estimator using empirical distribution is a consistent estimator of the true divergence. (Sec. 4.1 proposition 1)\n\n- (typos) fixed\n\n- (baselines) why logging policy is CRF instead of NN? As our latter experiment shows, as the logging policy gets better, it will be more difficult to improve upon the logging policy, esp. IPS POEM which uses linear CRF as policy. so we only used linear CRF as the logging policy.\n\n(EDIT 1:50PM, 1/5: We just realized we have misunderstood your intent was not to suggest using NN as logging policy but show the performance of NN trained with supervised methods. We are really sorry but it's approaching the rebuttal deadline, we hope to provide such statistics asap)\n\n(EDIT : We just found out it is still possible to upload a modified manuscript, so we went ahead to upload an updated table. We used the exact architecture as of the NN policy with bandit training. The NN learned with supervised training has similar performance compared to CRF, and we think it might result from the over-fitting, as indicated by the high EXP loss)\n\n- (picking regularization hyper-parameter) This is indeed a very difficult question, as with other regularized ERM techniques. 
\n\nAs our Theorem 2 suggests, \\lambda is \\sqrt{L^2 log(1/\\eta) / N}, where L is the bound of the loss, \\eta is the probability of the bound to hold, and N is the number of samples, this sheds some light to how the \\lambda controls the bias-variance trade-off, where true risk < ERM + \\sqrt{(divergence-1)/N} + O(1).\n\nIn practice however, we believe the best approach is still start with the intuition from the theorem and play with cross validation.\n\n- ( \" it's unclear that such a bound can be tractably guessed \") We have provided a full proof of the bound, which is essentially an application of Bernstein inequality and Theorem 1 of our paper. But please let us know if you feel anything is unclear.\n\nAgain, we thanks for all the great comments you made to help us improve the paper and we really appreciate it.\n\nBest,\nAuthors\n\n[1] Hornik, Kurt, Maxwell Stinchcombe, and Halbert White. \"Multilayer feedforward networks are universal approximators.\" Neural networks 2.5 (1989): 359-366.", "Dear Reviewer,\n\nThanks a lot for the follow-up and clarification. You're correct about Fig. 4a) and 5a) and that the absolute loss doesn't change too much w.r.t to the stochasticity .\n\nIn the paper, when we were trying to discuss how logging policy affected the final performance, we used the relative performance (in our mind), performance(h)/performance(h0), to indicate how easy it is to improve upon h0. \n\nIn the stochastic experiment, based on the relative performance, we concluded it was harder and harder to improve upon h0, as h0 became more deterministic. However, the ability to improve upon h0 doesn't necessarily mean model performance, and our writings confused with two concepts. We've updated our writings to make a clear contrast.\n\nAnother point we were trying to make in the experiment is that NN policies without regularization performed slightly worse than the ones with regularization, which we think reflects the benefit of the proposed regularization. The trend is more observable in Fig. 4a) because of the scale of the Y-axis, so that we kept the figures the way they were.\n\nThanks again for pointing this out and the help in improving our manuscript!\n\nBest,\nAuthors", "Thank you for your reply and for updating the paper.\n\nRegarding the stochasticity, I might misunderstand Figure 5 a, so let me clarify my comment. Figure 5a seems to contain the values used to compute Figure 4a. If this is correct, then the decrease in improvement shown in Figure 4a is not due to a worse learned policy (due to stochasticity) but to the improvement of h0.\n\nIn other words, the learned policies in Figure 5a have approximately constant performance no matter the stochasticity of h0. To me, this suggests the stochasticity of h0 does not impact the performance of the learned policy. ", "In this paper the authors studied the problem of off-policy learning, in the bandit setting when a batch log of data generated by the baseline policy is given. Here they first summarize the surrogate objective functions derived by existing approaches such as importance sampling and variance regularization (Swaminathan et. al). Then they extend the results in Theorem 2 of the paper by Cortes et. al (which also uses the empirical Bernstein inequality by Maurer and Pontil), and derive a new surrogate objective function that involves the chi-square divergence. 
Furthermore, the authors also show that the lower bound of this objective function can be iteratively approximated by variational f-GAN techniques, which could potentially be more numerically stable and empirically has lower variance. \n\nIn general, I think the problem studied in this paper is very interesting, and the topic of counterfactual learning, especially policy optimization with the use of offline and off-policy log data, is important. However, I think the theoretical contribution in this paper on off-policy learning is quite incremental. Also the parts that involve f-GAN is still questionable to me.\n\nDetailed comments:\nIn these variance regularization formulations (for example the one proposed in this paper, or the one derived in Swaminathan's paper), \\lambda can be seen as a regularization parameter that trades-off bias and variance of the off-policy value estimator R(h) (for example the RHS of equation 6). To exactly calculate \\lambda either requires the size of the policy class (when the policy class is finite), or the complexity constants (which exists in C_1 and C_2 in equation 7, but it is not clearly defined in this paper). Then the main question is on how to choose \\lambda such that the surrogate objective function is reasonable. For example in the safety setting (off-policy policy learning with baseline performance guarantees, for example see the problem setting in the paper by P. Thomas 2015: High Confidence off-policy improvement), one always needs the upper-bound in 6) to hold. This makes the choice of \\lambda crucial and challenging. Unfortunately I don't see much discussions in this paper about choosing \\lambda, even in the context of bias-variance trade-offs. This makes me uncomfortable in believing that the results in experiments hold for other (reasonable) choices of \\lambda.\n\nThe contribution of this paper is of two-fold: 1) the authors extend the results from Cortes's paper to derive a new surrogate objective function, and 2) they show how this objective can be approximated by f-GAN techniques. The first contribution is rather incremental as it's just a direct application of Theorem 2 in Cortes's paper. Regarding the second contribution, I am a bit concerned about the derivations of Equation 9, especially the first inequality and the second equality. I see that the first inequality is potentially an application of the conjugate function inequality, but more details are needed (f^* is not even defined). For the second equality, it's unclear to me how one can swap the sup and the E_x operators. More explanations are definitely needed to show their mathematical correctness, especially when this part is a main contribution. Even if the derivations are right, the f-GAN surrogate objective is a lower bound of the surrogate objective function, while the surrogate function is an upper bound of the true objective function (which is inaccessible). How does one guarantees that the f-GAN surrogate objective is a reasonable one? \n\nNumerical comparisons between the proposed approach, and the approach from Swaminathan's paper are required to demonstrate the superiority of the proposed approach. Are there comparisons in performance between the approach from the original chi-square surrogate function and the one from the f-GAN objective (in order to showcase the need of using f-GAN) as well?\n\nMinor comments:\nIn experimental section, method POEM is not defined.\nThe paper is in an okay status. 
But there are several minor typos, for example \\hat{R}_{(} in page 3, and several typos in Algorithm 1 and Algorithm 2.\n\nIn general, I think this paper is studying an interesting topic, but the aforementioned issues make me feel that the paper's current status is still unsuitable for publication. ", "The paper proposes an interesting alternative to recent approaches to learning from logged bandit feedback, and validates their contribution in a reasonable experimental comparison. The clarity of writing can be improved (several typos in the manuscript, notation used before defining, missing words, poorly formatted citations, etc.).\nImplementing the approach using recent f-GANs is an interesting contribution and may spur follow-up work. There are several lingering concerns about the approach (detailed below) that detract from the quality of their contributions.\n\n[Major] In Lemma 1, L(z) is used before defining it. Crucially, additional assumptions on L(z) are necessary (e.g. |L(z)| <= 1 for all z. If not, a trivial counter-example is: set L(z) >> 1 for all z and Lemma 1 is violated). It is unclear how crucially this additional assumption is required in practice (their expts with Hamming losses clearly do not satisfy such an assumption).\n\n[Minor] Typo: Section 3.2, first equation; the integral equals D_f(...) + 1 (not -1).\n\n[Crucial!] Eqn10: Expected some justification on why it is fruitful to *lower-bound* the divergence term, which contributes to an *upper-bound* on the true risk.\n\n[Crucial!] Algorithm1: How is the condition of the while loop checked in a tractable manner?\n\n[Minor] Typos: Initilization -> Initialization, Varitional -> Variational\n\n[Major] Expected an additional \"baseline\" in the expts -- Supervised but with the neural net policy architecture (NN approaches outperforming Supervised on LYRL dataset was baffling before realizing that Supervised is implemented using a linear CRF).\n\n[Major] Is there any guidance for picking the new regularization hyper-parameters (or at least, a sensible range for them)?\n\n[Minor] The derived bounds depend on M, an a priori upper bound on the Renyi divergence between the logging policy and any new policy. It's unclear that such a bound can be tractably guessed (in contrast, prior work uses an upper bound on the importance weight -- which is simply 1/(Min action selection prob. by logging policy) ).", "This paper studies off-policy learning in the bandit setting. It develops a new learning objective where the empirical risk is regularized by the squared Chi-2 divergence between the new and old policy. This objective is motivated by a bound on the empirical risk, where this divergence appears. The authors propose to solve this objective by using generative adversarial networks for variational divergence minimization (f-GAN). The algorithm is then evaluated on settings derived from supervised learning tasks and compared to other algorithms.\n\nI find the paper well written and clear. I like that the proposed method is both supported by theory and empirical results. \n\nMinor point: I do not really agree with the discussion on the impact of the stochasticity of the logging policy in section 5.6. Based on Figure 5 a and b, it seems that the learned policy is performing equally well no matter how stochastic the logging policy is. So I find it a bit misleading to suggest that the learned policy are not being improved when the logging policy is more deterministic. 
Rather, the gap reduces between the two policies because the logging policy gets better. In order to better showcase this mechanism, perhaps you could try using a logging policy that does not favor the best action.\n\nquality and clarity:\n++ code made available\n+ well written and clear\n- The proof of theorem 2 is not in the paper nor appendix (the authors say it is similar to another work).\n\n\noriginality\n+ good extension of the work by Swaminathan & Joachims (2015a): derivation of an alternative objective and use of a deep networks\n. This paper leverages a set of diverse results\n\nsignificance\n- The proposed method can only be applied if propensity scores were recorded when the data was generated.\n- no test on a real setting\n++ The proposed method is supported both by theoretical insights and empirical experiments.\n+ empirical improvement with respect to previous methods\n\n\ndetails/typos:\n\n3.1, p3: R^(h) has an indexed parenthesis\n5.2; and we more details\n5.3: so that results more comparable", "As pointed out by reviewers, major concerns they had is the theoretical soundness of the paper, so we updated our derivation and proofs, and below is the summary of the major revision:\n\n1) by leveraging Bernstein inequality and our lemma of bounding the second moment of importance sampling weights, we reached a regularized erm formulation as\n True Risk < Empirical Risk + \\lambda \\sqrt{ divergence(new policy| logging policy)} /N + C/N\n, which highlights the bias-variance trade-offs.\n\n\\lambda is \\sqrt{L^2 log(1/\\eta) / N}, where L is the bound of the loss, \\eta is the probability of the bound to hold, and N is the number of samples.\n\n2) with intuition from constrained optimization, we proposed a modified regularization as:\n min Empirical Risk s.t. divergence(new policy| logging policy) \\leq \\rho\nand this is shown to be a good surrogate of the true loss\n\n3) for computing the divergence(new policy| logging policy), the major concern is that we reached a lower bound of the divergence with adversarial training, while we are minimizing an upper bound. \n\nwe apologize for the confusion here, the lower bound comes from restricting the family of discriminators from all functions to neural networks. Theoretically, because of the universal approximation ability of neural networks, the equality condition can be satisfied.\n\nwe also added a proof showing that the empirical estimator we are minimizing is a consistent estimator of the true divergence\n\n4) other minor re-organizations and small fixes such as typos", "Dear reviewer,\n\nThanks a lot for your valuable comments and we have revised our manuscript and hope it can address some of your concerns.\n\nBelow are our point-by-point correspondence.\n\n- (choosing \\lambda & bias-variance trade-offs) We agree with you this is indeed a very difficult question, as with other regularized ERM techniques. \n\nAs our Theorem 2 suggests, \\lambda is \\sqrt{L^2 log(1/\\eta) / N}, where L is the bound of the loss, \\eta is the probability of the bound to hold, and N is the number of samples, this sheds some light to how the \\lambda controls the bias-variance trade-off, where true risk < ERM + \\sqrt{(divergence-1)/N} + O(1). \n\nThe bound here is mainly controlled by the divergence between the new policy h and the empirical log policy h0, which implicitly includes the sample variance. \n\nIn ERM, we have E_h[loss(classifier, label)], where we have a distribution h, and classifier, both coming from two families. 
\nHere the loss is already fixed and provided by dataset, and the concentration bound exists because of the variance of importance sampling bounds of h/h0 (Theorem 2 and its proof), and doesn't concern the hypothesis space from losses/ classifiers.\n\nIn practice however, we believe the best approach is still start with the intuition from the theorem and play with cross validation.\n\n- (lower bound vs upper bound) We apologize for the confusion here. \n\nthe lower bound exists because we are restricting the family of discriminators from all functions to all neural networks. This leads to the lower bound, however, since neural networks are essentially universal function approximators [1], the equality condition can be satisfied here in theory. \n\nFor the first and second inequality, the first one is an application of Fenchel duality and we also found the equality condition can be satisfied here. The swapping of sup T and E_x works because T is actually only a function of y so can be left out for integration over x.\n\nWe have updated our proofs and discussion and hope we can make things clearer.\n\n- (numerical comparisons) The POEM algorithm is the method from the \nAdith et al paper, and we believe the results demonstrated the superior performance of our approach.\n\nThe chi-square surrogate function (NN-CoTraining: min loss + \\lambda divergence ) comparison with the f-gan objective (NN-SeparateTraining, min loss s.t. divergence \\leq \\rho) can be found in Sec. 6.5.\n\nFor separate training, chi-square and f-gan has no difference because it only has difference in the RHS constraint by a constant 1. For Co-training, we have found the two approaches don't work well, while separate training performs much better than co-training \n\nThanks a lot for your insightful comments and please let us know if you need further clarifications from us!\n\nBest,\nAuthor\n\n[1] Hornik, Kurt, Maxwell Stinchcombe, and Halbert White. \"Multilayer feedforward networks are universal approximators.\" Neural networks 2.5 (1989): 359-366.", "Dear reviewer,\n\nThanks a lot for the insightful comments! Below are our point-by-point correspondence to your reviews and hope they can address some of your concerns and clarify things a bit.\n\n- (Regarding the stochasticity vs performance). Thanks for the suggestion for using a different type of policy and we experimented with policies with random exploration, i.e., \\hat p = (1-\\epsilon) p + \\epsilon, where p is the NN policy output, and similar trends hold. \n\nAs you may have noticed in the figure, the y values, ratio = test performance / logging policy performance gets greater than 1 after the stochasticity is considerably small. This suggests that the learned policy performs worse than logging policy on average, and demonstrates our main point in this experiment: as long as there are enough stochasticity in the policy, it is possible to learn an improved policy, while it is really hard to learn from a very deterministic policy.\n\nWe have revised the paragraph and hopefully it won't create further confusion.\n\n- (The proof of theorem 2) We've added the proof in the appendix section. \n\n- (Need for propensity scores) Yes, the availability of propensity scores is crucial to the algorithm. Cortes et al. provided a way to learn propensity scores and showed similar generalization bound, and we think this will be a very interesting future work. 
But the scope of this paper limits our exploration in this direction.\n\n- (No test on a real setting) We agree this is one of the limitation of our algorithm, but deploying such algorithm on-line such as large-scale ads placement system, is something we don't have access right now, so we have to resort to simulation studies. \n\n- (typos) Fixed them and thanks for pointing out!\n\nWe really appreciate your positive feedbacks on our paper overall and please let us know if our response needs further explanation.\n\nThanks again for your effort in reviewing our paper!\n\nBest,\nAuthors" ]
[ -1, -1, -1, 4, 5, 7, -1, -1, -1 ]
[ -1, -1, -1, 4, 5, 3, -1, -1, -1 ]
[ "BJ3VB6_xG", "r18QCeLNz", "HJkIbD6mG", "iclr_2018_SyPMT6gAb", "iclr_2018_SyPMT6gAb", "iclr_2018_SyPMT6gAb", "iclr_2018_SyPMT6gAb", "r1Sed5uez", "r1gHCrFlM" ]
iclr_2018_By5ugjyCb
PACT: Parameterized Clipping Activation for Quantized Neural Networks
Deep learning algorithms achieve high classification accuracy at the expense of significant computation cost. To address this cost, a number of quantization schemes have been proposed - but most of these techniques focused on quantizing weights, which are relatively smaller in size compared to activations. This paper proposes a novel quantization scheme for activations during training - that enables neural networks to work well with ultra low precision weights and activations without any significant accuracy degradation. This technique, PArameterized Clipping acTivation (PACT), uses an activation clipping parameter α that is optimized during training to find the right quantization scale. PACT allows quantizing activations to arbitrary bit precisions, while achieving much better accuracy relative to published state-of-the-art quantization schemes. We show, for the first time, that both weights and activations can be quantized to 4-bits of precision while still achieving accuracy comparable to full precision networks across a range of popular models and datasets. We also show that exploiting these reduced-precision computational units in hardware can enable a super-linear improvement in inferencing performance due to a significant reduction in the area of accelerator compute engines coupled with the ability to retain the quantized model and activation data in on-chip memories.
rejected-papers
All of the reviewers agree that the experimental results are promising and the proposed activation function enables a decent degree of quantization. However, the main concern with the approach is its limited novelty compared to previous work on clipped activation functions. Minor comments: - Even though PACT is very similar to ReLU, the names are very different. - Please include a plot showing the proposed activation function as well.
test
[ "B1wlzrslf", "BkgW-ZteG", "S1-ToUJWz", "HJ1GluT7z", "r1VDedaXz", "SJgql_6Xf", "S1cLKVpZG", "Bk08vE6ZM", "HkQb4VabM", "By90AXpZM", "rJ1FCma-z" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author" ]
[ "The authors have addressed my concerns, and clarified a misunderstanding of the baseline that I had, which I appreciate. I do think that it is a solid contribution with thorough experiments. I still keep my original rating of the paper because the method presented is heavily based on previous works, which limits the novelty of the paper. It uses previously proposed clipping activation function for quantization of neural networks, adding a learnable parameter to this function. \n_______________\nORIGINAL REVIEW:\n\nThis paper proposes to use a clipping activation function as a replacement of ReLu to train a neural network with quantized weights and activations. It shows empirically that even though the clipping activation function obtains a larger training error for full-precision model, it maintains the same error when applying quantization, whereas training with quantized ReLu activation function does not work in practice because it is unbounded.\n\nThe experiments are thorough, and report results on many datasets, showing that PACT can reduce down to 4 bits of quantization of weights and activation with a slight loss in accuracy compared to the full-precision model. \nRelated to that, it seams a bit an over claim to state that the accuracy decrease of quantizing the DNN with PACT in comparison with previous quantization methods is much less because the decrease is smaller or equal than 1%, when competing methods accuracy decrease compared to the full-precision model is more than 1%. Also, it is unfair to compare to the full-precision model using clipping, because ReLu activation function in full-precision is the standard and gives much better results, and this should be the reference accuracy. Also, previous methods take as reference the model with ReLu activation function, so it could be that in absolute value the accuracy performance of competing methods is actually higher than when using PACT for quantizing DNN.\n\nOTHER COMMENTS:\n\n- the list of contributions is a bit strange. It seams that the true contribution is number 1 on the list, which is to introduce the parameter \\alpha in the activation function that is learned with back-propagation, which reduces the quantization error with respect to using ReLu as activation function. To provide an analysis of why it works and quantitative results, is part of the same contribution I would say.", "The parameterized clipping activation (PACT) idea is very clear: extend clipping activation by learning the clipping parameter. Then, PACT is combined with quantizing the activations. \n\nThe proposed technique sounds. The performance improvement is expected and validated by experiments. \n\nBut I am not sure if the novelty is strong enough for an ICLR paper. \n", "This paper presents a new idea to use PACT to quantize networks, and showed improved compression and comparable accuracy to the original network. The idea is interesting and novel that PACT has not been applied to compressing networks in the past. The results from this paper is also promising that it showed convincing compression results. \n\nThe experiments in this paper is also solid and has done extensive experiments on state of the art datasets and networks. Results look promising too.\n\nOverall the paper is a descent one, but with limited novelty. 
I am a weak reject", "We thank Reviewer3 for time and effort for reviewing our paper.\n\nWe have updated our draft with the following significant changes in contents (all the changes are colored in blue):\n\n1) We added accuracy comparison between PACT and DoReFa-Net (state-of-the-art in quantized neural network) for ResNet-50 (in Table 1) to demonstrate superiority of PACT. Notice that PACT outperforms DoReFa-Net with > 5% higher accuracy. This confirms that PACT enables no accuracy degradation for the quantized ResNet-50, which was not achievable by previous state-of-the-art activation quantization schemes. Also note that this superior accuracy is achieved without ANY tuning in hyper-parameters; the same hyper-parameters are used in both baseline and the PACT experiments and the networks are trained from scratch. This indicates that Deep Learning practitioners can simply replace ReLU with PACT to achieve robust accuracy when quantizing activation of their neural networks. This claim is supported by our extensive experimental results in Section 5 and Appendix E.\n\n2) We added two sections in the Appendix to explain why PACT is superior than both ReLU and Clipping activation function for quantized neural networks. In Appendix A, we show theoretical analysis that PACT is as expressive as ReLU when it is used as an activation function. Furthermore, we explain in Appendix B that PACT finds a balancing point between clipping and quantization errors to minimize their impact to classification accuracy. This analysis demonstrates novelty of PACT as a superior activation function for quantized neural network.\n\n3) We reflect comments from Reviewer 1 in the draft. We merged the contribution statements in Section 1 (Introduction), and clarified experimental settings to highlight that the baseline networks use ReLU as described in the references, and the PACT experiments use the identical hyper-parameters as the baseline, except that the activation function is replaced from ReLU to PACT. \n\nPlease read the updated draft and share your thoughts. Especially, we are curious about in which aspect the reviewer thinks that our paper lacks novelty. Any detail comments would be very appreciated for improving our paper.", "We thank Reviewer1 for time and effort for reviewing our paper.\n\nWe have updated our draft with the following significant changes in contents (all the changes are colored in blue):\n\n1) We reflect your comments in the draft. We merged the contribution statements in Section 1 (Introduction), and clarified experimental settings to highlight that the baseline networks use ReLU as described in the references, and the PACT experiments use the identical hyper-parameters as the baseline, except that the activation function is replaced from ReLU to PACT. \n\n2) We added accuracy comparison between PACT and DoReFa-Net (state-of-the-art in quantized neural network) for ResNet-50 (in Table 1) to demonstrate superiority of PACT. Notice that PACT outperforms DoReFa-Net with > 5% higher accuracy. This confirms that PACT enables no accuracy degradation for the quantized ResNet-50, which was not achievable by previous state-of-the-art activation quantization schemes. Also note that this superior accuracy is achieved without ANY tuning in hyper-parameters; the same hyper-parameters are used in both baseline and the PACT experiments and the networks are trained from scratch. 
This indicates that Deep Learning practitioners can simply replace ReLU with PACT to achieve robust accuracy when quantizing activation of their neural networks. This claim is supported by our extensive experimental results in Section 5 and Appendix E.\n\n3) We added two sections in the Appendix to explain why PACT is superior than both ReLU and Clipping activation function for quantized neural networks. In Appendix A, we show theoretical analysis that PACT is as expressive as ReLU when it is used as an activation function. Furthermore, we explain in Appendix B that PACT finds a balancing point between clipping and quantization errors to minimize their impact to classification accuracy. This analysis demonstrates novelty of PACT as a superior activation function for quantized neural network.\n\nPlease read the updated draft and share your thoughts. Any comments would be very appreciated for improving our paper.\n", "We thank Reviewer2 for time and effort for reviewing our paper.\n\nWe have updated our draft with the following significant changes in contents (all the changes are colored in blue):\n\n1) We added accuracy comparison between PACT and DoReFa-Net (state-of-the-art in quantized neural network) for ResNet-50 (in Table 1) to demonstrate superiority of PACT. Notice that PACT outperforms DoReFa-Net with > 5% higher accuracy. This confirms that PACT enables no accuracy degradation for the quantized ResNet-50, which was not achievable by previous state-of-the-art activation quantization schemes. Also note that this superior accuracy is achieved without ANY tuning in hyper-parameters; the same hyper-parameters are used in both baseline and the PACT experiments and the networks are trained from scratch. This indicates that Deep Learning practitioners can simply replace ReLU with PACT to achieve robust accuracy when quantizing activation of their neural networks. This claim is supported by our extensive experimental results in Section 5 and Appendix E.\n\n2) We added two sections in the Appendix to explain why PACT is superior than both ReLU and Clipping activation function for quantized neural networks. In Appendix A, we show theoretical analysis that PACT is as expressive as ReLU when it is used as an activation function. Furthermore, we explain in Appendix B that PACT finds a balancing point between clipping and quantization errors to minimize their impact to classification accuracy. This analysis demonstrates novelty of PACT as a superior activation function for quantized neural network.\n\n3) We reflect comments from Reviewer 1 in the draft. We merged the contribution statements in Section 1 (Introduction), and clarified experimental settings to highlight that the baseline networks use ReLU as described in the references, and the PACT experiments use the identical hyper-parameters as the baseline, except that the activation function is replaced from ReLU to PACT. \n\nPlease read the updated draft and share your thoughts. Especially, we are curious about in which aspect the reviewer thinks that our paper lacks novelty. Any detail comments would be very appreciated for improving our paper.", "Thank you for your review and showing interest to our work. To answer your question on the novelty of PACT, we put a detail response in the first comment above. \n\nAnd here's a brief summary: We claim that PACT is a new activation function that is best suitable for activation quantization. 
We claim that (1) PACT demonstrates (for the first time) no-accuracy-degraded 4-bit quantization (both weight and activation) for challenging ResNet-50 for ImageNet dataset, and (2) PACT UNIVERSALLY outperforms ReLU based activation quantization schemes for all the CNN models we tested.\n\nTo better explain why PACT outperforms ReLU based activation quantization schemes, we newly added Appendix A and B for deeper analysis of PACT. We showed that (1) PACT is as expressive as ReLU, and (2) PACT balances clipping and quantization errors when activation is quantized. \n\nAlso, please note that all the robust accuracies we achieved with PACT do NOT require any modification in the original hyper-parameters and network structures the baselines use.\n", "Thank you for your review and showing interest to our work. To answer your question on the novelty of PACT, we put a detail response in the first comment above. \n\nAnd here's a brief summary: We claim that PACT is a new activation function that is best suitable for activation quantization. We claim that (1) PACT demonstrates (for the first time) no-accuracy-degraded 4-bit quantization (both weight and activation) for challenging ResNet-50 for ImageNet dataset, and (2) PACT UNIVERSALLY outperforms ReLU based activation quantization schemes for all the CNN models we tested.\n\nTo better explain why PACT outperforms ReLU based activation quantization schemes, we newly added Appendix A and B for deeper analysis of PACT. We showed that (1) PACT is as expressive as ReLU, and (2) PACT balances clipping and quantization errors when activation is quantized. \n\nAlso, please note that all the robust accuracies we achieved with PACT do NOT require any modification in the original hyper-parameters and network structures the baselines use.\n\n", "Thank you for the detail comments. Here are our answers:\n\nQ1. Over-claim that PACT’s accuracy degradation is much less than others?\nA1. There are two aspects to consider for PACT's accuracy degradation. \nFirst, PACT outperforms (in terms of accuracy degradation) for all the bit-width configuration we compared, demonstrating superior robustness of PACT compared to the other quantization schemes. This clear trend can be seen in Tables 3-8, where the bold numbers indicate the one with lowest accuracy degradation for each column. We added Appendix A and B to analyze why PACT can outperform ReLU based activation quantization schemes.\n\nSecond, PACT's accuracy degradation is much lower for the challenging activation quantization (e.g., quantizing activation of binary/ternary weight networks) for ResNet-50. For example, as shown in Table 7, accuracy degradations for HWGQ and FGQ are 11.4% and 6.7%, respectively, whereas PACT's accuracy degradations are 9.1% and 2.4% for the same bit-precision. This gap in accuracy degradation becomes even larger when PACT is compared to the LPBN technique. In case of 3-bit activation with full-precision weight, LPBN's accuracy degradation is 19.9%, whereas PACT's accuracy degradation is only 1.4%. \n\n\nQ2. Baseline uses Clipping activation function?\nA2. No, our full-precision baselines use the same activation function (i.e., ReLU) as the network structure is proposed in the original paper. Tables 3-8 show that the accuracies for our full-precision baselines are comparable to the full-precision reference of the other work we compared. We will clarify this more in Section 5 and Appendix D.\n\n\nQ3. Do not separate contribution for “Why PACT works” with “PACT”\nA3. 
Thanks for the suggestion. We will merge the first two contributions to one. Furthermore, we now include enhanced analysis on PACT in Appendix A and B to provide deeper understanding about why PACT outperforms previous ReLU based activation quantization schemes.\n\n ", "Furthermore, we want to emphasize that PACT's robust accuracy is achieved WITHOUT any changes to the original model, except that ReLU is replaced with PACT. In other words, we used the same hyper-parameters (learning rate schedules, weight initialization, mini batch size, optimizers (ADAM or SGD with momentum), etc.) as well as the original models and network structures in all of our experiments. Furthermore, all of the training was done from scratch, showing that this work does not require any pre-trained weights for good initialization, or any warm-start or larger number of training epochs. \n", "The authors thank reviewers for their contribution to improving this paper.\n\nWe'd like to highlight the following 3 novel aspects of our PACT paper and we're hoping this communicates to the reviewers the significance of our work:\n\n(A) NO loss of accuracy during quantization: Over the past 3 years, there has been a tremendous amount of work focused on quantization (binarization / ternarization etc.) of neural networks. Most of these publications focused on applying these techniques to simpler networks (based on the CIFAR10, SVHN and MNIST datasets) where they reported little loss of accuracy. However, in cases when the same exact techniques were applied to larger models (based on the ImageNet dataset), significant loss of accuracy has been reported - leading us to conclude that all previous quantization techniques ONLY work when there is significant redundancy in the model and do not scale well to state of the art networks. \n\nAs Table 7 shows, our work is the first paper that shows state-of-the-art accuracy (<0.5% Top-1 accuracy degradation and slight IMPROVEMENT in Top-5 accuracy) using 4-bit quantizations for both weights and activations for ResNet-50 for ImageNet dataset. Furthermore, our work shows robust accuracy for ternary and binary weight network (with 4 and 2-bit activation, respectively), when all previous techniques showed significantly (2.3 ~ 4.2% Top-1) more accuracy degradation. \n\nThis is critically important since a significant number of models in Medical Imaging [1], Automotive [2] and other domains are based on transfer learning applied to ResNet like models (based on ImageNet) - and preserving accuracy is extremely critical in these domains due to its direct impact on safety and human life. Allowing 4-bit quantizations to work with the same level of accuracy, as discussed in depth in this paper (Section 6), also allows 2X improvement in Inference/Watt throughput in co./mparison to state-of-the-art 8-bit models - which is critical for power-constrained mobile, IoT and even Cloud hardware devices.\n[1] Litjens, Geert, et al. \"A survey on deep learning in medical image analysis.\" arXiv preprint arXiv:1702.05747 (2017).\n[2] Adam Grzywaczewski, “Training AI for Self-Driving Vehicles: the Challenge of Scale,” in https://devblogs.nvidia.com/parallelforall/training-self-driving-vehicles-challenge-scale/\n\n\n(B) Second, PACT shows superior performance, quantified in terms of accuracy degradation, for all the bit-configurations and networks tested in comparison to seven state-of-the-art quantization publications. 
From Tables 3-8, we can observe that the accuracy degradation (averaged over all the bit-configurations) of the compared publications for AlexNet, ResNet18, and ResNet50 are 6.1%, 5.1% and 8.1% (Top-1), respectively. In contrast, PACT's accuracy degradation (averaged across all the bit-configurations) is -0.2% (i.e., achieving slightly better accuracy than reference), 3.1% and 2.7%. (Tables 3-8 also highlight in bold which scheme achieves the lowest accuracy degradation for each bit-configuration.) \n\nThis showcases the reliability of PACT for quantization. Please note that large degradation in accuracy nullifies the use of large scale DNNs - rendering previous techniques largely unusable in most scenarios. For example, ResNet-18 takes 3.6B Flops to achieve 72.12% Top-1 accuracy, whereas ResNet-50 takes 7.6B Flops to achieve 77.15% Top-1 accuracy. Thus it's better to use ResNet-18 if accuracy degradation is >3% for ResNet-50. \n\n\n(C) Third, PACT has a unique characteristic that balances clipping and quantization errors when quantizing activation. We newly added a deeper analysis on why PACT outperforms other activation functions for activation quantization in Appendix A and B. In Appendix A, we showed the expressivity of PACT that it can be trained via SGD to tune the clipping levels properly in order to produce the output that the same network with ReLU would produce. This tuning capability was validated with the CIFAR10-ResNet20 experiment shown in Fig. 7 that PACT based networks could converge to almost identical training curves in comparison to the ReLU based network. \n\nIn Appendix B, we further explained why PACT provides robustness to activation quantization. We first observed that when activation is quantized, there is a trade-off between clipping and quantization errors depending on the clipping level (Fig. 8a). We demonstrated that PACT auto-tunes the clipping level during training to achieve optimal accuracy under activation quantization (Fig. 8b). Since PACT does not require sweeping to obtain the right clipping level, this is a very computationally feasible way. \n" ]
[ 5, 5, 5, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 5, 4, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_By5ugjyCb", "iclr_2018_By5ugjyCb", "iclr_2018_By5ugjyCb", "S1-ToUJWz", "B1wlzrslf", "BkgW-ZteG", "BkgW-ZteG", "S1-ToUJWz", "B1wlzrslf", "rJ1FCma-z", "iclr_2018_By5ugjyCb" ]
iclr_2018_HJDV5YxCW
Heterogeneous Bitwidth Binarization in Convolutional Neural Networks
Recent work has shown that performing inference with fast, very-low-bitwidth (e.g., 1 to 2 bits) representations of values in models can yield surprisingly accurate results. However, although 2-bit approximated networks have been shown to be quite accurate, 1 bit approximations, which are twice as fast, have restrictively low accuracy. We propose a method to train models whose weights are a mixture of bitwidths, that allows us to more finely tune the accuracy/speed trade-off. We present the “middle-out” criterion for determining the bitwidth for each value, and show how to integrate it into training models with a desired mixture of bitwidths. We evaluate several architectures and binarization techniques on the ImageNet dataset. We show that our heterogeneous bitwidth approximation achieves superlinear scaling of accuracy with bitwidth. Using an average of only 1.4 bits, we are able to outperform state-of-the-art 2-bit architectures.
rejected-papers
All of the reviewers find the approach interesting, but they have reservations regarding the practical impact and empirical evaluation. The paper needs improvement both on the motivation and on the experimental results by including more baseline methods and neural architectures.
train
[ "SkYPj5Hez", "H1Jn8QYeG", "SklRZUJ-G", "rkrJS-_Qz", "H1iBLz8Xf", "r1UdREX-M", "BJzQtmQbG", "rkiavXQ-z", "rkNfvQQbG", "Hy2OIQXWf", "H162w9x-z" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "public" ]
[ "This paper suggests a method for varying the degree of quantization in a neural network during the forward propagation phase.\n\nThough this is an important direction to investigate, there are several issues:\n\n1. Comparison with previous results is misleading:\na.\t1-bit weights and floating point activations: Rastegari et al. got 56.8% accuracy on Alexnet, which is better than this paper 1.4bit result of 55.2%.\nb.\tHubara et al. got 51% results on 1-bit weights and 2-bit activations included also quantization first and last layer, in contrast to this paper. Therefore, it is not clear if there is a significant benefit in the proposed method which achieves 51.5% when decreasing the activation precision to 1.4bit. \n\nTherefore, it is not clear that the proposed methods improve over previous approaches.\n\n2. It is not clear to me: in which dimension of the tensors are we saving the scale factor? If it is per feature map, or neuron, this eliminates the main benefits of quantization: doing efficient binarized operations when doing Weight*activation during the forward pass?\n\n3. The review of the literature is inaccurate. For example, it is not true that Courbariaux et al. (2016) “further improved accuracy on small datasets”: the main novelty there was binarizing the activations (which typically decreased the accuracy). Also, it is not clear if the scale factors introduced by XNOR-Net indeed allowed \"a significant improvement over previous work\" in ImageNet (e.g., see DoReFA and Hubara et al. who got similar results using binarized weigths and activations on ImageNet without scale factors). Lastly, the statement “Typical approaches include linearly placing the quantization points” is inaccurate: it was observed that logarithmic quantization works better in various cases. For example, see Miyashita, Lee and Murmann 2016, and Hubara et al.\n\n%%% After Author's Clarification %%%\nThis paper results seem more positive now, and I have therefore have increased my score, assuming the authors will revise the paper accordingly.\n\n", "The paper tries to maintain the accuracy of 2bits network, while uses possibly less than 2bits weights.\n\n1. The paper misses some more recent reference, e.g. [a,b]. The author should also have a discussion on them.\n\n2. Indeed, AlexNet is a good seedbed to test binary methods. However, it is more interesting and important to test on more advanced networks. So, I wish to see a section on testing with Resnet and GoogleNet.\n\nIndeed, the authors have commented: \"AlexNet with batch-normalization (AlexNet-BN) is the standard model ... acceptance that improvements made to accuracy transfer well to more modern architectures.\" So, please show that.\n\n3. The paper wants to find a good trade-off on speed and accuracy. The authors have plotted such trade-off on space v.s. accuracy in Figure 3(b), then how about speed v.s. accuracy?\n\nMy concern is that one-bit system is already complicated to implement. Indeed, the authors have discussed their implementation in Section 3.3, so, how their method works in practice? One example is Section 4 in [Courbariaux et al. 2016].\n\n4. Is trade-off between 1 to 2 bits really important? \n\nCompared with 2bits or ternary network, the proposed method at most achieving (1.4/2) compression ratio and (2/1.4) speedup (based on their Table 1). Is such improvement really important?\n\nReference:\n[a]. Trained Ternary Quantization. ICLR 2017\n[b]. Extremely low bit neural network: Squeeze the last bit out with ADMM. 
arvix 2017", "This paper presents an extension of binary networks, and the main idea is to use different bit rates for different layers so we can further reduce bitrate of the overall net, and achieve better performance (speed / memory). The paper addresses a real problem which is meaningful, and provides interesting insights, but it is more of an extension.\n\nThe description of the Heterogeneous Bitwidth Binarization algorithm is interesting and simple, and potentially can be practical, However it also adds more complication to real world implementations, and might not be an elegant enough approach for practical usages. \n\nExperiments wise, the paper has done solid experiments comparing with existing approaches and showed the gain. Results are promising.\n\nOverall, I am leaning towards a rejection mostly due to limited novelty. \n\n", "We seem to be having a little bit of a terminology mismatch. In our work, all references to first layer mean the input layer, just as references to the last layer mean the output layer. We see now how this can cause confusion and will update our lingo to exclusively use input layer and output layer.\n\nWith this terminology mismatch in mind, our description of Hubara's work matches the implementation in his github repo (https://github.com/itayhubara/BinaryNet). The architectures in our work are comparable.\n\nHopefully clarifying this point allows more direct comparison to other works and highlights the significance of our results.", "The statement \"Hubara uses 8-bit binarization for the first layer\" is not true. All layers in Hubara et al. were quantized to 1-bit weights and 2-bit activations, including the first layers (i.e. the first weight layer, and the first hidden layer). Perhaps the authors misunderstood: the input layer is indeed 8-bit, as always.\n\nThanks for the other clarifications. Still, taking into consideration all the results together, it is not clear to me that they are significantly better then previous works.", "Thanks for pointing us to this work. We weren’t aware of it, and it looks interesting and related. More details below.\n\nOur two papers focus on different problems, but have quite a bit of potential to improve each other in future work. BitNet presents a method for learning the integer number of bits to binarize each layer in a network while our work presents a method for learning the number of bits for each individual parameter within a layer. In other words, BitNet is learning bitwidths with layer granularity while we're learning bitwidths with parameter level granularity.\n\nTo dig into this difference a little deeper, noting figure 1 in BitNet, we can see that during training the bitwidth of each layer is quite noisy due to the large discontinuity between bits. This is because prior to our work, there was no method to binarize a tensor to a fractional bitwidth. If the authors of BitNet were to incorporate Heterogeneous Bitwidth layers, the figure 1 of BitNet could be made smooth and continuous. This should help quite a bit with improving the stability of BitNet's learning process.\n\nBeyond the fundamental contribution of each paper, our work also differs by binarizing to lower bitwidths. The majority of layers in BitNet end up being binarized to around 6 bits, which is a huge difference in representational power than the 2 bit and below widths we focus on. Traditionally, showing good performance on 1-2 bits has been significantly harder than e.g. 
4 bits and above.\n\nAdditionally, BitNet only focuses on the CIFAR and MNIST datasets. Many works have found that success on these datasets does not always transfer to more difficult datasets such as ImageNet. In contrast we provide a detailed analysis of results of ImageNet classification, including for the first time, a state-of-the-art model, Google’s Mobilenet.", "Reviewer 1 points out flaws in comparisons with related work.\n\n[Rastegari shows 56.8% accuracy, we only show 55.2% accuracy (row 4 of Table 1)] We measure the result of binarizing *all* layers of Alexnet (similar to Dong et al, from rows 1 and 2). Rastegari et al *do not binarize the first or last layer*. We consider Dong's result to be more challenging and chose to compare to it. However, we are happy to also compare to Rastegari's configuration if reviewers think it is misleading not to. Or we can just make this difference (currently noted in section 4.3) explicit in the table.\n\n[Hubara got 51% with 1-bit weights/2-bit activations when binarizing all layers, whereas we got 51.5% without binarizing first and last layer] We admit that this partly slipped by us. We chose our binarization configuration to compare to the many other pieces of work in Table 1, none of which (to our understanding) binarize first and last layers. Please note however, that Hubara uses 8-bit binarization for the first layer, which is arguably closer to \"no binarization\" than to the conventional 1- and 2-bit binarization. Further, other work (e.g. Tang et al AAAI 2017) has shown that binarizing the last layer, unlike the first, does not result in much accuracy loss. But we are happy to report Hubara's configuration if reviewers deem this as misleading.\n\n[Are scale factors stored per feature map or neuron?] No, they are stored per kernel exactly as in Rastegari et al. Scale factor multiplication can be done after the binary product by multiplying each output feature map by it's corresponding scalar. The amount of added work is equivalent to replacing ReLU activations with PReLU activations, which does not have a significant effect on network inference time. We can make this point more explicit in the Implementation section.\n\n[Misunderstandings of key innovations and contributions in related work] We are embarrassed at our misunderstanding of the literature. We thank the reviewer and will correct these and improve our understanding.", " The reviewer questions whether the performance improvements we claim are significant. In detail:\n\n[Comparison to related work] We thank the reviewer for these references. Reference A, on ternary networks, is quite similar to the work of Li et al that we reference in related work, but we will include further discussion. Reference B suggests training improvements (not binarization techniques), which we will look to adopt.\n\n[Evaluation in larger models: please show results on Resnet/GoogleNet] We selected AlexNet to illustrate most of the work, since it has been used exclusively in the community to compare approaches. However, do note that **unlike any paper so far**, we have shown results also on Mobilenet, which is the state-of-the art object recognition model as of fall 2017, from Google (contribution 4 in the intro, and last paragraph of section 4.3). Mobilenet yields comparable accuracy to Resnet and GoogleNet, but is also much faster than them, and is therefore a challenging benchmark. 
Perhaps we should highlight this result better?\n\n[Complexity of implementation] Gaining performance from binarized models is indeed complex, especially on a CPU. In fact, no paper provides the many crucial details, such as machine specific vectorization/tiling/loop fusion algorithms essential to gaining real-world speedup. Even Courbariaux et al, section 4, only gives a sketch in this direction. Admittedly, heterogeneous bitwidths will add to this complexity, so this is a fair concern on a CPU. However, implementation is fairly straightforward on an FPGA, because we simply lay out custom gate patterns for each bit pattern to be XNOR'd against (see e.g. https://arxiv.org/pdf/1612.07119.pdf , esp. section 4.3.2 for a similar implementation in the homogeneous bitwidth case). The custom pattern for processing 2 bits is only slightly different from the 1-bit version. Perhaps we can focus the implementation section on sketch how to perturb this standard FPGA-based design?\n\n[No speed vs accuracy number] As mentioned above, almost no paper in this area reports measured speedups, just improvements in coarsely estimated instruction counts. In the FPGA context, we could similarly report coarse estimates the number of cycles, chip real estate and power consumed. However, roughly speaking these (especially the latter two that are our goal) are simply proportional to average bitwidth of the operations programmable into hardware. We could make this explicit in the text when we discuss the implementation above.\n\n[A 1.4/2x = 0.7x reduction is not significant] Although this is a subjective call and hard to argue against, it is worth noting that our gains are *on top of* optimized binary implementations. Further, note that FPGA implementations of DNNs are now running at cloud scale e.g. in the Azure cloud (https://www.microsoft.com/en-us/research/blog/microsoft-unveils-project-brainwave/). A 30% improvement in space/energy efficiency with no accuracy loss is considered quite significant at these scales.", "The reviewer describes our work as seeking to use different bit rates for different layers, and points out that the work is not novel enough overall.\n\nWe would like to point out that in fact, we are not looking to simply binarize different layers at different bitwidths (although as a baseline we do so in figure 3(a)). In that baseline, assuming we select *up front* what the bitwidth k of each layer is, we use the not-so-novel approach of simply applying standard k-bit binarization algorithms with different k to each layer. This is indeed simple to do, but figure 3(a) shows that such naive selection only provides \"linear\" increase in accuracy (e.g., using 1.5 bits on average gives only the average of 1-bit and 2-bit accuracies, which is interesting but perhaps not surprising). \n\nInstead, in our main contribution, we are asking the question \"if we *learned* what bitwidth to assign to *each* parameter (jointly with its value), could we get better-than-linear speedup\". This learning of bitwidths is what is novel about our goal, and techniques.\n\nLearning bitwidths requires changing the training algorithm in a non-obvious way, using the mask-generation scheme of algorithm 1, and the middle-out scheme for thresholding. We should emphasize that the operations of equation 4, Algorithm 1 and equation 6 are not performed in a one-time \"post-processing\" step, but on every forward propagation during training. 
We submit that this learning algorithm is quite novel.\n\nGiven that the bitwidth is such a fundamental aspect of model parameters, we hope that learning them jointly with values should be of broad interest to the ICLR community.", " We thank the reviewers for their detailed and useful feedback. Below, we make two summary points in response and follow up with some detailed responses to the issues raised.\n\nAt the highest level, we realize from the feedback we did not make the motivation and the technical significance of our work sufficiently clear. \n\nFirst, the motivation. We are looking to adapt binarization algorithms for implementation on FPGAs. On standard FPGA-based implementations (e.g., https://arxiv.org/pdf/1612.07119.pdf ), every XNOR product is implemented as a separate hardware structure proportional to the number of bits in the product. Typically, both the real estate (number of gates) and power consumed (watts) by an implementation are proportional to the average bitwidth of the data. A (2 to 1.4 =) 30% average reduction in real estate and power **at no accuracy loss** is quite attractive. We intend to rework our introduction and implementation sections to highlight this perspective.\n\nSecond, the significance. Traditionally, ML algorithms look to optimize/learn the *value* of every parameter. We extend the optimization criterion in a simple, but fundamental, way to include the representation. We ask whether it is (a) feasible and (b) useful to *jointly* learn both the value *and the bitwdith* of every parameter. We provide an affirmative answer on both counts, and to our knowledge, are the first to do so. This advance requires a non-obvious change to the training scheme (i.e., the middle-out scheme to select a variable-bitwidth mask, Algorithm 1). We also show experimentally (sec 4.2.1 and Figure 3a) that a simpler approach that does not pick the bitwidths in a data driven manner does not give the same bump in performance. Again, we will reframe our paper to highlight this.\n\nDetailed responses are provided below to each reviewer.", "A comparison with BitNet -- (https://arxiv.org/pdf/1708.04788.pdf) would be helpful. Although they do not go down to the very low precision (1-2 bit) case as with your paper, they do learn a unique precision for the parameters of each layer via SGD." ]
[ 6, 5, 4, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_HJDV5YxCW", "iclr_2018_HJDV5YxCW", "iclr_2018_HJDV5YxCW", "H1iBLz8Xf", "BJzQtmQbG", "H162w9x-z", "SkYPj5Hez", "H1Jn8QYeG", "SklRZUJ-G", "iclr_2018_HJDV5YxCW", "iclr_2018_HJDV5YxCW" ]
iclr_2018_SJgf6Z-0W
Predicting Multiple Actions for Stochastic Continuous Control
We introduce a new approach to estimate continuous actions using actor-critic algorithms for reinforcement learning problems. Policy gradient methods usually predict one continuous action estimate or parameters of a presumed distribution (most commonly Gaussian) for any given state, which might not be optimal as it may not capture the complete description of the target distribution. Our approach instead predicts M actions with the policy network (actor) and then uniformly samples one action during training as well as testing at each state. This allows the agent to learn a simple stochastic policy that has an easy-to-compute expected return. In all experiments, this facilitates better exploration of the state space during training and converges to a better policy.
rejected-papers
All of the reviewers agree that the paper presents strong experimental results on continuous control benchmarks. The reviewers raised concerns regarding the analysis of the behavior of the algorithm, the possible impact of the technique, and requested more references and comparison with related work. The paper has significantly improved since the initial submission, but is still not fully satisfactory to the reviewers, partly due to the large extent of the changes needed.
train
[ "Sk4kCOIEM", "B1B3e0Oef", "HJoqViKlM", "HyRqndjez", "rJrwYUp7G", "r1l7FUa7M", "Hy9nKUamM", "r125_LpQf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "The relationship with SVG0, is that both are off-policy stochastic algorithms learned with the reparametrization trick. Currently the comparisons you have are with DDPG (deterministic, off-policy), A3C(stochastic, on-policy) and MAPG(stochastic, on-policy). So it is difficult to separate which gains are simply due to stochastic, off-policy learning and which might be due to the specific multi-modal distribution used.\n\nOverall, the paper reads significantly better now and does a better job of placing this work in the context of earlier results, but I believe it will be limited interest and still misses a key control.", "This work introduces a particular parametrization of a stochastic policy (a uniform mixture of deterministic policies). They find this parametrization, when trained with stochastic value gradient outperforms DDPG on several OpenAI gym benchmarks.\n\nThis paper unfortunately misses many significant pieces of prior work training stochastic policies. The most relevant is [1] which should definitely be cited. The algorithm here can be seen as SVG(0) with a particular parametrization of the policy. However, numerous other works have examined stochastic policies including [2] (A3C which also used the Torcs environment) and [3].\n\nThe wide use of stochastic policies in prior work makes the introductory explanation of the potential benefits for stochastic policies distracting, instead the focus should be on the particular choice and benefits of the particular stochastic parametrization chosen here and the choice of stochastic value gradient as a training method (as opposed to many on-policy methods).\n\nThe empirical comparison is also hampered by only comparing with DDPG, there are numerous stochastic policy algorithms that have been compared on these environments. Additionally, the DDPG performance here is lower for several environments than the results reported in Henderson et al. 2017 (cited in the paper, table 2 here, table 3 Henderson) which should be explained.\n\nWhile this particular parametrization may provide some benefits, the lack of engagement with relevant prior work and other stochastic baselines significant limits the impact of this work and makes assessing its significance difficult.\n\nThis work would benefit from careful copyediting.\n\n[1] Heess, N., Wayne, G., Silver, D., Lillicrap, T., Erez, T., & Tassa, Y. (2015). Learning continuous control policies by stochastic value gradients. In Advances in Neural Information Processing Systems (pp. 2944-2952).\n\n[2] Mnih, V., Badia, A. P., Mirza, M., Graves, A., Lillicrap, T., Harley, T., ... & Kavukcuoglu, K. (2016, June). Asynchronous methods for deep reinforcement learning. In International Conference on Machine Learning (pp. 1928-1937).\n\n[3] Schulman, J., Moritz, P., Levine, S., Jordan, M., & Abbeel, P. (2015). High-dimensional continuous control using generalized advantage estimation. arXiv preprint arXiv:1506.02438.\n\n", "In this paper, the authors investigate a simple method for improving the performance of networks trained with DDPG: instead of outputting a single action, they output several actions (through distinct output layers), and choose one uniformly at random. The selected action is updated using deterministic policy gradient. The critic Q is updated with a Bellman backup where the the choice of the action is marginalized out. 
Authors show improved performance on a large number of standard continuous control environment (openAI gym and TORCS).\n\nThe paper is well written, and the idea seems to work perhaps surprisingly well. The authors do a good job of investigating the behavior of their algorithm (in particular the increase of standard deviation in states where multiple optimal actions exist). \n\nSimilar ideas (mixture of gaussians for action distribution in policy gradient setups, or multi-modal action distribution through the use of latent variables) are often difficult to make work - I am curious why this particular method works so well.\nIn particular, it would be interesting to investigate how the algorithm avoids collapsing all actions into the same one; as implied by section 3.2, in a state with multiple optimal actions, there is no difference in loss between having all actions be nearly identical (and optimal), and all actions being distinct optimal actions. Furthermore, as the loss does not encourage diversity, once two actions are set to be similar in a state, intuitively it would be hard for the actions to become distinct again. Imagine for instance the cart pole problem with M=2. If both action layers start with the same 'tendency' (towards clock-wise or counter clock-wise motion), it is likely that the same tendency would be reinforced for both, and the network with M=2 would end up having a similar behavior to a classical network with M=1.\n\nIs this problem avoided by using a large value of M? It would be interesting to investigate the behavior of the algorithm in a toy environment (perhaps a simple 2d navigation task with distinct 'paths' with same cost) where the number of distinct basins of optimality is know for various states, and investigate in more details how diversity is maintained (perhaps as a function of M).\n\n\nMinor: \n- typo rho -> $\\rho$\n- Given the paper fits comfortably within the page limit, it would have been worthwhile to give mathematical details to the Algorithm 1 box (even if they are easy to find in text or appropriate references)", "This paper describes an approach to stochastic control using RL that extends DDPG with a stochastic policy. A standard DDPG setup is extended such that the actor now produces M actions at each timestep. Only one of the M actions will be executed in the environment using a uniform sampling. The sampled action is the only that will receive a gradient update from the critic network. The authors demonstrate that such a stochastic policy performs better on average in a series of benchmark control tasks.\n\nI find the general idea of the work compelling, but the particular approach is rather poor. The fact that we are choosing the number of modes in the uniform distribution is a bit underwhelming (a more compelling architecture could have proposed a policy conditioned on gaussian noise for example, thus having better coverage of the distribution). I found the proposed apprach to be under-analyzed and the stochastic aspects of the policy are undervalued. The main claim being argued in the paper is that the proposed stochastic policy has better final performance on average than a deterministic policy, but the only practical difference seems to be a slightly more structured approach to exploration. 
\nHowever, very little attention is paid to trying different exploration methods with the deterministic policy (incidentally, Ornstein-Uhlenbeck process noise is not something I'm familiar with, a citation to the use of this noise for exploration as well as a more explicit explanation would have been appreciated). One interpretation is that each of the M sub-policies follows a different mode of the Q-value distribution over the action space. But is this indeed the case? There is a brief analysis of this with cartpole, but a more complete look at how actions are clustered in the action space would make this paper much more compelling. Even in higher-dimensional action spaces, you could look at a t-SNE projection or cluster analysis to try and see how many modes the agent is reasoning over. Additionally, the baseline agent should have used additional exploration methods as these can quickly change the performance of the agent.\n\nI also think that better learning is not the only redeeming aspect of a stochastic policy. In the face of a non-stationary environment, a stochastic policy will likely be much more robust. Additionally, it will have much better performance against adversarial environments. Given the remaining available space in the paper it would have been interesting to provide more insight into the proposed methods gains in these areas.", "\nThank you for your comments! We are happy that you value the simplicity of our approach and like the paper. We will reply to your comments in detail.\n\n>>> Similar ideas (mixture of gaussians for action distribution in policy gradient setups, or multi-modal action distribution through the use of latent variables) are often difficult to make work - I am curious why this particular method works so well.\n\nWe have experimented with Gaussian mixtures also in supervised learning tasks and found that they can suffer from numerical instabilities especially in higher dimensions due to the log() and exp() terms. Consider the scenario where exactly one action that is optimal. A mixture model would need to predict (close to) zero variance which can easily cause instabilities. Our method, however, does not change the original model at all in terms of loss computation and gradients. The sampling is virtually transparent to the computation inside the model. \nAnother reason - that we found after comments from R1 - is that our method naturally explores the action space, as initially the actions are randomly distributed due to the initialization (see also our reply to R1 regarding exploration).\n\n>>> In particular, it would be interesting to investigate how the algorithm avoids collapsing all actions into the same one; as implied by section 3.2, in a state with multiple optimal actions, there is no difference in loss between having all actions be nearly identical (and optimal), and all actions being distinct optimal actions.\n\nThis is an interesting observation. We will explain our intuition. In loss space optimal actions are minima in different locations. Since our network is randomly initialized all M actions are distributed over the action space in random locations (in our experiments it helps when actions are normalized for example to [-1,1]). During training with gradient descent, each action will move to a close (local) minimum which is unlikely the same for all of them if multiple of them exist. 
In theory you are correct that nothing explicitly prevents the model for learning one single optimal action for all M proposals.\n\n>>> Furthermore, as the loss does not encourage diversity, once two actions are set to be similar in a state, intuitively it would be hard for the actions to become distinct again\n\nBefore writing the paper, we have extensively discussed adding a diversity term to the actions, that encourages actions to be distinct. However, this would act against learning an optimal policy, which would invalidate our theoretical guarantee that all M action proposals have the same expected performance. Diversity would be a trade-off for performance. One option would be to decay the diversity term over time, but since the initialization is already diverse, we do not expect this to have a big influence. Separating two identical actions can be done either with exploration noise or naturally in a non static, stochastic environment, where the same action in the same current state could receive different rewards.\n\n>>> It would be interesting to investigate the behavior of the algorithm in a toy environment (perhaps a simple 2d navigation task with distinct 'paths' with same cost) where the number of distinct basins of optimality is know for various states, and investigate in more details how diversity is maintained (perhaps as a function of M).\n\nThis is indeed an interesting experiment that we will investigate in the future. The exploration experiment that we have added to the paper shows the same property. During training without exploration, initially equally scoring actions are kept around making the agent explore more possibilities until it is able to find a good strategy. This means that not only the final policy benefits from multiple actions, but also the training is improved since the network does not need to decide for one out of several possible “paths” early.\n\n>>> typo rho -> $\\rho$\n\nThank you! We have fixed it.\n\n>>> Given the paper fits comfortably within the page limit, it would have been worthwhile to give mathematical details to the Algorithm 1 box (even if they are easy to find in text or appropriate references)\n\nWe have extended Algorithm 1 with more details.\n", "Thank you for your comments! We are pleased to hear that you are intrigued by our approach. We have made changes to the paper (including additional experiments) based on your suggestions. We will reply to your comments in detail in the following.\n\n>>> choosing the number of modes in the uniform distribution is a bit underwhelming (a more compelling architecture could have proposed a policy conditioned on gaussian noise for example, thus having better coverage of the distribution)\n\nIt is indeed possible to learn a parametrized distribution, for example by predicting a Gaussian mixture model (e.g. by adapting Mixture Density Networks, C. M. Bishop, 1994). However, we see two reasons why our simple approach is compelling. First, with our method we do not constrain the actions to follow a predefined distribution, thus it can in theory learn any optimal action sub-space. Second, usually parametrized distributions come with numerical difficulties in high dimensional output spaces. Our model is easy to train since it does not change the original model with additional numerically challenging computations (e.g. exp() in GMMs). 
(see also our reply to R3 on predicting distributions)\n\n>>> very little attention is paid to trying different exploration methods with the deterministic policy\n\nThank you for this hint! We have added an experiment where we do not use any exploration during training (newly added Section 4.4). In the Pendulum environment, DDPG is not able to explore sufficiently, especially in the beginning of training, and shows poorer performance, while MAPG converges to a high performance. As you suspected, this shows that the stochastic nature of our method helps not only to learn a better policy but also results in better exploration during training. In the Cheetah environment we see that exploration helps late during training, indicating that is helps the M actions to become diverse. \n\n>>> Ornstein-Uhlenbeck process noise is not something I'm familiar with, a citation to the use of this noise for exploration as well as a more explicit explanation would have been appreciated\n\nThank you! We have refined this section in the paper and added a reference. Further we analyze the performance of our method in the added Section 4.4 without exploration.\n\n>>> One interpretation is that each of the M sub-policies follows a different mode of the Q-value distribution over the action space. \n\nThis is correct. We have improved Figure 3 to be more easily readable, where we visualize this intuition for the Pendulum task. The interesting aspect of MAPG is that it does not parametrize the distribution, thus it can in theory learn a point-wise approximation for any distribution.\n\n>>> a stochastic policy will likely be much more robust. Additionally, it will have much better performance against adversarial environments.\n\nWe agree that this might be the case and are eager to try this! \n", "\nThank you for your feedback and the additional references. We have now added and discussed your suggestions. We reply to your review in detail.\n\n>>> The wide use of stochastic policies in prior work makes the introductory explanation of the potential benefits for stochastic policies distracting, instead the focus should be on the particular choice and benefits of the particular stochastic parameterization chosen here and the choice of stochastic value gradient as a training method\n\nIn addition to the suggested references we have cleaned up and improved this section. Further we have added an experiment where we train without an exploration mechanism and can show, that the stochasticity of our method is enough to sufficiently explore the solution space in the beginning of training. MAPG without an explicit exploration mechanism still achieves a good performance. For some tasks DDPG however, is unable to learn a good policy without exploration (see also our reply to R1 regarding exploration).\nFurther, we have emphasized the theoretical benefits of our method.\n\n>>> The algorithm here can be seen as SVG(0) with a particular parametrization of the policy.\n\nWe have carefully compared SVG(0) with our method but do not see much similarity between the two algorithms. Could you clarify your thoughts here? SVG(0) learns a variance for an action while we predict multiple actions that do not necessarily follow a given (Gaussian) distribution.\n\n>>> DDPG performance here is lower for several environments than the results reported in Henderson et al. 2017 \n\nThank you for the hint. 
We have made clear in the paper that this difference comes from the fact, that we train for 2,000 epochs for all environments and M-values for a fair and reproducible experimental setup. \n\n>>> The empirical comparison is also hampered by only comparing with DDPG\n\nThank you! We have added A3C results to the Table 2. \n", "We have updated the paper based on the feedback and comments of all reviewers. Here we list the major changes in the manuscript:\n\n - Changed abstract and introduction to reflect that MAPG is a general policy gradient algorithm.\n\n - Updated Algorithm 1 with equations and more details.\n\n - Added Section 4.4, exploring the effect of exploration noise onto the training together with Figure 4.\n\n - Improved Figure 3.\n\n - Added additional references (e.g. A3C).\n\n - Added A3C results to Table 2.\n\n - Revised introduction.\n\n - Several small corrections of typos." ]
[ -1, 3, 7, 4, -1, -1, -1, -1 ]
[ -1, 4, 3, 4, -1, -1, -1, -1 ]
[ "Hy9nKUamM", "iclr_2018_SJgf6Z-0W", "iclr_2018_SJgf6Z-0W", "iclr_2018_SJgf6Z-0W", "HJoqViKlM", "HyRqndjez", "B1B3e0Oef", "iclr_2018_SJgf6Z-0W" ]
iclr_2018_r1BRfhiab
The Principle of Logit Separation
We consider neural network training, in applications in which there are many possible classes, but at test-time, the task is to identify only whether the given example belongs to a specific class, which can be different in different applications of the classifier. For instance, this is the case in an image search engine. We consider the Single Logit Classification (SLC) task: training the network so that at test-time, it would be possible to accurately identify if the example belongs to a given class, based only on the output logit for this class. We propose a natural principle, the Principle of Logit Separation, as a guideline for choosing and designing losses suitable for the SLC task. We show that the cross-entropy loss function is not aligned with the Principle of Logit Separation. In contrast, there are known loss functions, as well as novel batch loss functions that we propose, which are aligned with this principle. In total, we study seven loss functions. Our experiments show that indeed in almost all cases, losses that are aligned with the Principle of Logit Separation obtain a 20%-35% relative performance improvement in the SLC task, compared to losses that are not aligned with it. We therefore conclude that the Principle of Logit Separation sheds light on an important property of the most common loss functions used by neural network classifiers.
rejected-papers
All of the reviewers have found some aspects of the formulation interesting, but they raised concerns regarding the practical use of the experimental setup.
train
[ "B1mIOqdlz", "ryA44e5xf", "HyCT3vclM", "HJxlZlx7G", "B1Jp1ggmG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author" ]
[ "The paper is well-written which makes it easy to understand its main\nthrust - choosing loss functions so that at test time one can\naccurately (and speedily) determine whether an example is in a given\nclass, ie loss functions which are aligned with the \"Principle of Logit\nSeparation (PoLS)\". \n\nWhen the \"Principle of logit separation\" was first given (second page)\nI found it confusing and difficult to parse (too many \"any\"s, I could\nnot work out how the quantification worked). However, the formal\ndefinition (Definition 2.1) was fine. Why not just use this - and drop\nthe vague, wordy definition?\n\nThe paper is fairly 'gentle'. For example, we are taken through\nexamples of loss functions which satisfy \"PoLS\" and those which don't.\nNo 'deep' mathematical reasoning is required - but I don't see this as\na deficiency.\n\nThe experiments are reasonably chosen and, as expected, show the\nbenefits of using PoLS-aligned loss functions.\n\nMy criticism of the paper is that I don't think there is enough\nmotivation. We have that normal classification is linear in the number\nof classes. This modest computational burden (ie just linear),\napparently, is too slow for certain applications. I would like more\nevidence for this, including some examples of this problem including\nin the paper. This is lacking from the current version.\n\n\ntypos, etc\n\nmax-maring -> max-margin\nthe seconds term -> the second term\n\n", "The paper addresses the problem of a mismatch between training classification loss and a loss at test time. This is motivated by use cases in which multiclass classification problems are learned during training, but where binary or reduced multi-class classifications is performed at test time. The question for me is the following: if at test time, we have to solve \"some\" binary classification task, possibly drawn at random from a set of binary problems (this is not made precise in the paper), then why not optimize the same classification error or a surrogate loss at training time? Instead, the authors start with a multiclass problem, which may introduce a computational burden. when the number of classes is large as one needs to compute a properly normalized softmax. The authors now seem to ask, what if one were to use a multi-classification loss at training time, but then decides at test time that a binary classification of one-vs-all is asked for. \n\nIf one buys into the relevance of the setting, then of course, one is faced with the problem that the multiclass logits (aka raw scores) may not be calibrated to be used for binary classification by applying a fixed threshold. The authors call this sententiously \"Principle of logit separation\". Not too surprisingly, the standard multiclass losses do not have the desired property, however approaches that reduce multi-class to binary classification at training time do, namely unnormalized models with penalized log Z (self-normalization), the NCE approach, as well as (the natural in the proposed setting) binary classification loss. I find this almost a bit circular in the line of argumentation, but ok. It remains odd that while usually one has tried to reduce multiclass to binary, the authors go the opposite direction.\n\nThe main technical contribution of the paper is the batch-nornalization that makes sure that multiclass logits across mini-batches of data are better calibrated. One can almost think of that as an additional regularization. 
This seems interesting and does not create much overhead, if one applies mini-batched SGD optimization anyway. However, I feel this technique would need to be investigated with regard to general improvements in a multiclass setting and as such also benchmarked relative to other methods that could be applied. \n", "This paper explores a neat, simple idea intended to learn models suitable for fast membership queries about single classes (\"is this data point a member of this class [or set of classes]?\"). In the common case when the class prediction is made with a softmax function minimizing 1-of-K multiclass cross-entropy loss, this cannot in general be determined without essentially evaluating all K logits (inputs to the softmax). This paper describes how other losses (such as the natural multilabel cross-entropy) do not suffer this problem because all true labels' logits rank above all false labels' (so that any membership query can be answered by choosing a threshold), and models trained to minimize these losses perform better on class membership metrics. One of the new losses suggested, the batch cross-entropy, is particularly interesting in keeping with the recent work on using batch statistics; I would like to see this explored further (see below). The paper is well-written.\n\nI am not sure of the relevance of this work as written. The authors discuss how related work (e.g. Grave et al.) scales computationally with K, which is undesirable; however, training the entire network with a non-CE objective function is an end-to-end model change, and practical uptake may suffer without further justification. The problem (and the proposed solution by changing training objective) is of interest because standard approaches ostensibly suffer unfavorable runtime-to-performance tradeoffs, so this should be demonstrated. I would be more comfortable if the authors actually evaluated runtime, preferably against one or two of the other heuristic baselines they cite. \nThe notation is a little uneven. The main idea is stated given the premise of Fig. 1, that there exist logits which are computed and passed through a softmax neuron, but this is never formally stated. (There are a few other very minor quibbles, e.g. top of pg. 6: sum should be over 1...k). ", "We write this general comment as we believe an important aspect of our work was overlooked by the reviewers. We believe that basic understanding of fundamental building blocks of neural networks is a topic or high significance to ICLR. Neural network models already exist for a few decades and became very common over the past few years. Still, we lack basic understanding about the most common building blocks, such as the softmax function + cross entropy training mechanism and other common loss functions such as the binary cross entropy and NCE. Our work is concerned with understanding basic characteristics of the logits that result from a variety of the most common loss functions. We outline a simple property, and show that loss functions that optimize for this property at training time will yield logits at test time with characteristics that are much more suitable for some tasks. Moreover, we show that the most common softmax + cross entropy training mechanism is not optimizing for our proposed property. 
By doing so, we contribute to the basic understanding of the most common building blocks of neural network models.", "In the setting we consider, we are presented with a multi-class classification problem, such that every example has exactly one correct class. At test time we are interested in the ability to perform fast binary (one-vs-all) classification of any given class, based on the class’s logit alone. We name this task Single Logit Classification (SLC). This setting appears naturally in the case of search engines, when the class for the binary classification is chosen according to user’s behavior. Another setting this problem applies to is the the task of face verification, where at test time we want to know if a given image is of person x of not. \n\nWe agree on the final conclusion from this review, that the softmax + cross entropy training mechanism is not suited for the task we describe, and other loss functions should be used. However, we claim this conclusion and the insights we derive towards it are far from being trivial. We address specific claims below. \n\n\"if at test time, we have to solve \"some\" binary classification task, possibly drawn at random from a set of binary problems (this is not made precise in the paper), then why not optimize the same classification error or a surrogate loss at training time?\"\nIn our work, we draw the same conclusion as the reviewer, that the multi-class classification should not be used in this case. However, this is far from being trivial as the reviewer suggests, for several reasons:\n1) The problem we consider is naturally a multi-class classification problem. Among all possible classes, exactly one class is the correct one. For example, consider of classifying a face image where the classes are a large number of persons.\n2) Several existing works do use the multiclass classification problem, instead of a more suitable loss for the problems they consider. For example, the task of face verification is often done by learning a face classifier over thousands of classes, using variants of the multi-class softmax + cross entropy mechanisms. Then, for face verification, a distance between representations of two faces in the last network layer is measured. Works that do that include:\nParkhi, Omkar M., Andrea Vedaldi, and Andrew Zisserman. \"Deep Face Recognition.\" BMVC. Vol. 1. No. 3. 2015.\nLiu, Weiyang, et al. \"SphereFace: Deep Hypersphere Embedding for Face Recognition.\" arXiv preprint arXiv:1704.08063 (2017).\nTaigman, Yaniv, et al. \"Deepface: Closing the gap to human-level performance in face verification.\" Proceedings of CVPR 2014.\nWe show that using better suited loss functions may replace the need to compare face representations in the last network layer. \n3) We outline the set of loss functions that can be used for the setting we present. Specifically, we show that suitable loss functions are ones with good logit separation characteristics. \n\n\"Not too surprisingly, the standard multiclass losses do not have the desired property, however approaches that reduce multi-class to binary classification at training time do, namely unnormalized models with penalized log Z (self-normalization), the NCE approach, as well as (the natural in the proposed setting) binary classification loss\"\nAgain, the reviewer finds our conclusion well motivated and sound. However, we argue that this is far from being trivial, for several reasons:\n1) Many existing works do not practice this conclusion, such as the examples listed above. 
\n2) The reviewer claims that, trivially, approaches that reduce multi-class to binary classification at training time perform well in the SLC task, such as self-normalization and others. While binary cross-entropy and NCE indeed reduce multi-class to binary classification at training time, other losses we consider do not do this reduction, but still perform well on the SLC task. For example, in contrary to the reviewer's statement, self-normalization and other penalized log Z losses do not reduce multi-class to binary classification at training time. Such losses perform well on the SLC task for another reason, which is good logit separation properties, as we show in this work. In addition to self-normalization and other penalized log Z losses, we show that other losses perform well on the SLC task, such as the batch cross-entropy and batch max-margin, which also do not reduce multi-class to binary classification at training time, and again, we show that the reason for the desired behavior is the principle of logit separation.\n\nMoreover, the value of this work is by shedding light on an important property of the most common loss functions. The use of loss mechanisms such as the softmax + cross entropy, binary cross entropy and NCE is extremely abundant nowadays. Yet, basic properties about the resulting logits are still poorly understood, and this is, in our view, the most significant point of this work, which was overlooked in this review. \n" ]
[ 6, 3, 4, -1, -1 ]
[ 3, 4, 4, -1, -1 ]
[ "iclr_2018_r1BRfhiab", "iclr_2018_r1BRfhiab", "iclr_2018_r1BRfhiab", "iclr_2018_r1BRfhiab", "ryA44e5xf" ]
iclr_2018_SJD8YjCpW
Balanced and Deterministic Weight-sharing Helps Network Performance
Weight-sharing plays a significant role in the success of many deep neural networks, by increasing memory efficiency and incorporating useful inductive priors about the problem into the network. But understanding how weight-sharing can be used effectively in general is a topic that has not been studied extensively. Chen et al. (2015) proposed HashedNets, which augments a multi-layer perceptron with a hash table, as a method for neural network compression. We generalize this method into a framework (ArbNets) that allows for efficient arbitrary weight-sharing, and use it to study the role of weight-sharing in neural networks. We show that common neural networks can be expressed as ArbNets with different hash functions. We also present two novel hash functions, the Dirichlet hash and the Neighborhood hash, and use them to demonstrate experimentally that balanced and deterministic weight-sharing helps with the performance of a neural network.
rejected-papers
An empirical study of weight sharing for neural networks is interesting, but all of the reviewers found the experiments insufficient without enough baseline comparisons.
test
[ "rJqGz8tlf", "rybTRlqgz", "BkmW-pbMG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The manuscript advocates to study the weight sharing in a more systematic way by proposing ArbNets which defines the weight sharing function as a hash function. In this framework, any existing neural network architectures, including CNN and RNN, could be incorporated into ArbNets.\n\nThe manuscript is not well written. There are multiple grammar errors and typos. Content-wise, it is already well known that CNN and RNN can be expressed as general MLP with weight sharing. The introduction of ArbNets does not bring much value or insight to this area. So it seems that most content before experimental section is common sense.\n\nIn the experimental section, it is interesting to see how different hash function with different level of entropy can affect the performance of neural nets. However, this single observation cannot enrich the whole manuscript. Two questions:\n(1) What is the definition of sparsity here, and how is it controlled?\n(2) There seems to be a step change in Figure 3. All the results are either between 10 to 20, or near 50. And the blue line goes up and down. Is this expected?", "This paper proposes a general framework for studying weight sharing in neural networks. They further suggest two hash functions and study the role of different properties of these hash functions in the performance.\n\nThe paper is well-written and clear. It is a follow-up on Chen et al. (2015) which introduced HashedNets. Therefore, the idea of using hash functions is not novel. This paper suggests a framework to study different hash functions. However, the experimental results do not seem adequate to validate this framework. One issue here is lack of a baseline for performance comparison. Otherwise, the significance of the results is not clear.\n\n\n", "This paper has limited novelty, the ideas has been previously proposed in HashedNet and Deep Compression. The experimental section is week, with only mnist and cifar results it's not convincing to the community whether this method is general. " ]
[ 4, 4, 4 ]
[ 4, 4, 4 ]
[ "iclr_2018_SJD8YjCpW", "iclr_2018_SJD8YjCpW", "iclr_2018_SJD8YjCpW" ]
iclr_2018_ByL48G-AW
Simple Nearest Neighbor Policy Method for Continuous Control Tasks
We design a new policy, called a nearest neighbor policy, that does not require any optimization for simple, low-dimensional continuous control tasks. As this policy does not require any optimization, it allows us to investigate the underlying difficulty of a task without being distracted by optimization difficulty of a learning algorithm. We propose two variants, one that retrieves an entire trajectory based on a pair of initial and goal states, and the other retrieving a partial trajectory based on a pair of current and goal states. We test the proposed policies on five widely-used benchmark continuous control tasks with a sparse reward: Reacher, Half Cheetah, Double Pendulum, Cart Pole and Mountain Car. We observe that the majority (the first four) of these tasks, which have been considered difficult, are easily solved by the proposed policies with high success rates, indicating that reported difficulties of them may have likely been due to the optimization difficulty. Our work suggests that it is necessary to evaluate any sophisticated policy learning algorithm on more challenging problems in order to truly assess the advances from them.
rejected-papers
Evaluating simple baselines for continuous control is important, and nearest neighbor search methods are interesting. However, the reviewers think that the paper lacks citation of and comparison to some prior work, as well as evaluation on more challenging benchmarks.
train
[ "r1yRvu84z", "H1cX_a21z", "BkVx4mcez", "H1q18tjxM", "ryjPFIp7f", "SkIkY8pQM", "BJEAOU6Xf", "HyHjM9BmG", "B1NqTmtgz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "public", "public" ]
[ "Thanks for the author's response. As with the other reviewers, I continue to believe this is more suited for a workshop submission.\n\nAs I cited in my review (and hopefully this also addresses the follow-up comment), I don't believe there are recent, accepted papers which only use these simple tasks (except for some theory focused papers). The fact that many empirical results use some simple tasks is true, but they also test against a number of other more complex tasks, blunts the primary argument of this work so I will leave my rating.", "SUMMARY\nThe paper deal with the problem of RL. It proposes a non-parametric approach that maps trajectories to the optimal policy. It avoids learning parameterized policies. The fundamental idea is to store passed trajectories. When a policy is to be executed, it does nearest neighbor search to find then closest trajectory and executes it.\n\nCOMMENTS\n\nWhat happens if the agent finds it self in a state that while is close to a state in the similar trajectory the action required to could be completely different.\n\nNot certain about the claim that standard RL policy learning algorithms make it difficult to assess the difficulty of a problem. \n\nHow do you execute a trajectory? Actions in RL are by definition stochastic, and this would make it unlikely that a same trajectory can be reproduced exactly.\n", "This work shows that a simple non-parametric approach of storing state embeddings with the associated Monte Carlo returns is sufficient to solve several benchmark continuous control problems with sparse rewards (reacher, half-cheetah, double pendulum, cart pole) (due to the need to threshold a return the algorithms work less well with dense rewards, but with the introduction of a hyper-parameter is capable of solving several tasks there). The authors argue that the success of these simple approaches on these tasks suggest that more changing problems need to be used to assess new RL algorithms.\n\nThis paper is clearly written and it is important to compare simple approaches on benchmark problems. There are a number of interesting and intriguing side-notes and pieces of future work mentioned.\n\nHowever, the originality and significance of this work is a significant drawback. The use non-parametric approaches to the action-value function go back to at least [1] (and probably much further). So the algorithms themselves are not particularly novel, and are limited to nearly-deterministic domains with either single sparse rewards (success or failure rewards) or introducing extra hyper-parameters per task.\n\nThe significance of this work would still be quite strong if, as the author's suggest, these benchmarks were being widely used to assess more sophisticated algorithms and yet these tasks were mastered by such simple algorithms with no learnable parameters. Yet, the results do not support the claim. Even if we ignore that for most tasks only the sparse reward (which favors this algorithm) version was examined, these author's only demonstrate success on 4, relatively simple tasks.\n\nWhile these simple tasks are useful for diagnostics, it is well-known that these tasks are simple and, as the author's suggest \"more challenging tasks .... are necessary to properly assess advances made by sophisticated, optimization-based policy algorithms.\" Lillicrap et al. (2015) benchmarked against 27 tasks, Houtfout et al. 
(2016) compared in the paper also used Walker2D and Swimmer (not used in this paper) as did [2], OpenAI Gym contains many more control environments than the 4 solved here and significant research is pursing complex manipulation and grasping tasks (e.g. [3]). This suggests the author's claim has already been widely heeded and this work will be of limited interest.\n\n[1] Juan, C., Sutton, R. S., & Ram, A. Experiments with Reinforcement Learning in Problems with Continuous State and Action Spaces.\n\n[2] Henderson, P., Islam, R., Bachman, P., Pineau, J., Precup, D., & Meger, D. (2017). Deep reinforcement learning that matters. arXiv preprint arXiv:1709.06560.\n\n[3] Nair, A., McGrew, B., Andrychowicz, M., Zaremba, W., & Abbeel, P. (2017). Overcoming exploration in reinforcement learning with demonstrations. arXiv preprint arXiv:1709.10089.", "This paper presents a nearest-neighbor based continuous control policy. Two algorithms are presented: NN-1 runs open-loop trajectories from the beginning state, and NN-2 runs a state-condition policy that retrieves nearest state-action tuples for each state. \n\nThe overall algorithm is very simple to implement and can do reasonably well on some simple control tasks, but quickly gets overwhelmed by higher-dimensional and stochastic environments. It is very similar to \"Learning to Steer on Winding Tracks Using Semi-Parametric Control Policies\" and is effectively an indirect form of tile coding (each could be seen as a fixed voronoi cell). I am sure this idea has been tried before in the 90s but I am not familiar enough with all the literature to find it (A quick google search brings this up: Reinforcement Learning of Active Recognition Behaviors, with a chapter on nearest-neighbor lookup for policies: https://people.eecs.berkeley.edu/~trevor/papers/1997-045/node3.html).\n\nAlthough I believe there is work to be done in the current round of RL research using nearest neighbor policies, I don't believe this paper delves very far into pushing new ideas (even a simple adaptive distance metric could have provided some interesting results, nevermind doing a learned metric in a latent space to allow for rapid retrainig of a policy on new domains....), and for that reason I don't think it has a place as a conference paper at ICLR. I would suggest its submission to a workshop where it might have more use triggering discussion of further work in this area.", "\"What happens if the agent finds itself in a state that while is close to a state in the similar trajectory the action required to could be completely different.\"\n\nThis would be indeed an issue with any nearest neighbor based policy, and the two variants in our submission would also suffer from this issue. However, we'd like to point out that the success of these variants on the four out of five tasks we've tried in this submission suggests that such a situation does not naturally/frequently occur in these tasks, leading us to conclude that these tasks are not adequate for evaluating any sophisticated policy parametrizations nor learning algorithms. This is our main conclusion and motivation in this submission.\n\n\"Not certain about the claim that standard RL policy learning algorithms make it difficult to assess the difficulty of a problem.\"\n\nWe believe we have explained it at the end of Sec. 2. It would be better if you could share with us which part of our explanation you find less certain of. We would like to improve our submission.\n\n\"How do you execute a trajectory?\"\n\nAs stated in Sec. 
3, \"[o]nce the trajectory has been retrieved, the NN-1 policy executes it by adding noise ε to each retrieved action\".\n\n\"Actions in RL are by definition stochastic\"\n\nWe also do not believe this is necessarily true. In the case of deterministic policy, actions are stochastically corrupted during training for exploration, but the underlying policy could as well be deterministic.\n", "Thanks for pointing out earlier works! We should have and will definitely cite these work later in a revision. \n\nIndeed those works you pointed out evaluated their approaches on a broader set of tasks, but we would like to point out that the five tasks tested in our submission are indeed popular and widely used. It is in our plan to broaden a set of target tasks for which we evaluate these simple nearest neighbour based policies, and we will release our code to make it easier for anyone in the community to evaluate any new (or existing) task with this minimal approach to assess its difficulty more easily in the future.\n", "Thanks for pointing out some earlier works! We will cite those and probably more earlier works later in a revision. \n\nWe'd like to re-emphasize that the nearest neighbour policy itself is not the point of our submission. Indeed, we did design two particular instantiations of a nearest neighbour based policy family, but the main goal was to see whether these popular existing tasks are worth benchmarks for assessing increasingly many variants of sophisticated neural net based policy algorithms, as we stated in the conclusion (though, we agree we should make it much clearer): \"the perceived difficulty of these benchmark problems has largely been dominated by optimization difficulty rather than the actual, underlying difficulty of the environment and task\" and \"we conclude that more challenging tasks, or more diverse settings of existing benchmark tasks, are necessary to properly assess advances.\"", "Disclosure: not an author on this paper\n\nWhile I welcome the comment that only a limited number of tasks have been studied in this paper, I would like to ask if the same standards are adhered more broadly in the community. While the reviewers seem to acknowledge that these tasks are only toy examples, there seems to be a flurry of papers that propose and evaluate *novel* algorithms only on these tasks. There seems to be a big discrepancy here, with these novel approaches trying to kill a mosquito with a revolver.\n\nI would like to raise a more general question to both this reviewer and the program committee whether they would stick to these standards more broadly. My extrapolation for \"fairness\" from the reviewer's comments is that papers that propose new algorithms but demonstrate capabilities only on tasks that can be solved with simple look-up approaches or linear policies should be automatically down-weighted heavily. When new tasks are used, naturally they must be put to the same test to assess if simpler approaches and architectures can solve them. This is unfortunately not the case in the community and peer-review process. 
Does the reviewer or PC have any suggestions on how to normalize these issues?", "I enjoyed reading the paper!\nOne caveat with the given approach is that the distance metric becomes very important.\nAs the motion tasks become more complex, it may require custom distance metrics\nfor different motion phases or state-space regions.\n\nI believe that the general idea has connections to \"habit based learning\", i.e., see\n Habits, action sequences and reinforcement learning (2012)\n https://pdfs.semanticscholar.org/ed15/6c39a0d3a5f58660b571decbf3f46da5d752.pdf\n\nSee also the 2012 paper \"Optimal isn't good enough\" by Loeb,\nwhich places an emphasis on related ideas of memory-based lookup \"learning\"\n(and an alternate philosophical point of view to optimization).\n\nLastly, the following papers demonstrate the efficacy of simple nearest-neighbor control policies,\nusing only 6-20 points to represent the entire control policy. Caveat: this is more of a parametric policy,\ngiven that policy search is used to optimize these small set of \"representative states and actions\".\n\nhttp://www.cs.ubc.ca/~van/papers/2005-icra-steering.pdf\nhttp://www.cs.ubc.ca/~van/papers/2005-icra-walking.pdf\n\nbest wishes with this work.\nMichiel" ]
[ -1, 4, 4, 3, -1, -1, -1, -1, -1 ]
[ -1, 5, 4, 5, -1, -1, -1, -1, -1 ]
[ "SkIkY8pQM", "iclr_2018_ByL48G-AW", "iclr_2018_ByL48G-AW", "iclr_2018_ByL48G-AW", "H1cX_a21z", "BkVx4mcez", "H1q18tjxM", "BkVx4mcez", "iclr_2018_ByL48G-AW" ]
iclr_2018_rkw-jlb0W
Deep Lipschitz networks and Dudley GANs
Generative adversarial networks (GANs) have enjoyed great success, but often suffer from instability during training, which motivates many attempts to resolve this issue. A theoretical explanation for the cause of instability is provided in Wasserstein GAN (WGAN), and the Wasserstein distance is proposed to stabilize the training. Though WGAN is indeed more stable than previous GANs, it takes many more iterations and much more time to train. This is because the ways to ensure the Lipschitz condition in WGAN (such as weight-clipping) significantly limit the capacity of the network. In this paper, we argue that it is beneficial to ensure the Lipschitz condition as well as to maintain sufficient capacity and expressiveness of the network. To facilitate this, we develop both theoretical and practical building blocks, using which one can construct different neural networks using a large range of metrics, as well as ensure the Lipschitz condition and sufficient capacity of the networks. Using the proposed building blocks, and a special choice of metric called the Dudley metric, we propose Dudley GAN, which outperforms the state of the art in both convergence and sample quality. We discover a natural link between Dudley GAN (and its extension) and empirical risk minimization, which gives rise to generalization analysis.
rejected-papers
Dear authors, While the reviewers appreciated your analysis, they all expressed concerns about the significance of the paper. Indeed, given the plethora of GAN variants, it would have been good to get stronger evidence about the advantages of the Dudley GAN. Even though I agree it is difficult to provide a clean comparison between generative models because of the lack of clear objectives, the LL on one dataset and images generated is limited. For instance, it would have been nice to show robustness results as this is a clear issue with GANs.
train
[ "SyXkyOqJz", "H17IrS0lz", "BkYfM_Rgz", "S1TXw7EfG", "BJV5PQ4MM", "r1f5sXVMG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "Ensuring Lipschitz condition in neural nets is essential of stablizing GANs. This paper proposes two contraint-based optimzation to ensure the Lips condtions , and these proposed approaches maintain suffcient capacity, as well as expressiveness of the network. A simple theoritical result is given by emprical risk minimization. The content of this paper\nis written clearly, and there are certain contribution and orginality in the literature. However, I am not sure that the novelty is\nsignificant, since I think that the idea of proposing their optimization is trival. Here I am concerned with the following two questions:\n(1) How to parameterize the function space of f_w or h_w, since they are both multivariate and capacity of the network will be\nreduced if the used way of parametering functions is adopted inappropriatily.\n(2) The theoretical result in (4) doesnot contain the information of Rademacher complexity, and it may be suboptimal in some sense. Besides, the parameter $\\gamma$ appears in the discriminator, which contradicts its role on the contraint of functions space.", "The authors propose a different type of GAN--the Dudley GAN--that is related to the Dudley metric. In fact, it is very much like the WGAN, but rather than just imposing the function class to have a bounded gradient, they also impose it to be bounded itself. This is argued to be more stable than the WGAN, as gradient clipping is said not necessary for the Dudley GAN. The authors empirically show that the Dudley GAN achieves a greater LL than WGAN for the MNIST and CIFAR-10 datasets.\n\nThe main idea [and its variants] looks solid, but with the plethora of GANs in the literature now, after reading I'm still left wondering why this GAN is significantly better than others [BEGAN, WGAN, etc.]. It is clear that imposing the quadratic penalty in equation (3) is really the same constraint as the Dudley norm? The big contribution of the paper seems to be that adding some L_inf regularization to the function class helps preclude gradient clipping, but after reading I'm unsure why this is \"the right thing\" to do in this case. We know that convergence in the Wasserstein metric is stronger than the Dudley metric, so why is using the weaker metric overweighed by the benefits in training?\n\nNits: Since the function class is parameterized by a NN, the IPM is not actually the Dudley metric between the two distributions. One would have to show that the NN is dense in Dudley unit ball w.r.t. L_inf norm, but this sort of misnaming had started with the \"Wasserstein\" GAN.", "It is clear that the problem studied in this paper is interesting. However, after reading through the manuscript, it is not clear to me what are the real contributions made in this paper. I also failed to find any rigorous results on generalization bounds. In this case, I cannot recommend the acceptance of this paper. ", "Thanks for the review. In this paper, we provided the necessary conditions for a Lipschitz neural nets which have applications beyond GANs. In addition, we believe we provided the tools for building variants of IPM-based GANs. We believe our approach is simple to implement and understand and provided better capacity in the discriminator so that the generator can produce better images. In addition, this Lipschitz neural network can be used in WGANs instead of weight clipping. Weight-clipping significantly limits the capacity of the network which leads to worst quality for the samples generated using WGAN. 
For GANs the discriminator has to be controlled so that it improves gradually. If it learns to discriminate very fast, then the generator will not be able to produce good quality images. We propose to use L_inf norm corresponding to using Dudley metric in the discriminator (instead of just limiting the weights). This regularization in addition to the Lipschitz neural network we introduced allows for better quality images to be produced faster compared to WGAN. We believe, the divergence measure we employed in our paper is more suitable for GANs than other alternatives. It is expressive that allows to learn good generators and not too restrictive to decrease the convergence speed.\nMoreover, since we use bounded functions, we can devise the generalization bound in the paper. This bound (that has never been studied or used before) suggests the relation between the discriminator’s performance with respect to the margin and the input bound. \nWe appreciate it if the reviewer could provide more information on which part of our contributions are not clear enough so that we provide better explanation.\n", "Thanks for the review. Firstly, we believe we provided the tools for building variants of IPM-based GANs. We believe our approach is simple to implement and understand and provided better capacity in the discriminator so that the generator can produce better images. We argue that weight clipping is definitely not the best way to approach Lipschitz condition in neural networks (something that authors point out in WGAN paper too). Having a bounded output also allows us to analyse the model from the complexity theory point of view as well—something that has never been investigated before in the GAN community. With Dudley metric, we need the Lipchitz function to be bounded and the norm introduced by it to be smaller than a constant. \nSince we use a Lipschitz neural network and use the (soft) penalty to ensure the function value remains bounded we believe we have the correct norm. This norm however is not a constant value throughout training.\nMoreover, we believe our contributions are beyond just the regularization: we showed a simple regularization combined with principles for building a Lipschitz neural network can overcome the problems with quality, speed of convergence and expressiveness of WGAN. We should use a bounded continuous function, we can draw parallels to complexity theory and show how GANs can be interpreted as ERM with a bound on the probability of discrimination between real and fake samples. We also showed for the same divergence measure and same neural net structure, simple change in the output of the network will change the quality of samples and convergence properties (tanh is slower but can be more stable). \nWe agree that the convergence in Dudley is weaker than the Wasserstein metric, however in the context of GANs this is not a concern because we learn the generator and we need to think of the interplay between the generator we learn and the divergence we consider. With GANs, since we learn the generator functions as well strong convergence is not necessarily what we are interested in. It is already shown that the weak convergence for GANs is more favourable because it stops the discriminator from saturation*. In addition, with GANs the convergence is not calculated using the expectation of the function with respect to the true measure (we only use a subset of observations to estimate the empirical mean). 
As such, the weaker convergence does not necessarily imply poorer performance in practice. There are other factors that have to be considered to have a stable training that converges and produces real looking samples. In addition, for the WGAN, the weights are extremely limited that will cause slow convergence. In our experiments, we observe faster convergence compared to WGAN. As an example, total variation which provides strong convergence has not been very successful in its applications to GANs. We believe the constrains we introduced to the discriminator in Dudley GAN is sufficient enough to deter the discriminator from fast convergence while being expressive enough to capture the complexity of the data (unlike weight clipping in WGAN).\n*: Approximation and Convergence Properties of Generative Adversarial Learning: https://arxiv.org/pdf/1705.08991.pdf\n", "Thanks for the review. For (1) we should note all the comparisons are for the same parametric function (a neural network as the universal estimator for the generator/discriminator). If there are not enough parameters in the network, the capacity of the network won’t be enough to learn an appropriate discriminator/generator. In the same scenario our approach provides more capacity and better expressiveness for the discriminator. As shown in Figure 3, this leads to larger variance (higher diversity) in the generated samples.\nFor (2), the second term in Equation 4 is an upper bound on the Rademacher complexity. Instead if Rademacher bound is used in Equation 4, the bound becomes tighter (Rademacher complexity is intractable in practice). Thanks for pointing out the typo in reference to $\\gamma$: it is (\\gamma-yf(x)) in Equation 4." ]
[ 8, 5, 5, -1, -1, -1 ]
[ 4, 3, 1, -1, -1, -1 ]
[ "iclr_2018_rkw-jlb0W", "iclr_2018_rkw-jlb0W", "iclr_2018_rkw-jlb0W", "BkYfM_Rgz", "H17IrS0lz", "SyXkyOqJz" ]
iclr_2018_SJtChcgAW
Cheap DNN Pruning with Performance Guarantees
Recent DNN pruning algorithms have succeeded in reducing the number of parameters in fully connected layers often with little or no drop in classification accuracy. However most of the existing pruning schemes either have to be applied during training or require a costly retraining procedure after pruning to regain classification accuracy. In this paper we propose a cheap pruning algorithm based on difference of convex (DC) optimisation. We also provide theoretical analysis for the growth in the Generalisation Error (GE) of the new pruned network. Our method can be used with any convex regulariser and allows for a controlled degradation in classification accuracy while being orders of magnitude faster than competing approaches. Experiments on common feedforward neural networks show that for sparsity levels above 90% our method achieves 10% higher classification accuracy compared to Hard Thresholding.
rejected-papers
Dear authors, While the reviewers appreciated the idea, the significant loss of accuracy was a concern. Even though you made significant changes to the submission, it is unfortunately unrealistic to ask the reviewers to do another review of a heavily modified version in such a short amount of time. Thus, I cannot accept this paper for publication but I encourage you to address the reviewers' concerns and resubmit at a later conference.
train
[ "Sk4UIHOlM", "rybUQFOgf", "Skse_Ydxz", "rJmwxI6XG", "HkemB8pQM", "SJu6kG_zM", "HkxGjb_zz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "The manuscript mainly presents a cheap pruning algorithm for dense layers of DNNs. The proposed algorithm is an improvement of Net-Trim (Aghasi et al., 2016), which is to enforce the weights to be sparse.\n\nThe main contribution of this manuscript is that the non-convex optimization problem in (Aghasi et al., 2016) is reformulated as a difference of convex (DC) problem, which can be solved quite efficiently using the DCA algorithm (Tao and An, 1997). The complexity of the proposed algorithm is much lower than Net-Trim and its fast version LOBS (Dong et al., 2017). The authors also analyze the generalization error bound of DNN after pruning based on the work of (Sokolic et al., 2017).\n\nAlthough this is an incremental work built upon (Aghasi et al., 2016) and an existing algorithm (Tao and An, 1997) is adopted for optimization, the contribution is valuable since the complexity is significantly reduced by utilizing the proposed difference of convex reformulation. Although the main idea is clearly presented, there are many syntax errors and I suggest the authors carefully checking the manuscript.\n\nPros:\n1.\tThe motivation is clear and the presented reformulation is reasonable.\n\n2.\tThe generalization error analysis and the conclusion of “layers closer to the input are exponentially less robust to pruning” is interesting.\n\nCons:\n1.\tThere are many syntax errors, e.g., “Closer to our approach recently in Aghasi et al. (2016) the authors”, “an cheap pruning algorithm”, etc. Besides, there is no discussion for the results in Table 1.\n\n2.\tAlthough the complexity of the proposed method is much lower than the compared approaches (Net-Trim and LOBS), there seems to be a large sacrifice on accuracy. For example, the accuracy drops from 95.2% to 91% compared with Net-Trim in the LeNet-5 model and from 80.5% to 74.6% compared with LOBS in the CifarNet model. The proposed method is only better than hard-thresholding.\n", "The problem of pruning DNNs is an active area of study.\nThis paper addresses this problem by posing the Net-trim objective function as a Difference of convex(DC) function. This allows for an immediate application of DC function minimization using existing techniques. An analysis of Generalization error \nis also given. \n\nThe main novelty seems to be the interesting connection to DC function minimization. The benefits seem to be a faster algorithm for pruning. \n\nAbout the generalization error the term C_2 needs to be more well defined otherwise the coefficient of A would be -ve which may lead to complications.\n\nExperimental investigations are reasonable and the results are convincing.\n\nA list of Pros:\n1. Interesting connection to DC function\n2. Attempt to analyze generalization error \n3. Faster speed of convergence empirically\n\nA list of Cons:\n1. The contribution in posing the objective as a DC function looks limited as it is very straightforward. Also the algorithm is \ndirect application\n2. The time complexity analysis is imprecise. 
Since the proposed algorithm is iterative time complexity would depend on the number of iterations.\n\n\n\n\n", "This paper casts the pruning optimization problem of NetTrim as a difference of convex problems, and uses DCA to obtain the smaller weight matrix; this algorithm is also analyzed theoretically to provide a bound on the generalization error of the pruned network.\n\nHowever, there are many questions that aren't answered in the paper that make it difficult to evaluate: in particular, some experimental results leave open more questions for performance analysis. \n\nQuality: of good quality, but incomplete.\nClarity: clear with some typos\nOriginality: a new approach to the NetTrim algorithm, which is somewhat original, and a new generalization bound for the algorithm.\nSignificance: somewhat significant.\n\nPROS\n- A very efficient algorithm for pruning, which can run orders of magnitude faster than the approaches that were compared to on certain architectures.\n- An interesting generalization bound for the pruned network which is in line experimentally with decreasing robustness to pruning on layers close to the input.\n\nCONS\n- Non-trivial loss of accuracy on the pruned network, which cannot be estimated for larger-scale pruning as the experiments only prune one layer.\n- No in-depth analysis of the generalization bound.\n\nMain questions:\n- You mention you use a variant of DCA: could you detail what differences Alg. 2 has with classical DCA?\n- Where do you use the 0-1 loss in Thm. 3.2?\n- I think your result in Theorem 3.2 would be significantly stronger if you could provide an analysis of the bound you obtain: in which cases can we expect certain terms to be larger or smaller, etc.\n- Your experiments in section 4.2 show a non-trivial degradation of the accuracy with FeTa. Although the time savings seem worth the tradeoff to prune *one* layer, have you run the same experiments when pruning multiple layers? Could you comment on how the accuracy evolves with multiple pruned layers?\n- It would be nice to see the curves for NetTrim and/or LOBS in Fig. 2.\n- Have you tried retraining the network after pruning? Did you observe the same behavior as mentioned in (Dong et al., 2017) and (Wolfe et al., 2017)? \n- It would be interseting to plot the theoretical (pessimistic) GE bound as well as the experimental accuracy degradation. \n\nNitpicks:\n-Ubiquitous (first paragraph)\n-difference of convex problemS\n- The references should be placed before the appendix.\n- The amount of white space should be reduced (e.g. around Eq. (1)).", "Thank you very much for your useful comments.\n\nOur GE depends on a lower bound for the margin parameter \\gamma. For a large enough C_2 we indeed have a lower bound on the margin \\gamma> A where A<0. However the bound on the GE then becomes vacuous as by definition gamma > 0. For consistency we have added the constraint that the base in the exponentiation needs to be positive, and thus the C_2 should be sufficiently small. Furthermore we have made a direct computation of the ratio between two GE bounds in section 4.3 which shows that the error C_2 is indeed small enough at least for the LeNet-5 architecture.\n\nConcerning the generality of our approach please note that our method can be applied with any convex regulariser, possibly with ones that do not aim at network compression, but for example for protection against adversarial examples. 
Also our theoretical analysis of the GE includes not only pruning but any type of bounded perturbation to one or multiple hidden layers.\n\nWe have also added a more detailed analysis of the computational complexity of our algorithm. This includes the iterations \"K\" required by the outer DCA algorithm, as well as the gradient evaluation number to reach an \\epsilon good solution for the inner stochastic descent algorithm.", "Thank you very much for your useful comments.\n\nWe have made a number of changes to the original submission. \n- We have applied a different optimisation scheme for the optimisation of the linearised objective, specifically Proximal SVRG with acceleration. This has improved the accuracy for the DNNs after pruning with the proposed algorithm.\n- We have made multilayer pruning experiments on the architectures tested originally.\n- We have generalised our theoretical analysis to pruning multiple hidden layers, and have tested the validity of our analysis through the direct computation of a ratio between two GEs.\n-We have also addressed other reviewer comments by providing additional analysis of our theoretical and experimental results, and fixing other minor issues.\n\nWe apologise for any whitespace or syntax errors, which have been corrected to the best of our ability, and kindly ask that the reviewers reconsider their decision.", "Thank you very much for your useful comments.\n\nWe are currently working to improve the submitted work. We have improved the accuracy of the pruned architectures by using a different optimisation during the DCA iterations, which allows us to reach better minima. Specifically we have used Proximal Stochastic Variance Reduction (Prox-SVRG) instead of Proximal Stochastic Gradient Descent (Prox-SGD). We are also working to remove any syntax or spelling errors. ", "Thank you very much for your useful comments. \n\n-The term \"variant\" is redundant and will be removed from the corrected version. It's usage was meant to convey that the minimisation of the linearised objective in the DCA iterations, is done using stochastic gradient descent. \n\n- The theoretical analysis of the generalisation error was based largely on Theorem 2 and Corollary 1 page 8 in [1] as well as Theorem 3 page 6 of [2]. It is in this last theorem that the assumption for the 0-1 loss is needed, the loss needs to be non-negative and upper bounded by a scalar M. We will restate all the relevant theorems in the Appendix for clarity, and provide some discussion on the proposed generalisation bound.\n\n-One cause for the non-trivial loss of accuracy is that we use Proximal Stochastic Gradient Descent for the optimisation. Prox-SGD fails to converge to a good solution within the given iterations. We propose to instead use Proximal Stochastic Variance Reduction (Prox-SVRG). This has so far improved our results. We are currently conducting more experiments to address this and other reviewer questions and to further validate our claims.\n\n-Please note that the GE bound depends on constants that are difficult to calculate for real data, such as the intrinsic data dimensionality \"k\". This in turn makes plotting the theoretical GE bound non-trivial. However the bound should give a good intuition about the behaviour of a DNN in relation to the margin \"gamma\" as the underlying assumptions that it makes have been tested empirically in [1] [3].\n\n[1] Sokolic, Jure, et al. 
\"Robust large margin deep neural networks.\" IEEE Transactions on Signal Processing (2017).\n[2] Xu, Huan, and Shie Mannor. \"Robustness and generalization.\" Machine learning 86.3 (2012): 391-423.\n[3] Sokolic, Jure, et al. \"Generalization Error of Invariant Classifiers.\" arXiv preprint arXiv:1610.04574 (2016)." ]
[ 6, 5, 5, -1, -1, -1, -1 ]
[ 3, 3, 4, -1, -1, -1, -1 ]
[ "iclr_2018_SJtChcgAW", "iclr_2018_SJtChcgAW", "iclr_2018_SJtChcgAW", "rybUQFOgf", "iclr_2018_SJtChcgAW", "Sk4UIHOlM", "Skse_Ydxz" ]
iclr_2018_Sy-tszZRZ
Bounding and Counting Linear Regions of Deep Neural Networks
In this paper, we study the representational power of deep neural networks (DNNs) that belong to the family of piecewise-linear (PWL) functions, based on PWL activation units such as rectifier or maxout. We investigate the complexity of such networks by studying the number of linear regions of the PWL function. Typically, a PWL function from a DNN can be seen as a large family of linear functions acting on millions of such regions. We directly build upon the work of Montúfar et al. (2014), Montúfar (2017), and Raghu et al. (2017) by refining the upper and lower bounds on the number of linear regions for rectified and maxout networks. In addition to achieving tighter bounds, we also develop a novel method to perform exact enumeration or counting of the number of linear regions with a mixed-integer linear formulation that maps the input space to the output. We use this new capability to visualize how the number of linear regions changes while training DNNs.
rejected-papers
Dear authors, The reviewers appreciated your work and recognized the importance of theoretical work for understanding the behaviour of deep nets. That said, the improvement over existing work (especially Montufar, 2017) is minor. This, combined with the limited appeal of such work, means that the paper will not be accepted. I acknowledge the major modifications made, but it is up to the reviewers to decide whether or not they agree to re-review a significantly updated version.
val
[ "SkfMvJqez", "SkSZLZ5gf", "r1knUinef", "BJpMV_p7G", "B126mvTXG", "SykyXwBmz", "HySqKpdZM", "H146OpdZf", "BkVEOadZM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author" ]
[ "This paper investigates the complexity of neural networks with piecewise linear activations by studying the number of linear regions of the representable functions. It builds on previous works Montufar et al. (2014) and Raghu et al. (2017) and presents improved bounds on the maximum number of linear regions. It also evaluates the number of regions of small networks during training. \n\nThe improved upper bound given in Theorem 1 appeared in SampTA 2017 - Mathematics of deep learning ``Notes on the number of linear regions of deep neural networks'' by Montufar. \n\nThe improved lower bound given in Theorem 6 is very modest but neat. Theorem 5 follows easily from this. \n\nThe improved upper bound for maxout networks follows a similar intuition but appears to be novel. \n\nThe paper also discusses the exact computation of the number of linear regions in small trained networks. It presents experiments during training and with varying network sizes. These give an interesting picture, consistent with the theoretical bounds, and showing the behaviour during training. \n\nHere it would be interesting to run more experiments to see how the number of regions might relate to the quality of the trained hypotheses. \n\n\n\n", "Paper Summary:\n\nThis paper looks at providing better bounds for the number of linear regions in the function represented by a deep neural network. It first recaps some of the setting: if a neural network has a piecewise linear activation function (e.g. relu, maxout), the final function computed by the network (before softmax) is also piecewise linear and divides up the input into polyhedral regions which are all different linear functions. These regions also have a correspondence with Activation Patterns, the active/inactive pattern of neurons over the entire network. Previous work [1], [2], has derived lower and upper bounds for the number of linear regions that a particular neural network architecture can have. This paper improves on the upper bound given by [2] and the lower bound given by [1]. They also provide a tight bound for the one dimensional input case. Finally, for small networks, they formulate finding linear regions as solving a linear program, and use this method to compute the number of linear regions on small networks during training on MNIST\n\nMain Comments:\nThe paper is very well written and clearly states and explains the contributions. However, the new bounds proposed (Theorem 1, Theorem 6), seem like small improvements over the previously proposed bounds, with no other novel interpretations or insights into deep architectures. (The improvement on Zaslavsky's theorem is interesting.) The idea of counting the number of regions exactly by solving a linear program is interesting, but is not going to scale well, and as a result the experiments are on extremely small networks (width 8), which only achieve 90% accuracy on MNIST. It is therefore hard to be entirely convinced by the empirical conclusions that more linear regions is better. 
I would like to see the technique of counting linear regions used even approximately for larger networks, where even though the results are an approximation, the takeaways might be more insightful.\n\nOverall, while the paper is well written and makes some interesting points, it presently isn't a significant enough contribution to warrant acceptance.\n\n[1] On the number of linear regions of Deep Neural Networks, 2014, Montufar, Pascanu, Cho, Bengio\n[2] On the expressive power of deep neural networks, 2017, Raghu, Poole, Kleinberg, Ganguli, Sohl-Dickstein", "This is quite an interesting paper. Thank you. Here are a few comments:\n\nI think this style of writing theoretical papers is pretty good, where the main text aims of preserving a coherent story while the technicalities of the proofs are sent to the appendix. \nHowever I would have appreciated a little bit more details about the proofs in the main text (maybe more details about the construct that is involved). I can appreciate though that this a fine line to walk. Also in the appendix, please restate the lemma that is being proven. Otherwise one will have to scroll up and down all the time to understand the proof. \n\nI think the paper could also discuss a bit more in detail the results provided. For example a discussion of how practical is the algorithm proposed for exact counting of linear regions would be nice. Though regardless, I think the findings speak for themselves and this seems an important step forward in understanding neural nets. \n\n****************\nI had reduced my score based on the observation made by Reviewer 1 regarding the talk Montufar at SampTA. Could the authors prioritize clarification to that point ! \n - Thanks for the clarification and adding this citation. ", "This is a full list of changes made to the paper, except for changes made for readability. The changes and their motivations are discussed in previous comments; this list is only for the convenience of the reviewers. \n\nBelow, Montufar2017 refers to Montufar's 2017 SampTA paper that contained the previous upper bound, and Arora2016 refers to Arora et al. 2016 (see reference in paper), which provides a different lower bound to the maximal number of regions.\n\n1. Introduction\n\n- Added to the discussion the Montufar2017 and Arora2016 papers.\n\n2. Notations and background\n\n- Before \"Main contributions\", all changes in this section are for readability, except for the addition of a citation to Montufar2017.\n\n- We revise \"Main contributions\" in light of the overall changes to the paper, which are discussed in a previous comment. The main addition is the highlighting of the insights for the large input dimension case.\n\n3. Tighter bounds for rectifier networks\n\n- In the beginning, we add previous bounds from Montufar2017 and Arora2016.\n\n3.1. An upper bound on the number of linear regions\n\n- A major change is the bound in Theorem 1. We tighten the upper bound result by considering dimensions more precisely.\n\n- We thoroughly discuss two insights from Theorem 1. One of them is the bottleneck effect, which was briefly considered in the first version. In this version, we discuss it more extensively, add some plots illustrating the effect, and prove supporting results in Appendix A. The second insight is the case where the input dimension is large, in which we compare shallow and deep networks. 
Once again, we discuss it extensively, provide plots, and prove results in Appendix A.\n\n- Following these discussions, most of the ingredients of the proof of Theorem 1 are the same. In particular, we merge the first version's Lemmas 3 and 4 into the current version's Lemma 3, but with a slightly different result that leads to the new proof. The current Lemma 4 is part of the new proof, which is completed in Appendix D. We have moved some of the shorter proofs from the appendix to the main text for ease of readability.\n\n3.2. The case of dimension one\n\n- Readability changes only.\n\n3.3. A lower bound on the maximal number of regions\n\n- We extend a lower bound from Arora2016 with the one-dimensional construction, as it was similarly done for the Montufar et al. lower bound.\n\n4. An upper bound on the number of linear regions for maxout networks\n\n- Modified wording due to changes in its relationship with Theorem 1.\n\n5. Exact counting of linear regions\n\n- We include a reference to Cheng et al. 2017, which considers a MIP formulation for a DNN in a different context.\n\n6. Experiments\n\n- Both experiments are replaced by experiments on larger networks, with width 10 instead of 8 in the first experiment and total number of neurons 22 instead of 16 in the second experiment. The upper bound in the plot is updated with the new one.\n\n- These networks approach test error of 6% in the first case and 5% in the second. \n\n- We also provide training and final errors in Appendix M counting runtimes in Appendix N.\n\n7. Discussion\n\n- The discussion was extensively revised, including the following changes:\n\n- We mention the finding on shallow networks with large input dimension.\n\n- We point out that the new version of Theorem 1 has particular depths maximizing the upper bound according to the input dimension and the total number of neurons, which could be investigated for the actual number of regions. This is illustrated with an example in Appendix O.\n\n- We also discuss other directions for future work in the last paragraph, including understanding the relation between number of regions, accuracy, and potential for overfitting.\n\n\nAppendices\n\nIn the new version, we have added appendices A, B, G, M, N, and O. Appendix A elaborates on the properties of the new version of Theorem 1. Appendix B shows that the maximal number of linear regions is also exponential for large input dimensions. Appendix G describes how the lower bound from Arora2016 can be extended with the new one-dimensional construction. Appendices M and N provide error measures and runtimes for the experiments. Appendix O provides additional plots based on Theorem 1.\n\nOther appendices have been changed: Appendix D (previously also D, Proof of Theorem 1) has been rewritten with the new proof. Appendix H (previously G, proof of the maxout upper bound) was slightly revised in light of the change in Theorem 1. Previous appendices B and C (in the first version) have been moved to the main text (and altered according to the new proof).\n", "We just uploaded a new version of the manuscript. We have backed some intuitions on the exponential growth of linear regions for large input dimension, improved the readability of the paper, and refined the discussion.", "We would like to specially thank the reviewers for constructive feedback and suggestions. This was extremely useful in revising the manuscript. We have tried to address all the concerns of the reviewers. 
In particular, we highlight the major changes in the revised submission.\n\n1) We managed to get a copy of Montufar’s 2017 SampTA paper through email communication. We thank Reviewer 1 for pointing this out, and we did observe that Theorem 1 in our original submission is also shown in his paper. Montufar generally suggests that the bound can be improved by looking more closely at dimensions, but he does not provide a way to do it. We had addressed this in the submitted version by studying the rank of weight matrices. While it is unfortunate that we did not find an online copy of this paper during our submission, on the positive side, this pushed us to further improve our upper bound in the revised manuscript. In particular, our bound is tighter now and also produces additional novel insights when the input dimension is large. We hope that this will satisfy the main concerns of Reviewer 1 and Reviewer 3.\n\n2) As suggested by Reviewer 2, we have added more intuitions for the theorems and lemmas in the paper. Using the revised bounds, we have made an interesting observation that is very different from previous results. In particular, the results in Montufar et al. 2014 show that, if the input dimension is constant, then the number of regions of deep networks is asymptotically larger than those of shallow (single-layer) networks. While most prior results assert the claim that more depth leads to better representational power, we have observed scenarios where shallow networks have larger number of linear regions, i.e., better representational power. Using our revised upper bound, we show that if the input dimension is large, then shallow networks have more regions than deep networks. More precisely, when the input dimension is higher than the total number of neurons, then a deep network of $L$ layers each having width $n$ has fewer linear regions compared to a shallow network that has a single layer of width $Ln$. Exact details are given in the paper and the appendix. Note that this result is particularly interesting, since it is different from prior results and cannot be obtained from earlier bounds. \n\n3) To address the concerns of Reviewer 2, we performed several additional experiments on improved MNIST networks with accuracy closer to 95%. Please note that before this work, the idea of linear regions for deep neural networks is only a theoretical concept. Although we only count on simple networks such as MNIST, it nevertheless reinforces the idea that such theoretical ideas can be validated in real experiments. Unfortunately, the exact counting for larger networks is infeasible using the current approach, but improvements or workarounds could be devised. For instance, we would like to thank Reviewer 2 for suggesting the problem of “approximate counting” of linear regions. This is a promising approach and it can be done as future work, as finding good approximations involves a separate line of research. We would like to note that the counting procedure serves to validate the bound and that it could also provide further insight for future work on tighter bounds. If and when these bounds get close enough, it might not be relevant to do exact counting anymore.\n\n4) As suggested by Reviewer 3, we have restated the lemmas/theorems in the Appendix. We have also added runtimes for the exact counting of different networks in the Appendix. 
\n\n5) As per the suggestion given by Reviewer 1, we now report and discuss experimental results on the relation between the number of linear regions and the quality of training. While we have anecdotal evidence from the plots that larger number of linear regions generally correspond to better accuracy, we believe that this requires a more thorough investigation. In particular, we believe that the quality of the training also depends on the shape of the linear regions, which is not represented by just the number of linear regions. Independently of the precise relationship, our procedure opens a new door to an extensive empirical investigation.\n\nThe paper length has increased to 9.5 pages (note that there is no actual page limit for this conference), but we could easily rearrange the contents based on the final recommendation of the reviewers and Area Chairs, if the paper gets accepted. We really appreciate your feedback and we would be happy to address any additional concerns that you may have.\n", "We are currently working on addressing all the comments of the reviewers. However, we would like to provide a brief update on the status and current progress on our part in addressing the concerns. \n\n1) \"[...] I would have appreciated a little bit more details about the proofs in the main text. [...] I think the paper could also discuss a bit more in detail the results provided.\" \n\nWe will improve the discussion and move some of the contents from the Appendix to the main section.\n\n\n2) \"[...] observation made by Reviewer 1 regarding the talk Montufar at SampTA.\"\n\nPlease see our answer to AnonReviewer1.", "We are currently working on addressing all the comments of the reviewers. However, we would like to provide a brief update on the status and current progress on our part in addressing the concerns. \n\n1) \"[...] The new bounds proposed (Theorem 1, Theorem 6), seem like small improvements over the previously proposed bounds, with no other novel interpretations or insights into deep architectures.\"\n\nA novel interpretation derived from Theorem 1 is on the relationship between the number of linear regions and the widths of the layers of the DNN. We emphasize that in Theorem 1, the summations depend on the minimum width across the previous layers. This yields the insight that the number of regions is affected by a bottleneck-like effect from earlier layers. In other words, the bound from Theorem 1 is smaller if the earlier layers are smaller rather than if the later layers are smaller, fixed the total size of the network. This is reflected in the upper bound plot in Figure 4(b) and further validated by the computational results shown in the same figure.\n\nIn addition, the insights behind Theorem 1 pave the road to Theorem 5, which exploits the dimensionality of the regions in order to achieve the exact maximal number of regions for the one-dimensional case.\n\nThe case of more dimensions has proven to be more challenging, as evidenced by previous papers on the topic, but Theorem 6 nevertheless achieves a modest improvement. It generalizes the insight of Theorem 5 to higher dimensions.\n\nWe will elaborate on this discussion in the paper.\n\n\n2) \"I would like to see the technique of counting linear regions used even approximately for larger networks, where even though the results are an approximation, the takeaways might be more insightful.\" \n\n\nWe agree with the reviewer that more insight could be obtained with larger networks. 
However, exact counting has never been done before and we are excited about this new capability. While this is not fully scalable in the current form, this serves as a proof-of-concept that already provides insights even at a small scale.\n\nNevertheless, as the reviewer correctly pointed out, moving towards larger networks may require approximations. While we have already been thinking about using approximations, this is a different line of research and may need substantial additional work.\n\n\n3) \"[...] as a result the experiments are on extremely small networks (width 8), which only achieve 90% accuracy on MNIST.\" \n\nIn order to partially address this concern, we are working on counting (possibly larger) networks with higher accuracy.", "We are currently working on addressing all the comments of the reviewers. However, we would like to provide a brief update on the status and current progress on our part in addressing the concerns. \n\n1) \"The improved upper bound given in Theorem 1 appeared in SampTA 2017 - Mathematics of deep learning \"Notes on the number of linear regions of deep neural networks\" by Montufar.\" \n\nWe were not aware of the paper in our original submission and we searched for it but it is not available online. We have emailed the author for a copy. As soon as we obtain one, we will clarify the relationship between both papers." ]
[ 6, 4, 6, -1, -1, -1, -1, -1, -1 ]
[ 5, 5, 3, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_Sy-tszZRZ", "iclr_2018_Sy-tszZRZ", "iclr_2018_Sy-tszZRZ", "iclr_2018_Sy-tszZRZ", "iclr_2018_Sy-tszZRZ", "iclr_2018_Sy-tszZRZ", "r1knUinef", "SkSZLZ5gf", "SkfMvJqez" ]
iclr_2018_H1l8sz-AW
Improving generalization by regularizing in L2 function space
Learning rules for neural networks necessarily include some form of regularization. Most regularization techniques are conceptualized and implemented in the space of parameters. However, it is also possible to regularize in the space of functions. Here, we propose to measure networks in an L2 Hilbert space, and test a learning rule that regularizes the distance a network can travel through L2-space each update. This approach is inspired by the slow movement of gradient descent through parameter space as well as by the natural gradient, which can be derived from a regularization term upon functional change. The resulting learning rule, which we call Hilbert-constrained gradient descent (HCGD), is thus closely related to the natural gradient but regularizes a different and more calculable metric over the space of functions. Experiments show that the HCGD is efficient and leads to considerably better generalization.
rejected-papers
Dear authors, Despite the desirable goal of moving away from regularization in parameter space toward regularization in function space, the reviewers all thought that the paper was not convincing enough, both in the choice of the particular regularization and in the experimental section. While I appreciate that you have done a major rework of the paper, the rebuttal period should not be used for that, and we cannot expect the reviewers to do a complete re-review of a new version. This paper thus cannot be accepted to ICLR.
train
[ "H1L5a2I1z", "S1-zlmikf", "HkI5OXsxz", "H1JU_P27M" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author" ]
[ "\nGENERAL IMPRESSION:\n\nOverall, the revised version of the paper is greatly improved. The new derivation of the method yields a much simpler interpretation, although the relation to the natural gradient remains weak (see below). The experimental evaluation is now far more solid. Multiple data sets and network architectures are tested, and equally important, the effect of parameter settings is investigated. I enjoyed the investigation of the effect of L_2 regularization on qualitative optimization behavior.\n\n\nCRITICISM:\n\nMy central criticism is that the introduction of the L_2 norm as a replacement of KL divergence is completely ad-hoc; how it is related to KL divergence remains unclear. It seems that other choices are equally well justified, including the L_2 norm in parameter space, which then defeats the central argument of the paper. I do believe that L_2 distance is more natural in function space than in parameter space, but I am missing a strict argument for this in the paper.\n\nAlthough related work is discussed in detail in section 1, it remains unclear how exactly the proposed algorithm overlaps with existing approaches. I am confident that it is easy to identify many precursors in the optimization literature, but I am not an expert on this. It would be of particular interest to highlight connections to algorithm regularly applied to neural network training. Adadelta, RMSprop, and ADAM are mentioned explicitly, but what exactly are differences and similarities?\n\nThe interpretation of figure 2 is off. It is deduced that HCGD generalizes better, however, this is the case only at the very end of training, while SGD with momentum and ADAM work far better initially. With the same plot one could sell SGD as the superior algorithm. Overall, also in the light of figure 4, the interpretation that the new algorithm results in better generalization seems to stand on shaky ground, since differences are small.\n\nI like the experiment presented in figure 5 in particular. It adds insights that are of value even if the method should turn out to have significant overlap with existing work (see above), and perform \"only\" on par with these: it adds an interesting perspective to the discussion of how network optimization \"works\", how it handles local optima and which role they play, and how the objective function landscape is \"perceived\" by different optimizers. This is where I learned something new.\n\n\nMINOR POINTS:\n\npage 5: \"the any\" (typo)\n\npage 5: \"ture\" -> \"true\" (typo)\n", "I have read comments and rebuttal - i do not have the luxury of time to read in depth the revision.\nIt seems that the authors have made an effort to accommodate reviewers' comments. I upgraded the rating.\n\n-----------------------------------------------------------------------------------------------------------------------\n\nSummary: The paper considers the use of natural gradients for learning. The added twist is the substitution of the KL divergence with the Wasserstein distance, as proposed in GAN training. The authors suggest that Wasserstein regularization improves generalization over SGD with a little extra cost.\n\nThe paper is structured as follows:\n1. KL divergence is used as a similarity measure between two distributions.\n2. Regularizing the objective with KL div. seems promising, but expensive.\n3. We usually approximate the KL div. with its 2nd order approximation - this introduces the Hessian of the KL divergence, known as Fisher information matrix.\n4. 
However, computing and inverting the Fisher information matrix is computationally expensive.\n5. One solution is to approximate the solution F^{-1} J using gradient descent. However, still we need to calculate F. There are options where F could be formed as the outer product of a collection gradients of individual examples ('empirical Fisher').\n6. This paper does not move towards Fisher information, but towards Wasserstein distance: after a \"good\" initialization via SGD is obtained, the inner loop continues updating that point using the Wasserstein regularized objective. \n7. No large matrices need to be formed or inverted, however more passes needed per outer step.\n\nImportance:\nSomewhat lack of originality and poor experiments lead to low importance.\n\nClarity:\nThe paper needs major revision w.r.t. presenting and highlighting the new main points. E.g., one needs to get to page 5 to understand that the paper is just based on the WGAN ideas in Arjovsky et al., but with a different application (not GANS).\n\nOriginality/Novelty:\nThe paper, based on WGAN motivation, proposes Wasserstein distance regularization over KL div. regularization for training of simple models, such as neural networks. Beyond this, the paper does not provide any futher original idea. So, slight to no novelty.\n\nMain comments:\n1. Would the approximation of C_0 by its second-order Taylor expansion (that also introduces a Hessian) help? This would require the combination of two Hessian matrices.\n\n2. Experiments are really demotivating: it is not clear whether using plain SGD or the proposed method leads to better results. \n\nOverall:\nRejection.\n", "The paper presents an additive regularization scheme to encourage parameter updates that lead to small changes in prediction (i.e. adjusting updates based on their size in the output space instead of the input space). This goal is to achieve a similar effect to that of natural gradient, but with lighter computation. The authors claim that their regularization is related to Wasserstein metric (but the connection is not clear to me, read below). Experiments on MNIST with show improved generalization (but the baseline is chosen poorly, read below).\n\nThe paper is easy to read and organized very well, and has adequate literature review. However, the contribution of the paper itself needs to be strengthened in both the theory and empirical sides.\n\nOn the theory side, the authors claim that their regularization is based on Wasserstein metric (in the title of the paper as well as section 2.2). However, this connection is not very clear to me [if there is a rigorous connection, please elaborate]. From what I understand, the authors argue that their proposed loss+regularization is equivalent to the Kantorovich-Rubinstein form. However, in the latter, the optimization objective is the f itself (sup E[f_1]-E[f_2]) but in your scheme you propose adding the regularization term (which can be added to any objective function, and then the whole form loses its connection to Wasserstrin metric).\n\nOn the practical side, the chosen baseline is very poor. The authors only experiment with MNIST dataset. The baseline model lacks both \"batch normalization\" and \"dropout\", which I guess is because otherwise the proposed method would under-perform against the baseline. It is hard to tell if the proposed regularization scheme is something significant under such poorly chosen baseline.\n", "Based on the reviewers' responses, we have opted for a major rewrite of this project. 
The major changes include a change of theoretical justification, empirical analyses of this justification, additional optimization experiments, and a rework of the text (including the title and abstract).\n\n1.\nThe largest change is a shift away from the framework of Wasserstein distance in favor of one that establishes a metric of function distance in an L^2 Hilbert space, and regularizes that distance throughout learning. We believe this framework better emphasizes our overall message, which is that it is important to consider the behavior of optimizers in function space, not just parameter space.\n\nAs reviewer 1 noted, our original claim of regularizing the Wasserstein distance of the output distribution was not quite correct. We were limiting the change on specific examples, rather than between two distributions. After some thought, we realized that the expected norm of the change in output on specific examples is equivalent to an L^2 distance in a Hilbert space. A better description of our algorithm, then, is that it performs gradient descent through this L^2 function space. \n\nThis conceptual change required that we re-name the learning rule. The title, abstract, and algorithm description reflect this change. \n\n2. \nThe foundation of this work is a shift from thinking in parameter space to thinking in functional space. Previously, we did not adequately highlight or discuss this shift. We included a set of plots designed to evaluate the typical motion of a network through both parameter space and function space. These plots serve to justify an L^2 function space as a useful one for analysis.\n\n3.\nAll three reviewers were unsatisfied with the small number of optimization experiments. We included more experiments, including those using techniques like momentum, batch normalization, and alternative architectures.\n" ]
[ 6, 5, 4, -1 ]
[ 3, 4, 3, -1 ]
[ "iclr_2018_H1l8sz-AW", "iclr_2018_H1l8sz-AW", "iclr_2018_H1l8sz-AW", "iclr_2018_H1l8sz-AW" ]
iclr_2018_ry831QWAb
BLOCK-NORMALIZED GRADIENT METHOD: AN EMPIRICAL STUDY FOR TRAINING DEEP NEURAL NETWORK
In this paper, we propose a generic and simple strategy for utilizing stochastic gradient information in optimization. The technique essentially contains two consecutive steps in each iteration: 1) computing and normalizing each block (layer) of the mini-batch stochastic gradient; 2) selecting an appropriate step size to update the decision variable (parameter) towards the negative of the block-normalized gradient. We conduct extensive empirical studies on various non-convex neural network optimization problems, including multilayer perceptrons, convolutional neural networks and recurrent neural networks. The results indicate that the block-normalized gradient can help accelerate the training of neural networks. In particular, we observe that the normalized gradient methods with a constant step size and occasional decay, such as SGD with momentum, perform better on deep convolutional neural networks, while those with adaptive step sizes, such as Adam, perform better on recurrent neural networks. Besides, we also observe that this line of methods can lead to solutions with better generalization properties, which is confirmed by the performance improvement over strong baselines.
rejected-papers
The paper proposes to study the impact of normalizing the gradient for each layer before applying existing techniques such as SGD + momentum, Adam or AdaGrad. The study is done on a reasonable number of datasets and, after the reviewers' comments, confidence intervals have been added, although Table 1 puts results in bold even though many of these results are not statistically significant. The paper, however, lacks a proper analysis of the results. Two main things could be improved: - Normalization does not always have the same effect, but the reasons for this are not discussed. This need not be done theoretically, but a more thorough analysis would have been appreciated. - There is no hyperparameter tuning, which means that the results are heavily dependent on which hyperparameters were chosen. Thus, it is hard to draw any conclusion. Regarding the seemingly conflicting remarks of the two reviewers, it all depends on what the paper is trying to achieve. If it tries to show that it is state-of-the-art, then comparing to state-of-the-art algorithms on every dataset is crucial. If it tries to study the impact of one specific change, in this case layer-wise gradient normalization, on the optimization, then comparing to the vanilla version is fine. The paper seems to address the latter, so it is OK if it is not compared to all the state-of-the-art algorithms. However, proper tuning of existing methods is still required. Ultimately, a better understanding of layer-wise normalization could be useful, but the paper is not convincing enough to provide that understanding. There is no need to increase the number of datasets; it should rather focus on designing setups to test and validate hypotheses.
train
[ "BkTXGMKlf", "H1O8NOKeM", "HJl9PL1zM", "r1Na9Th7z", "By48Tpn7z", "SyqMa63mM", "S1YM2p2mz", "HJzDuph7f" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author" ]
[ "This paper proposes a family of first-order stochastic optimization schemes based on (1) normalizing (batches of) stochastic gradient descents and (2) choosing from a step size updating scheme. The authors argue that iterative first-order optimization algorithms can be interpreted as a choice of an update direction and a step size, so they suggest that one should always normalize the gradient when computing the direction and then choose a step size using the normalized gradient. \n\nThe presentation in the paper is clear, and the exposition is easy to follow. The authors also do a good job of presenting related work and putting their ideas in the proper context. The authors also test their proposed method on many datasets, which is appreciated.\n\nHowever, I didn't find the main idea of the paper to be particularly compelling. The proposed technique is reasonable on its own, but the empirical results do not come with any measure of statistical significance. The authors also do not analyze the sensitivity of the different optimization algorithms to hyperparameter choice, opting to only use the default. Moreover, some algorithms were used as benchmarks on some datasets but not others. For a primarily empirical paper, every state-of-the-art algorithm should be used as a point of comparison on every dataset considered. These factors altogether render the experiments uninformative in comparing the proposed suite of algorithms to state-of-the-art methods. The theoretical result in the convex setting is also not data-dependent, despite the fact that it is the normalized gradient version of AdaGrad, which does come with a data-dependent convergence guarantee.\n\nGiven the suite of optimization algorithms in the literature and in use today, any new optimization framework should either demonstrate improved (or at least matching) guarantees in some common (e.g. convex) settings or definitively outperform state-of-the-art methods on problems that are of widespread interest. Unfortunately, this paper does neither. \n\nBecause of these points, I do not feel the quality, originality, and significance of the work to be high enough to merit acceptance. \n\nSome specific comments:\np. 2: \"adaptive feature-dependent step size has attracted lots of attention\". When you apply feature-dependent step sizes, you are effectively changing the direction of the gradient, so your meta learning formulation, as posed, doesn't make as much sense.\np. 2: \"we hope the resulting methods can benefit from both techniques\". What reason do you have to hope for this? Why should they be complimentary? Existing optimization techniques are based on careful design and coupling of gradients or surrogate gradients, with specific learning rate schedules. Arbitrarily mixing the two doesn't seem to be theoretically well-motivated.\np. 2: \"numerical results shows that normalized gradient always helps to improve the performance of the original methods when the network structure is deep\". It would be great to provide some intuition for this. \np. 2: \"we also provide a convergence proof under this framework when the problem is convex and the stepsize is adaptive\". The result that you prove guarantees a \\theta(\\sqrt{T}) convergence rate. On the other hand, the AdaGrad algorithm guarantees a data-dependent bound that is O(\\sqrt{T}) but can also be much smaller. This suggests that there is no theoretical motivation to use NGD with an adaptive step size over AdaGrad.\np. 
2-3: \"NGD can find a \\eps-optimal solution....when the objective function is quasi-convex. ....extended NGD for upper semi-continuous quasi-convex objective functions...\". This seems like a typo. How are results that go from quasi-convex to upper semi-continuous quasi-convex an extension?\np. 3: There should be a reference for RMSProp.\np. 3: \"where each block of parameters x^i can be viewed as parameters associated to the ith layer in the network\". Why is layer parametrization (and later on normalization) a good way idea? There should be either a reference or an explanation.\np. 4: \"x=(x_1, x_2, \\ldots, x_B)\". Should these subscripts be superscripts?\np. 4: \"For all the algorithms, we use their default settings.\" This seems insufficient for an empirical paper, since most problems often involve some amount of hyperparameter tuning. How sensitive is each method to the choice of hyperparameters? What about the impact of initialization?\np. 4-8: None of the experimental results have error bars or any measure of statistical significance.\np. 5: \"NG... is a variant of the NG_{UNIT} method\". This method is never motivated.\np. 5-6: Why are SGD and Adam used for MNIST but not on CIFAR? \np. 5: \"we chose the best heyper-paerameter from the 56 layer residual network.\" Apart from the typos, are these parameters chosen from the training set or the test set? \np. 6: Why isn't Adam tested on ImageNet?\n\n \nPOST AUTHOR RESPONSE: After reading the author response and taking into account the fact that the authors have spent the time to add more experiments and clarify their theoretical result, I have decided to upgrade my score from a 3 to a 4. However, I still do not feel that the paper is up to the standards of the conference. \n\n\n\n\n\n ", "This paper illustrates the benefits of using normalized gradients when training deep models.\nBeyond exploring the \"vanilla\" normalized gradient algorithm they also consider adaptive versions, i.e., methods that employ per block (adaptive) learning rates using ideas from AdaGrad and Adam.\nFinally, the authors provide a theoretical analysis of NG with adaptive step-size, showing convergence guarantees in the stochastic convex optimization setting.\n\nI find this paper both very interesting and important. \nThe normalized gradient method was previously shown to overcome some non-convex phenomena which are hurdles to SGD, yet there was still the gap of combining NG with methods which automatically tune the learning rate.\n\nThe current paper addresses this gap by a very simple (yet clever) combination of NG with AdaGrad and Adam, and the authors do a great job by illustrating the benefits of their scheme by testing it over a very wide span of deep learning \nmodels. In light of their experiments it seems like AdamNG and NG should be adopted as the new state-of-the-art methods in deep-learning applications.\n\nAdditional comments:\n-In the experiments the authors use the same parameters as is used by Adam/AdaGrad, etc..\nDid the authors also try to fine tune the parameters of their NG versions? If so what is the benefit that they get by doing so?\n-It will be useful if the authors can provide some intuition about why is the learning rate chosen per block for NG?\nDid the authors also try to choose a learning rate per weight vector rather than per block? If so, what is the behaviour that they see.\n-I find the theoretical analysis a bit incomplete. The authors should spell out the choice of the learning rate in Thm. 
1 and compare to AdaGrad.\n", "This paper proposes a variation to the familiar AdaGrad/Adam/etc family of optimization algorithms based a gradient magnitude normalization. More precisely, the components of the gradient are split into blocks (one block per layer), and each block is normalized by its L2 norm. The concatenation of these normalized gradients are used in place of the standard gradient in AdaGrad/Adam/SGD. The authors find the resulting optimizer performs slightly better than its competitors on four tasks.\n\nI feel this paper would be much stronger focusing extensively on one or two small problems and models, providing insight into how normalization affects optimization, rather than chasing state-of-the-art numbers on a variety of datasets and models. I believe the significance and originality of this work to be lacking.\n\n## Pros ##\n\nThe paper is easy to follow. The algorithm and experiment setups are clearly explained, and the plots are easy to understand. I appreciate the variety in experimental setups. The authors provide a proof of convergence for the AdaGrad variant on convex functions.\n\n## Cons ##\n\nThe paper fails to provide new insights to the reader. It succeeds in asking a question (how do normalized gradients impact training of neural networks?), but fails to add theoretical or empirical knowledge that furthers the field. While effectively changing the geometry of the problem, no motivation (theoretical or intuitive) is given as to why this normalization scheme should be effective.\n\nFrom the empirical side, the authors compare the proposed optimizers on many datasets and models, but concerningly only using the baselines' default hyperparameters. Even ADAM, a supposedly \"hands-free\" optimizer, has been shown to vary greatly in performance when its hyperparameters are well chosen (https://arxiv.org/abs/1705.08292). This is simply unfair to the baselines, and conclusions cannot meaningfully be drawn from this alone. In addition, different tasks use different optimizers, which strikes me as odd, and no error bars are added to any plots.\n\nFrom the theoretical side, the authors show a convergence bound that is minimized when the number of blocks is one. This, however, is not what the authors use in experiments, and no reasoning about the choice of blocks == network layers is given.\n\n## Specific comments ##\n\np1: \"Gradient computation is expensive\" is not a good justification. All empirical risk minimization, convex or not, requires a full pass over the dataset. Many convex problems outside of ERM involve very expensive gradient computations.\n\np1: \"These two challenges indicate that for each iteration, stochastic gradient might be the best practical first order information we can get\". See loads of work in approximate second-order methods that show otherwise! Hessian-free Optimization, K-FAC, Learning to Learn Gradient Descent, ACKTR's use of Kronecker-factored Trust Region.\n\np2: You may want to reference Layer-Specific Adaptive Learning Rates for Deep Networks (https://arxiv.org/pdf/1510.04609.pdf), as it appears relevant to the layer-wise nature of your paper.\n\np2: \"Recently, provably correct algorithms...\" I'm fairly confident that Adam and RMSProp lack provable correctness. You may want to soften this statement.\n\np3: The expression being minimized is the sample risk, rather than the expected risk.\n\np5: The relationship between NG and NG_{UNIT} is confusing. 
I suggest keeping only the vanilla method analyzed in this paper, or that the second method be better motivated.", "We first thank the reviewer for the valuable feedback. The responses to your questions are as follows:\n\nQ: “The proposed technique is reasonable on its own, but the empirical results do not come with any measure of statistical significance.”\nA: Thank you for the suggestion! We have included the mean and variance in our experimental results in Section 4.2. Please see the revised version. We show that the normalized gradient method is indeed better than its unnormalized counterpart in many scenarios and it is not by chance. \n\nQ: “For a primarily empirical paper, EVERY state-of-the-art algorithm should be used as a point of comparison on EVERY dataset considered.”\nA: We believe this is a very harsh and unrealistic requirement and strikingly contradicts with AnonReviewer4’s suggestion. Nowadays, getting state-of-the-art performance on a dataset usually appeals to the combination of different efforts, including data preprocessing/augmentation, careful model designing and thorough parameter tuning, etc. However, it is not the main focus of this paper. Our paper aims to provide a simple alternative to train neural networks and it empirically works well on a number of tasks. In fact, in the CIFAR10/100 and ImageNet experiment, we largely adopted the parameter settings in [1], where the model championed the ImageNet 2015 challenge, except for the layer number, mini-batch size and GPU number (we don’t have that many GPUs). This should be considered a very strong baseline. Those parameters were well tuned by other researchers and we don’t see the necessity for re-tuning. Furthermore, we NEVER claim our method is a panacea and we believe none of the existing methods are either.\n\n[1] Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun, Deep Residual Learning for Image Recognition. CVPR 2016. \n\nQ: “The theoretical result in the convex setting is also not data-dependent, despite the fact that it is the normalized gradient version of AdaGrad, which does come with a data-dependent convergence guarantee.”\nA: We are not sure what “data-dependent” means in the reviewer’s question. Since the reviewer said AdaGrad has a data-dependent convergence guarantee, we compare the convergence result of AdaGrad with our Theorem 1. We guess that the reviewer probably meant that the right-hand side of Adagrad’s convergence result (the inequalities in Theorem 5 by Duchi et al(2011)) has a summation of the norms of rows of historical gradients,, i.e., $\\sum_{i=1}^d \\|g_{1:T,i}\\|_2$, which makes their bounds data-dependent. If our understanding is correct, we think our convergence guarantee is in fact data-dependent in the exactly the same way as AdaGrad. Please take a look at our inequality (3) which contains two components: the first component depends on $x_t$ and the second component depends on $\\|g_t^i\\|$. The first component is still upper bounded as in (5) but the second component can be bounded just by the first inequality in (4). Then our convergence guarantee will contain $\\sum_{i=1}^d \\|g_{1:T,i}\\|_2$ and will have the same data-dependency as AdaGrad. The reason this dependency did not appear in our Theorem 1 is because we further upper bounded the second component by 2\\sqrt{Td_i} as we showed in the second inequality in (4). 
In fact, the authors of AdaGrad also did the same thing in their Corollary 6 where the data-dependent term $\\sum_{i=1}^d \\|g_{1:T,i}\\|_2$ were also upper bounded by simpler terms. We thought the bound we reported in Theorem 1 was simpler. In the revision, we have included the data-dependent bound in Theorem 1.\n\nQ: “Any new optimization framework should either demonstrate improved (or at least matching) guarantees in some common (e.g. convex) settings or definitively outperform state-of-the-art methods on problems that are of widespread interest.”\nA: In terms of the performance of optimization under the convex setting, our result (Theorem 1) indeed matches AdaGrad in many ways. First, the optimality gaps ensured by both AgaGrad and our method convergence to zero in a rate of $1/\\sqrt{T}$. Second, according to our answer to the last question, the convergence guarantee of both AgaGrad and our method are data-dependent and contain the term $\\sum_{i=1}^d \\|g_{1:T,i}\\|_2$ in the same way. (We further upper bounded this term by a simpler quantity so this data-dependency might not be observed directly.) In addition, our method generalize AdaGrad by using block-wise adaptive subgradient. ", "Q: “Gradient computation is expensive” is not a good justification.\nA: By saying so, we want to emphasize that the full gradient computation for deep neural networks is unrealistic and we thus often use stochastic gradient. We made this point clearer in the revision. \n\nQ: On other optimization methods.\nA: We have softened our expression in the original text to avoid any confusion. Thanks for the pointing this out!\n\nQ: Additional references.\nA: Thanks for bringing this up, we will cite and discuss these two papers. In a nutshell, the updating rule in “Layer-Specific Adaptive Learning Rates for Deep Network” is different from ours in that they are essentially adding a term to the gradient rather than normalizing it, while the paper https://arxiv.org/abs/1705.08292 focuses more on the bad generalization cases of the adaptive step-size methods, which is orthogonal to our focus.\n\nQ: “I'm fairly confident that Adam and RMSProp lack provable correctness. You may want to soften this statement.”\nA: Yes we agree that there are some flaws in the proof of Adam, even for the convex case. That’s why our special case analysis can only be applied to Adagrad. We have softened this statement in the revision. \n\nQ: “The expression being minimized is the sample risk, rather than the expected risk.”\nA: In machine learning, the ultimate goal is to minimize expected risk, although in practice we can only work on the sample risk instead. Nevertheless, we think this is a minor point. In fact, the expression we wrote can present either sample risk or expected, depending on what is the distribution of \\xi in that expectation. If we consider the case where the distribution of \\xi in our minimization is simply the empirical discrete distribution corresponding to the finite sample, that expectation will just become the average of risk over samples. We prefer to use expectation instead of finite sum expression because the former is more general and our algorithms and theorem can be both applied to minimizing an expectation, no matter the corresponding distribution is continuous (expected risk) or discrete (sample risk). \n\nQ: “The relationship between NG and NG_{UNIT} is confusing.”\nA: We have clarified this in the revision and also rename the methods. 
The new method is a variant when the normalization is relaxed to not be strictly 1. We empirically find that it helps improve the generalization performance in Sec 4.2.\n", "We first thank the reviewer for the valuable feedback. The responses to your questions are as follows:\n\nQ: “I feel this paper would be much stronger focusing extensively on one or two small problems and models, providing insight into how normalization affects optimization, rather than chasing state-of-the-art numbers on a variety of datasets and models.”\nA: Thanks for the suggestion! We indeed put more results (figures and tables) and analysis in Sec 4.2, by investigating the CIFAR10 and 100 datasets. Interestingly, this point of view strikingly contradicts with Reviewer 2’s, who requires “Every state-of-the-art algorithm should be used as a point of comparison on every dataset considered.”\n\nQ: “The paper fails to provide new insights to the reader. It succeeds in asking a question (how do normalized gradients impact training of neural networks?), but fails to add theoretical or empirical knowledge that furthers the field. While effectively changing the geometry of the problem, no motivation (theoretical or intuitive) is given as to why this normalization scheme should be effective.”\nA: The intuition is that when the network is deep, the original gradient in the low layers will become very small or very large because of the multiplicative effect of the gradient of the upper layers, which is called gradient vanishing or explosion phenomenon. The layer-wise gradient normalization, which can also be interpreted as layer-wise learning rate, can counteract this negative effect automatically, maintaining the gradient magnitude as a constant, so that the information can still backprop to the bottom layers. We agree that our intuition is not strictly supported by theory, but this is also true of many of the effective approaches in deep learning, such as batch normalization, layer normalization, weight normalization and gradient clipping. Lacking theory does not prevent those method becoming prevalent.\n\nQ: “This is simply unfair to the baselines, and conclusions cannot meaningfully be drawn from this alone.”\nA: The goal of the experiments is to compare the performance between the existing algorithms and their gradient normalized counterpart. Hyperparameter tuning is orthogonal to our goal. We believe that as long as they are using the same parameter settings, the comparison is fair. In fact, in the CIFAR10/100 and ImageNet experiment, we largely adopted the parameter settings in [1], where the model championed the ImageNet 2015 challenge, except for the layer number, mini-batch size and GPU number (we don’t have that many GPUs). This should be considered a very strong baseline. Those parameters were well tuned by other researchers and we don’t see the necessity for re-tuning. That said, we actually searched over the learning rate for Adam or other parameters, please see Sec 4.2 of the revision.\n\n[1] He et.al.. Deep Residual Learning for Image Recognition. CVPR 2016. \n\nQ: “In addition, different tasks use different optimizers.”\nA: In fact, it is commonly observed that for RNNs, the adaptive step-size method like Adam performs better, while for CNNs, SGD+momentum works much better. That’s why we selected the best baseline optimizers for the specific tasks and compare with their normalized gradient counterpart based on this. We have clarified this point in the revision. 
We have also added the Adam experiment on CNN with ImageNet data, and confirmed this common observation.\n\nQ: “no error bars are added to any plots”\nA: We have added the means and variances in the tables of Section 4.2.\n\nQ: “the authors show a convergence bound that is minimized when the number of blocks is one.”\nA: According to what Theorem 1 stated, we agree that our convergence bound is minimized when the number of blocks is one. However, this is not the property of the algorithm. Instead, it is just because of our analysis. In the revision, we have derived the convergence bound in a tighter way so that the optimal number of blocks is not necessarily one. In fact, this is easy to derive. Instead of considering a constant M bounding the full gradient \\|F’\\|, we must consider a block-dependent constant M_i that upper bounds the corresponding block of gradient \\|F‘_i\\|. By simply replacing all M by M_i in the proof of Theorem 1, we obtain a convergence bound like O( [D^2\\sqrt{Bd}/\\eta+ \\eta(\\sum_i M_i^2)\\sqrt{d_i}] / sqrt{T} ). Then, consider a situation where some M_k is much larger than other M_i and some d_h is much larger than other d_i but h is different from k. For instance, we can have M_k=O(M)>>1 and d_h=O(d) but d_i=M_i=O(1) for other i. Our new convergence bound becomes [D^2\\sqrt{Bd}/\\eta+ \\eta(M^2+B+\\sqrt{d})] / sqrt{T}. After optimizing eta, we obtain [D(Bd)^{1/4}\\sqrt{M^2+B+\\sqrt{d}}] / sqrt{T}. Compared this bound for B=1, which is [DM\\sqrt{d}] / sqrt{T}, our bound can be lower, for example, when B<M^2. We have add some discussions on when the new convergence bound when $B>1$ is better than when $B=1$ in the revision. \n", "Q: “When you apply feature-dependent step sizes, you are effectively changing the direction of the gradient, so your meta learning formulation, as posed, doesn't make as much sense.”\nA: We agree that we are indeed changing the direction of the real gradient. However, in this work we do demonstrate that this modification works well. We should also point out that a number of very successful and widely used approaches, such as batch normalization, layer normalization, weight normalization, gradient clipping, do dynamically change the data, or the weight, or the direction of the gradient. We believe our technique falls into the same category as those.\n\nQ: “What reason do you have to hope for this? Why should they be complimentary? Existing optimization techniques are based on careful design and coupling of gradients or surrogate gradients, with specific learning rate schedules. Arbitrarily mixing the two doesn't seem to be theoretically well-motivated.”\nA: Again, neither our starting point nor the goal of this paper is on theory, just like most of the prevalent techniques. Our intuition is supported by the thorough experiments, not by the proof. We also point out that none of the current optimization techniques can be proved to work under the general neural network setting, without unrealistic assumptions.\n\nQ: “It would be great to provide some intuition for this”. \nA: The intuition is that when the network is deep, the original gradient in the low layers will become very small or very large because of the multiplicative effect of the gradient of the upper layers, which is called gradient vanishing or explosion phenomenon. 
The layer-wise gradient normalization can counteract this negative effect automatically, maintaining the gradient magnitude per layer as a constant, so that the information can still backprop to the bottom layers.\n\nQ: “This suggests that there is no theoretical motivation to use NGD with an adaptive step size over AdaGrad.”\nA: Yes, you are correct, our motivation is not from theory but from the practical observation. Please also see the response to the previous question.\n\nQ: “How are results that go from quasi-convex to upper semi-continuous quasi-convex an extension?”\nA: It is indeed a typo. We missed “differentiable”. We meant to say “NGD can find a \\eps-optimal solution....when the objective function is differentiable quasi-convex.” Kiwiel (Kiwiel, 2001) extended NGD for upper semi-continuous (not necessarily differentiable) quasi-convex objective functions.\n\nQ: “There should be a reference for RMSProp.”\nA: We will cite Geoff Hinton’s lecture note, as there is no formal publication on this method.\n\nQ: “Why is layer parametrization (and later on normalization) a good way idea?”\nA: We repeat the intuition here that when the network is deep, the original gradient in the low layers will become very small or very large because of the multiplicative effect of the gradient of the upper layers. The layer-wise gradient normalization can counteract this negative effect automatically, maintaining the gradient magnitude as a constant, so that the information can still backprop to the bottom layers.\n\nQ: “This seems insufficient for an empirical paper, since most problems often involve some amount of hyperparameter tuning. How sensitive is each method to the choice of hyperparameters? What about the impact of initialization?”\nA: The goal of the experiments is to compare the performance of the existing algorithms and their gradient normalized counterpart. We believe that as long as they are using the same parameter settings, the comparison is fair. Although hyperparameters tuning is orthogonal to the goal of our paper, we actually searched over the learning rate for Adam or other parameters, please see Sec 4.2 of the revision. Besides, we included the mean and variance of the performance over 5 runs for each method, each ResNet and each dataset with random initialization in Sec 4.2.\n\nQ: “NG_{unit} is never motivated.”\nA: Thanks for pointing this out! We have clarified this in the revision and also rename the method. The new method is a variant when the normalization is relaxed to not be strictly 1. We empirically find that it helps improve the generalization performance in Sec 4.2.\n\nQ: “Why are SGD and Adam used for MNIST but not on CIFAR?”\nA: Interestingly, even the Table 1 in the first submission exactly shows the SGD and Adam results on CIFAR10. We also added the result on CIFAR100 in the revision.\n\nQ: “are these parameters chosen from the training set or the test set?”\nA: They are chosen from validation set, which is clarified in the revision.\n\nQ: “Why isn't Adam tested on ImageNet?”\nA: We also included the Adam result on ImageNet in the revision. In fact, as a common wisdom, CNN is the basic model for ImageNet and SGD+momentum is usually better than Adam when using CNNs. That’s why we did not use Adam in the first version. We confirm this result in revision.\n", "We first thank the reviewer for the valuable feedback. The responses to your questions are as follows:\n\nQ: “In the experiments the authors use the same parameters as is used by Adam/AdaGrad, etc. 
Did the authors also try to fine tune the parameters of their NG versions? If so what is the benefit that they get by doing so?”\nA: We keep using the same parameters for both the normalized and original version, to make the comparisons fair. Otherwise, if we change the parameters in the normalized version, it is hard to tell whether the effect is due to the normalization or parameter tuning. \n\nQ: “It will be useful if the authors can provide some intuition about why is the learning rate chosen per block for NG?\nA: “Block” in the neural network scenario means “layer”. So our method is a layer-wise normalization approach. The intuition is that when the network is deep, the original gradient in the low layers will become very small or very large because of the multiplicative effect of the gradient of the upper layers, known as gradient vanishing or explosion phenomenon. The layer-wise gradient normalization, which can also be interpreted as layer-wise learning rate, can counteract this negative effect automatically, maintaining the gradient magnitude as a constant, so that the information (error) can still backprop to the bottom layers. \n\nQ: “Did the authors also try to choose a learning rate per weight vector rather than per block? If so, what is the behaviour that they see.”\nA: If we take all the variables of a neural network as a long vector, normalizing the gradient layer-wisely somehow has already changed the direction of this vector. And if we normalize by each weight vector, making the granularity of the normalization even finer, we are afraid the direction change will be more severe. Consider the extreme case of normalizing each dimension, which is equivalent to choose the sign of each coordinate of the gradient. We believe this would jeopardize the algorithm significantly. However, we feel it makes more sense to address the differences of the gradient magnitude between layers, rather than changing the relative values of weights within the same layer.\n\nQ: “The learning rate in Thm. 1”\nA: This learning rate is chosen to get through the proof under the convex setting. However, we should point out that in our experiments, where the objective function is no longer convex, it is unclear whether this learning rate would still provide convergence guarantee. \n" ]
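For concreteness, the block-wise (layer-wise) gradient normalization discussed in the rebuttals above can be sketched as follows: each layer's gradient is rescaled to a fixed norm before the update, so the normalization acts as a per-layer learning rate that counteracts vanishing or exploding gradients in deep networks. This is only an illustrative sketch under the simplest setting; the function and variable names are hypothetical and not taken from the paper.

```python
import numpy as np

def block_normalized_sgd_step(params, grads, lr=0.1, eps=1e-8):
    """One update with layer-wise (block-wise) gradient normalization.

    params, grads: dicts mapping layer name -> ndarray of the same shape.
    Each layer's gradient is rescaled to unit L2 norm before the step, so
    every layer receives an update of comparable magnitude regardless of
    how small or large its raw gradient is.
    """
    for name, g in grads.items():
        norm = np.linalg.norm(g)
        # direction within the layer is kept; only the per-layer magnitude is fixed
        params[name] = params[name] - lr * g / (norm + eps)
    return params
```

The same normalized per-layer gradient can be handed to Adam or momentum instead of plain SGD, which appears to be how the "NG" variants referred to in the reviews are obtained.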
[ 4, 9, 2, -1, -1, -1, -1, -1 ]
[ 5, 5, 5, -1, -1, -1, -1, -1 ]
[ "iclr_2018_ry831QWAb", "iclr_2018_ry831QWAb", "iclr_2018_ry831QWAb", "BkTXGMKlf", "HJl9PL1zM", "HJl9PL1zM", "BkTXGMKlf", "H1O8NOKeM" ]
iclr_2018_H1pri9vTZ
Deep Function Machines: Generalized Neural Networks for Topological Layer Expression
In this paper we propose a generalization of deep neural networks called deep function machines (DFMs). DFMs act on vector spaces of arbitrary (possibly infinite) dimension, and we show that a family of DFMs is invariant to the dimension of the input data; that is, the parameterization of the model does not directly hinge on the quality of the input (e.g. high-resolution images). Using this generalization we provide a new theory of universal approximation of bounded non-linear operators between function spaces. We then suggest that DFMs provide an expressive framework for designing new neural network layer types with topological considerations in mind. Finally, we introduce a novel architecture, RippLeNet, for resolution-invariant computer vision, which empirically achieves state-of-the-art invariance.
rejected-papers
The idea of extending deep nets to infinite dimensional inputs is interesting but, as the reviewers noted, the execution does not have the quality we can expect from an ICLR publication. I encourage the authors to consider the meaningful comments that were made and modify the paper accordingly.
train
[ "rkeYOm_lM", "SJ2P_-YgG", "SyjvRE9lG", "Hyy_AXnmM", "rk_s9M2mz", "ByftMXn7f" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "This paper extends the framework of neural networks for finite-dimension to the case of infinite-dimension setting, called deep function machines. This theory seems to be interesting and might have further potential in applications.", "The main idea of this paper is to replace the feedforward summation\ny = f(W*x + b)\nwhere x,y,b are vectors, W is a matrix\nby an integral\n\\y = f(\\int W \\x + \\b)\nwhere \\x,\\y,\\b are functions, and W is a kernel. A deep neural network with this integral feedforward is called a deep function machine. \n\nThe motivation is along the lines of functional PCA: if the vector x was obtained by discretization of some function \\x, then one encounters the curse of dimensionality as one obtains finer and finer discretization. The idea of functional PCA is to view \\x as a function is some appropriate Hilbert space, and expands it in some appropriate basis. This way, finer discretization does not increase the dimension of \\x (nor its approximation), but rather improves the resolution. \n\nThis paper takes this idea and applies it to deep neural networks. Unfortunately, beyond rather obvious approximation results, the paper does not get major mileage out of this idea. This approach amounts to a change of basis - and therefore the resolution invariance is not surprising. In the experiments, results of this method should be compared not against NNs trained on the data directly, but against NNs trained on dimension reduced version of the data (eg: first fixed number of PCA components). Unfortunately, this was not done. I suspect that in this case, the results would be very similar. \n\n", "This paper deals with the problem of learning nonlinear operators using deep learning. Specifically, the authors propose to extend deep neural networks to the case where hidden layers can be infinite-dimensional. They give results on the quality of the approximation using these operator networks, and show how to build neural network layers that are able to take into account topological information from data. Experiments on MNIST using the proposed deep function machines (DFM) are provided. \n\nThe paper attempts to make progress in the region between deep learning and functional data analysis (FDA). This is interesting. Unfortunately, the paper requires significant improvements, both in terms of substance and in terms of presentation. My main concerns are the following:\n\n1) One motivation of DFM is that in many applications data is a discretization of a continuous process and then can be represented by a function. FDA is the research field that formulated the ideas about the statistical data analysis of data samples consisting of continuous functions, where each function is viewed as one sample element. This paper fails to consider properly the work in its FDA context. Operator learning has been already studied in FDA. See for e.g. the problem of functional regression with functional responses. Indeed the functional model considered in the linear case is very similar to Eq. 2.5 or Eq. 3.2. Moreover, extension to nonparametric/nonlinear situations were also studied. The authors should add more information about previous work on this topic so that their results can be understood with respect to previous studies.\n\n2) The computational aspects of DFM are not clear in the paper. From a practical computational perspective, the algorithm will be implemented on a machine which processes on finite representations of data. 
The paper does not clearly provide information about how the functional nature and the infinite dimensional can be handled in practice. In FDA, generally this is achieved via basis function approximations.\n\n3) Some parts of the paper are hard to read. Sections 3 and 4 are not easy to understand. Maybe adding a section about the notation and developing more the intuition will improve the reading of the manuscript. \n\n4) The experimental section can be significantly improved. It will be interesting to compare more DFM with its discrete counterpart. Also, other FDA approaches for operator learning should be discussed and compared to the proposed approach.\n", "We thank the reviewer for their careful consideration and useful comments. We'd like to address a few points raised.\n\n1) We agree that our treatment of existing empirical techniques relating to FDA and operator learning is minimal. However, the paper is presented as such so as to highlight the importance of the solution of the given universal approximation theorems in the language of DFMs and also unify existing infinite dimensional frameworks of neurocomputation. The appropriate references to existing linear/non-linear FDA works will be added in the camera-ready version.\n\n2) Ultimately, any functional analytic approach to machine learning must have a computationally efficient discrete representation. In section 4.2 and Appendix D (10) we make substantial amount of time deriving those discrete representations. The code is currently released on Github but to preserve anonymity we will add a link after the review period has ended. We would like to highlight that again the primary purpose of the paper is in developing a new theoretical framework, DFMs, to show long standing unproven universal approximation results. The experimental and differential topological results are expositionally interesting corollaries of the structure of that language.\n\n4) We would like to note that the experimental section is dedicated directly to comparing infinite dimensional DFMs with their discrete counterparts; that is, experiment 1 and experiment 2 compare to state of the art discrete neural networks respectively.\n\n\n", "We would like to thank the reviewer for their time and useful summary.", "We'd like to first express our gratitude for the reviewers time and useful comments. We will address the reviewers comments individually.\n\n>>> The main idea of this paper is to replace the feedforward summation [...] <<<\n\nWhile we introduce operator neural networks as such a formulation, the core idea of the paper is to unify the theory of infinite dimensional neural networks under one formulation, DFMs, and show that proving universal approximation results is ubiquitously reducible to the language of DFMs.\n\n>>> Unfortunately, beyond rather obvious approximation results, the paper does not get major mileage out of this idea. <<< \n\nWe present DFMs with the explicit intention of providing a language for proving extremely difficult universal approximation results. While we agree that the statement of the results may be *simple*, as were the statements of the first universal approximation conjectures for discrete neural networks, our proofs resolve a major open-problem in the approximation theoretic literature which has gone unsolved for over 18 years now. 
(Stinchcombe, 1999), (Rossi et al., 2002), (Conan-Guez, Brieuc, 2002), etc.\n\nOur major theorems reduce these long-standing conjectures to the language of DFMs, which provides a category-theoretic framework for factorizing any universal approximation conjecture through that of the discrete DFM (Theorem 3.5, Theorem 3.6, ***Definition 8.5, Lemma 8.6***).\n\n\n### References ###\n\nMaxwell B. Stinchcombe. Neural network approximation of continuous functionals and continuous functions on compactifications. Neural Networks, 12(3):467–477, 1999.\n\nFabrice Rossi, Brieuc Conan-Guez, and François Fleuret. Theoretical properties of functional multilayer perceptrons. 2002.\n\nChristopher K. I. Williams. Computation with infinite neural networks. Neural Computation, 10(5):1203–1216, 1998.\n\nFabrice Rossi et al. Representation of functional data in neural networks. Neurocomputing, 64:183–210, 2005.\n\nBrieuc Conan-Guez and Fabrice Rossi. Approche régularisée du traitement de données fonctionnelles par un perceptron multi-couches. Actes des neuvièmes journées de la SFC, Toulouse, France, pp. 169–172, 2002.\n" ]
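The reviews above summarize the core construction as replacing the affine map y = f(Wx + b) by an integral operator acting on functions, and ask how the infinite-dimensional layer is handled computationally. One common realization, sketched here under the assumption of a simple quadrature discretization on a fixed grid (not necessarily the authors' implementation), evaluates the continuous kernel W(t, s) at sample points:

```python
import numpy as np

def operator_layer(x_vals, grid, W, b, f=np.tanh):
    """Discretized functional layer  y(t) = f( integral of W(t, s) x(s) ds + b(t) ).

    x_vals : samples of the input function x at the points in `grid` (shape [n]).
    grid   : sample locations s_1 < ... < s_n.
    W      : the continuous kernel W(t, s) evaluated on an [m, n] grid of
             output/input locations.
    b      : the bias function evaluated at the m output locations.
    The integral is approximated by a simple quadrature (Riemann sum).
    """
    ds = np.gradient(grid)           # quadrature weights from the grid spacing
    return f(W @ (x_vals * ds) + b)
```

When W(t, s) is itself parameterized by a small set of basis coefficients, refining the input grid only changes the quadrature weights rather than the number of parameters, which is the resolution-invariance property claimed in the abstract.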
[ 7, 3, 4, -1, -1, -1 ]
[ 1, 4, 3, -1, -1, -1 ]
[ "iclr_2018_H1pri9vTZ", "iclr_2018_H1pri9vTZ", "iclr_2018_H1pri9vTZ", "SyjvRE9lG", "rkeYOm_lM", "SJ2P_-YgG" ]
iclr_2018_rJma2bZCW
Three factors influencing minima in SGD
We study the statistical properties of the endpoint of stochastic gradient descent (SGD). We approximate SGD as a stochastic differential equation (SDE) and consider its Boltzmann–Gibbs equilibrium distribution under the assumption of isotropic variance in the loss gradients. Through this analysis, we find that three factors – learning rate, batch size and the variance of the loss gradients – control the trade-off between the depth and width of the minima found by SGD, with wider minima favoured by a higher ratio of learning rate to batch size. In the equilibrium distribution only the ratio of learning rate to batch size appears, implying that the distribution is invariant under a simultaneous rescaling of both by the same amount. We experimentally show how learning rate and batch size affect SGD from two perspectives: the endpoint of SGD and the dynamics that lead up to it. For the endpoint, the experiments suggest that the endpoint of SGD is similar under simultaneous rescaling of batch size and learning rate, and that a higher ratio leads to flatter minima; both findings are consistent with our theoretical analysis. We note experimentally that the dynamics also seem to be similar under the same rescaling of learning rate and batch size, which we explore by showing that one can exchange batch size and learning rate in a cyclical learning rate schedule. Next, we illustrate how noise affects memorization, showing that higher noise levels lead to better generalization. Finally, we find experimentally that the similarity under simultaneous rescaling of learning rate and batch size breaks down if the learning rate gets too large or the batch size gets too small.
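To make the abstract's central claim concrete, the SDE approximation and its equilibrium can be written schematically as below, where the learning rate η and batch size S enter only through the ratio η/S and σ² is the isotropic gradient-noise scale. This display shows only the generic form; the exact constants are those derived in the paper.

```latex
% Schematic SDE approximation of SGD and its Gibbs equilibrium under isotropic noise;
% the precise constants are those of the paper, this is only the generic form.
d\theta_t = -\nabla L(\theta_t)\,dt + \sqrt{\tfrac{\eta}{S}}\,\sigma\, dW_t,
\qquad
P_{\mathrm{eq}}(\theta) \;\propto\; \exp\!\Big(-\frac{2S}{\eta\,\sigma^{2}}\,L(\theta)\Big).
```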
rejected-papers
Dear authors, The reviewers agreed that the theoretical part lacked novelty and that the paper should focus on its experimental part, which at the moment is not strong enough to warrant publication. Regarding the theoretical part, here are the main concerns: - Even though it is used in previous works, the continuous-time approximation of stochastic gradient descent overlooks its practical behaviour, especially since a good rule of thumb is to use as large a stepsize as possible (without reaching divergence), as mentioned for instance in "The Marginal Value of Adaptive Gradient Methods in Machine Learning" by Wilson et al. - The isotropic approximation is very strong and I do not know of settings where it would hold. Since it seems central to your statements, I wonder what can be deduced from the obtained results. - I do not think the Gaussian assumption is unreasonable and I am fine with it. Though there are clearly cases where it will not be true, it will probably be OK most of the time. I encourage the authors to focus on the experimental part in a resubmission.
train
[ "ByBJy2Oef", "BkC-HgcxG", "H19fnlceG", "rk7v6dZGM", "ryzJa_ZMG", "ryUHh_bGG", "rkHxndbMM", "SkX0jdWGM", "BJOPjdZGG", "HJ4-tdWzz", "rJJ6atxzG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "public" ]
[ "The paper investigates how the learning rate and mini-batch size in SGD impacts the optima that the SGD algorithm finds.\nEmpirically, the authors argue that it was observed that larger learning rates converge to minima which are more wide,\nand that smaller learning rates more often lead to convergence to minima which are narrower, i.e. where the Hessian has large Eigenvalues. In this paper, the authors derive an analytical theory that aims at explaining this phenomenon.\n\nPoint of departure is an analytical theory proposed by Mandt et al., where SGD is analyzed in a continuous-time stochastic\nformalism. In more detail, a stochastic differential equation is derived which mimicks the behavior of SGD. The advantage of\nthis theory is that under specific assumptions, analytic stationary distributions can be derived. While Mandt et al. focused\non the vicinity of a local optima, the authors of the present paper assumed white diagonal gradient noise, which allows to\nderive an analytic, *global* stationary distribution (this is similar as in Langevin dynamics).\n\nThen, the authors focus again on individual local optima and \"integrate out\" the stationary distribution around a local optimum, using again a Gaussian assumption. As a result, the authors obtain un-normalized probabilities of getting trapped in a given local optimum. This un-normalized probability depends on the strength of the value of the loss function in the vicinity of the optimum, the gradient noise, and the width of the optima. In the end, these un-normalized probabilities are taken as\nprobabilities that the SGD algorithm will be trapped around the given optimum in finite time.\n\n\nOverall assessment:\nI find the analytical results of the paper very original and interesting. The experimental part has some weaknesses. The paper could be drastically improved when focusing on the experimental part.\n\nDetailed comments:\n\nRegarding the analytical part, I think this is all very nice and original. However, I have some comments/requests:\n\n1. Since the authors focus around Gaussian regions around the local minima, perhaps the diagonal white noise assumption could be weakened. This is again the multivariate Ornstein-Uhlenbeck setup examined in Mandt et al., and probably possesses an analytical solution for the un-normalized probabilities (even if the noise is multivariate Gaussian). Would the authors to consider generalizing the proof for the camera-ready version perhaps?\n\n2. It would be nice to sketch the proof of theorem 2 in the main paper, rather than to just refer to the appendix. In my opinion, the theorem results from a beautiful and instructive calculation that should provide the reader with some intuition.\n\n3. Would the authors comment on the underlying theoretical assumptions a bit more? In particular, the stationary distribution predicted by the Ornstein-Uhlenbeck formalism is never reached in practice. When using SGD in practice, one is in the initial mode-seeking phase. So, why is it a reasonable assumption to still use results obtained from the stationary (equilibrated) distribution which is never reached?\n\n\nRegarding the experiments: here I see a few problems. First, the writing style drops in quality. Second, figures 2 and 3 are cryptic. Why do the authors focus on two manually selected optima? In which sense is this statistically significant? How often were the experiments repeated? The figures are furthermore hard to read. 
I would recommend overhauling the entire experiments section.\n\nDetails:\n\n- Typo in Figure 2: ”with different with different”.\n- “the endpoint of SGD with a learning rate schedule η → η/a, for some a > 0, and a constant batch size S, should be the same\n as the endpoint of SGD with a constant learning rate and a batch size schedule S → aS.” This is clearly wrong as there are many local minima, and running teh algorithm twice results in different local optima. Maybe add something that this only true on average, like “the characteristics of these minima ... should be the same”.", "In this paper, the authors present an analysis of SGD within an SDE framework. The ideas and the presented results are interesting and are clearly of interest to the deep learning community. The paper is well-written overall.\n\nHowever, the paper has important problems. \n\n1) The analysis is widely based on the recent paper by Mandt et al. While being an interesting work on its own, the assumptions made in that paper are very strict and not very realistic. For instance, the assumption that the stochastic gradient noise being Gaussian is very restrictive and trying to justify it just by the usual CLT is not convincing especially when the parameter space is extremely large, the setting that is considered in the paper.\n\n2) There is a mistake in the proof Theorem 1. Even with the assumption that the gradient of sigma is bounded, eq 20 cannot be justified and the equality can only be \"approximately equal to\". The result will only hold if sigma does not depend on theta. However, letting sigma depend on theta is the only difference from Mandt et al. On the other hand, with constant sigma the result is very trivial and can be found in any text book on SDEs (showing the Gibbs distribution). Therefore, presenting it as a new result is misleading. \n\n3) Even if the sigma is taken constant and theorem 1 is corrected, I don't think theorem 2 is conclusive. Theorem 2 basically assumes that the distribution is locally a proper Gaussian (it is stated as locally convex, however it is taken as quadratic) and the result just boils down to computing some probability under a Gaussian distribution, which is still quite trivial. Apart from this assumption not being very realistic, the result does not justify the claims on \"the probability of ending in a certain minimum\" -- which is on the other hand a vague statement. First of all \"ending in\" a certain area depends on many different factors, such as the structure of the distribution, the initial point, the distance between the modes etc. Also it is not very surprising that the inverse image of a wider Gaussian density is larger than of a pointy one. This again does not justify the claims. For instance consider a GMM with two components, where the means of the individual components are close to each other, but one component having a very large variance and a smaller weight, and the other one having a lower variance and higher weight. With authors' claim, the algorithm should spend more time on the wider one, however it is evident that this will not be the case. \n\n4) There is a conceptual mistake that the authors assume that SGD will attain the exact stationary distribution even when the SDE is simulated by the fixed step-size Euler integrator. 
As soon as one uses eta>0 the algorithm will never attain the stationary distribution of the continuous-time process, but will attain a stationary distribution that is close to the ideal one (of course with several smoothness, growth assumptions). The error between the ideal distribution and the empirical distribution will be usually O(eta) depending on the assumption and therefore changing eta will result in a different distribution than the ideal one. With this in mind the stationary distributions for (eta/S) and (2eta/2S) will be clearly different. \n\n\nThe experiments are very interesting and I do not underestimate their value. However, the current analysis unfortunately does not properly explain the rather strong claims of the authors, which is supposed to be the main contribution of this paper. \n", "The authors study SGD as a stochastic differential equation and use the Fokker planck equation from statistical physics to derive the stationary distribution under standard assumptions. Under a (somewhat strong) local convexity assumption, they derive the probability of arriving at a local minimum, in terms of the batchsize, learning rate and determinant of the hessian.\n\nThe theory in section 3 is described clearly, although it is largely known. The use of the Fokker Planck equation for stationary distributions of stochastic SDEs has seen wide use in the machine learning literature over the last few years, and this paper does not add any novel insights to that. For example, the proof of Theorem 1 in Appendix C is boilerplate. Also, though it may be relatively new to the deep learning/ML community, I don't see the need to derive the F-P equation in Appendix A.\n\nTheorem 2 uses a fairly strong locally convex assumption, and uses a straightforward taylor expansion at a local minimum. It should be noted that the proof in Appendix D assumes that the covariance of the noise is constant in some interval around the minimum; I think this is again a strong assumption and should be included in the statement of Theorem 2.\n\nThere are some detailed experiments showing the effect of the learning rate and batchsize on the noise and therefore performance of SGD, but the only real insight that the authors provide is that the ratio of learning rate to batchsize controls the noise, as opposed to the that of l.r. to sqrt(batchsize). I wish this were analyzed in more detail.\n\nOverall I think the paper is borderline; the lack of real novelty makes it marginally below threshold in my view.", "First of all we would like to thank authors for reproducing our results, we are very happy to see interest in our work! We will do our best to further investigate the report soon. There are some issues that need to be clarified (e.g. x axis of memorization experiment is different than in our paper), we contacted authors of reproduction via e-mail to clarify.\n\nIn the meantime, let us clarify cyclical batch size. In our submission we plot batch size and learning rate over time for both schedules in the Appendix. \nWe also do mention in text it is just replacing any relative change in learning rate with batch size change, for instance if learning rate is increased by factor of 5 (e.g. 0.1 to 0.5), we replace it with reduction of batch size by factor of 5 (e.g. 100 to 25). Adding to your report results of CBS, especially discrete one, would be very interesting. We will clarify it further in text.", "We added many clarifying changes, including discussions, better plots, and some improvements to experiments (e.g. 
larger grid in “Breaking point” section). This increased submission size by 2 pages, but we believe it was necessary to address all reviewer’s points. We would be grateful for feedback if some clarifications are too explicit, or if we should reduce the size of submission to previous size. Easiest way of reducing size would be moving some of the enlarged and expanded figures to Appendix.\n\nChanges:\nWe revised abstract to reflect better our novelty and main contribution\nWe added paragraph in Related work on Fokker-Planck equation\nWe improved figures as suggested by reviewers, e.g. Figure 4 and Figure 6 are enlarged.\nWe renamed section to 3 from “Theoretical results” to “Insights from Fokker-Planck” and 3.2 from “Main results” to “Three factors influencing equilibrium distribution”, to reflect the novel main finding of this section.\nWe reworded significantly section 3, mostly in response to reviewer 2:\nWe added many clarifications, e.g. in opening of section 3 we say “We make the assumption of isotropic covariance (...)”, or we added whole paragraph discussing Theorem 1 in 3.2. At the end of section 3.1 we add a clarifying remark on how we differ to Mandt et al. at the end of section 3.1. and a reference to Li et al. justifying the approximation of SGD by an SDE.\nAdded discussion sections after each theorem, which talk about the assumptions and interpretations of the results. \nWe made changes to theory\nWe fixed assumption of Theorem 1 as suggested by reviewer 2 to have a constant sigma.\nWe clarified that we assume equilibrium distribution in solution of Fokker-Planck rather than just the stationary distribution.\nIn 4.1 we rerun MLP experiment on a 4-layer network with Batch Normalization that is closer to assumptions made in Theorem 1, and we moved the 20 layer network without Batch Normalization experiments to appendix. Correlations remain qualitatively similar between the two experiments.\nIn 4.2 (“eta/S determines learning dynamics of SGD”) we added clarifying paragraph discussing that theory predicts “invariance” of endpoint of SGD, while dynamics “invariance” is an additional experimental result\nIn 4.3 (“impact of SGD on memorization”) we added minor clarifications\nIn 4.4 (“Breaking point of the theory in practice”) we significantly improved experiment by running larger grid, and improving plots\nWe added Section 5 “Discussion” which in 4 paragraphs summarizes results.\nDue to large space taken by all of the above changes we moved 4.5 (“Cyclical batch and learning rate schedule”) to Appendix, and referred to it in 4.2, which also discussed cyclical batch size and learning rate schedules.\nWe also included some changes in 4.5 (now in Appendix). We changed tracking ratio of hessian and loss, to tracking hessian. We rerun larger grid search and included table comparing performance of discussed schedules (CLR, CBS and constant).", "We thank the reviewer for their interesting comments and observations and for their enthusiasm for our results.\n\nAnalytical Part \n\nResponse to point 1, whether we can generalize to non-diagonal white noise. \n\nIn short, we believe generalization beyond the isotropic case is nontrivial, and we leave for future work. To clarify, in Mandt et al. they assume globally the Ornstein-Uhlenbeck (quadratic loss) setup (i.e they only consider one minimum), whereas for Theorem 2 we assume a series of minima, and then approximate the integral using the second order Taylor series locally near each minima, but not globally. 
(For Theorem 1 there is no restriction on the loss at all). The proof of Theorem 1 only strictly holds if the gradient noise is isotropic - in the non-isotropic case, the Fokker Planck equation will be a complicated partial differential equation which doesn’t have a closed form analytic stationary solution in general. Instead one would need numerical solutions of the PDE or further simplifying assumptions for an analytic solution. The solution may also depend on the path in parameter space through which the process evolves, unless further assumptions are made.\n\nResponse to point 2, whether the proof of theorem 2 can appear in the main paper.\n\nWe have decided to keep the proof of theorem 2 in the appendix, in response to AnonReviewer3 who suggested this proof is fairly standard, and also to keep the paper easy to read on a first pass without too much mathematical detail. \n\nResponse to point 3, that the equilibrium distribution of the SDE is not reached in practice. \n\nWe agree that the stationary distribution is not precisely arrived at in practice, but it can be approached to a good approximation if enough epochs have passed. On the other hand, we are not necessarily interested in exactly reaching the equilibrium distribution, we are more interested in sampling from the equilibrium distribution, which can happen in a fewer number of epochs than it takes for the probability distribution to approach it.\n\nExperimental Part\n\nIn figures 2 and 3 we show a qualitative result, common in the literature, e.g. Fig. 3 of https://arxiv.org/pdf/1609.04836.pdf, which expresses intuitively the consistency of the theory with experiment. They are just a one-dimensional slice through parameter space and so should be treated with a pinch of salt. In the original submission there were five plots each of which shows the consistency of our experiments with our theory pictorially. To show we are not manually selecting minima that fit our claims, we have run some more interpolation experiments to validate this. In the new version we have added more seeds to show the robustness with respect to the model random initialization of the result, see Appendix F in the revised version.\n\nWe have improved the quality of the figures to make them easier to read. \n\nOn detail 1, the typo, we have fixed this in the new version.\n\nOn detail 2, that the endpoints will not be the same, we agree with the reviewer here and thank them for the suggestion to clarify the phrasing in this way and have edited accordingly to read instead that the characteristics of these minima should be the same, not the actual minima. This is similar to the fix done for AnonReviewer3 on the vagueness of the phrase “the probability of ending in a certain minimum”.", "4. In response to point 4, on the error between SGD and the SDE stationary solution. It is standard to approximate SGD with an SDE in the machine learning literature and use the stationary distribution as an approximation of the learnt distribution. We are aware that SGD will not exactly attain the SDE stationary distribution. Instead, we recognise the breakdown of the theory, and have a whole section devoted to it: In the new version this is Section 4.5 “Breaking of the theory in practice”, where we see this error for larger eta, as the reviewer states. We highlight other limitations in Appendix F. We also specifically mention the approximation holds only to first order in eta in the final paragraph of Appendix B. 
In the new version we give a more detailed experiment of the breakdown of the theory in figure 7. We hope this addresses the concern that there is a conceptual mistake - we are aware that SGD will not attain the exact stationary distribution for eta>0 and this is reflected in our paper.", "2. Response to paragraph 2, first point, that there is a mistake in Theorem 1:\n\nWe agree there is a mathematical mistake in allowing sigma to vary with theta. We address this by changing our assumption so that sigma is constant. This modification does not affect our end results as the equilibrium distribution will then be the standard Gibbs distribution. Though the Gibbs distribution appears in the SDE literature, this exact expression has not appeared before in the machine learning literature to the best of our knowledge, explicitly showing the dependence of the loss, learning rate, batch size and sigma.\n\nResponse to paragraph 2, second point, that letting sigma depend on theta is the only difference to Mandt et al.: \n\nWe agree with the reviewer that taking a constant sigma is the same as Assumption 2 of Mandt et al. However, this is not the only difference between our paper and Mandt et al.\nAs stated on the first point of our introduction to this rebuttal, we do not assume Mandt et al. Assumption 4, which is the assumption that the iterates throughout are constrained to be in a region in which the loss surface is quadratic. Instead we allow iterates to be drawn from any region of parameter space for a general loss function. For Theorem 2 we decompose the whole loss surface into different basins of attraction of different sizes. For each of these different basins of attraction we use a second order Taylor expansion to evaluate the integral for the result of Theorem 2, allowing us to define the sizes of these basins and to compare between these basins. There is no comparison of different basins in Mandt et al. and this comparison is critical for the important observations about which minima SGD ends up in. To summarize, letting sigma be constant is mathematically necessary, but this was not the only difference between us and Mandt et al., instead our key difference is that we don’t restrict to a single basin with a quadratic loss, instead we consider many basins. \n\n3. Response to paragraph 3, even if Theorem 1 is corrected, the reviewer thinks Theorem 2 is not conclusive. Let us address each point in turn:\n\nAbout the concern that Theorem 2 is quite trivial. \nWe disagree that the result is trivial - it is indeed a simple calculation, but it is not obvious a priori that it will be the determinant of the Hessian that will appear in the prefactor, nor that the ratio of learning rate to batch size will control the weight given to width (from the Hessian prefactor) over depth. \n\nAbout \"the probability of ending in a certain minimum\" is vague.\nWe agree that the concept of the size of minima SGD finishes in is indeed vague unless it is sufficiently well defined. To discuss this we propose an approximation of each minima region by the quadratic bowl at that minima, and the size of the minima by the effective posterior mass of the corresponding Gaussian distribution. This Laplace approximate mass has been used before in the context of computing the evidence in Bayesian methods: it is indeed approximate, but it is sufficiently well defined, and captures enough for us to be able to discuss the critical issue of sizes of minima regions. For example to calculate Bayes’ factors e.g. 
in Kass and Raferty https://www.stat.washington.edu/raftery/Research/PDF/kass1995.pdf or for Bayesian model comparison in Mackay https://pdfs.semanticscholar.org/e5c6/a695a4455a526ec8955dcc0fa2d6810089e9.pdf. We have revised the phrase to read instead “the probability of ending in a minimum characterized by a certain loss value and Hessian determinant”.\n\nWith regards to the dependence of the endpoint on the initial point we point out that the stationary distribution theoretically doesn’t depend on the initialization, and this will be approximately true in practice if the algorithm is run for long enough. \n\nWe would like to clarify the dependence on the distance between the modes. We point out in appendix D that we assume the modes are separated by a large enough distance that the tails of the Gaussian approximation do not contribute significantly. This means that the example the reviewer gives of a GMM with close means is not valid for our situation since we assume the modes are separated enough in the derivation in appendix D. We would be happy to promote details of this assumption to the main text for clarity. \n\nTo summarize, our claim is not that the algorithm will spend more time on the wider minima, instead our claim is that the ratio of learning rate to batch size controls the tradeoff between width and depth, so whether it spends more time at a wider minima depends on the value of this ratio. Finally, we would like to emphasize that our empirical results confirm that a higher ratio of learning rate to batch size leads to a wider region being sampled. \n\n(continued...)", "We thank the reviewer for their interest in our paper, and the detailed review. We will address each points of the review in turn, and supplement responses with experiments where possible. Before that, we wanted to stress here two crucial points about our submission: \n\nWe would like to restate our main claim. This is that learning rate over batch size, along with noise in gradients, controls the stationary distribution from which SGD “samples” a solution. This claim (especially the importance of the ratio of learning rate to batch size) has not been made before. We discuss in the theory section, and in the experiment section how these findings are reflected in practice. Please refer to the rebuttal of AnonReviewer1 for more details about this point.\n\nSecond, we believe our paper is different from Mandt et al.. Our goal is comparing the different relative probabilities of ending in different “minima regions”, characterized by a loss value and hessian determinant. In particular we differ in Assumption 4 of Mandt et al. where in their whole analysis they restrict attention only to a region within a quadratic bowl, whereas we allow for a general loss function with multiple minima regions. In contrast, the goal of Mandt et al. is to show that under certain assumptions, SGD can be seen as sampling from a quadratic posterior (see for instance Fig.4 in Mandt et al), whereas we view SGD as sampling a solution from a stationary distribution that is not just a quadratic. For theorem 2, which talks about the probability of ending in a minima with certain characteristics, we use a Laplace approximation to evaluate an integral, which uses the second order Taylor expansion of the loss locally around a given minima - but this is not the same as the assumption in Mandt et al. which is that the loss is globally approximated by a quadratic. 
We have made changes to the paper at the end of Section 3.1 and in Appendix D to emphasize this.\n\nWe have clarified the aforementioned points in the revised paper.\n\nDetailed responses:\n1. Response to point 1, that assuming the batch gradient converges via the CLT to a Gaussian distribution. \n\nIt is a common assumption that the stochastic gradient noise can be modelled as Gaussian, for instance in the paper by Li et al. ‘15, https://arxiv.org/pdf/1511.06251.pdf the stochastic differential equation that we use has been proven to approximate SGD in the weak sense. More precisely, the use of the central limit theorem is appropriate in this case: the minibatch samples are randomized draws from a fixed distribution with finite variance: the distribution over the randomly ordered full dataset. Typical minibatch sizes are large by any CLT standard. The data exchangeability ensure that there is a shared variance, C, for all data points, and hence by the CLT the average over the minibatch will have variance C/S for a batch size S. We have produced a plot for the MLP model on the FMNIST dataset used in section 4.1 of the submitted paper which shows samples of gradients in randomly chosen directions for different batch sizes, which appear to follow a Gaussian distribution already for a typical batch size of 64. Here is a sample from 10 random directions at initialization https://anonfile.com/52v9m2d0b6/grid.pdf, and at best validation point https://anonfile.com/6cvcmbd4b8/grid_after_training.pdf.\n\n", "We thank the reviewer for the insightful comments and interesting questions they pose. We think the paper will be stronger through the clarifications that ensue. \n\n1)\n\nResponse to the comments about the Fokker-Planck equation and the novelty of our results. \nWe would like to clarify where our novelty arises. In our paper the main new result is that the learning rate over batch size ratio controls the tradeoff between the width (i.e. sharpness) and depth of the region in which SGD ends. In Theorem 1 and the surrounding text, we clarify the relationship between the batch size and learning rate and the effect that this has on the resulting equilibrium distribution. Though theorem 1 is a standard Gibbs-Boltzmann result, it is valuable to express it as a function of batch size and learning rate: this has not previously been emphasised in the literature, and it is this relationship that provides the insight into how SGD performs. To the best of our knowledge we have not seen the exact statement of Theorem 1 in the literature before. We do agree that a Gibbs distribution and its derivation has appeared before in the machine learning literature. For example, in the online setting we are aware of the results of equation (24) of Heskes and Kappen (http://www.sciencedirect.com/science/article/pii/S0924650908700382), but the relation here does not give the temperature of the Gibbs distribution in terms of the learning rate, batch size and gradient covariance. So we believe the result is novel in the machine learning context for minibatch stochastic gradient descent. We have adjusted the presentation of Theorem 1 to reflect this. We have also renamed the section from ‘Theoretical Results’ to ‘Insights from Fokker-Planck’ to reflect more clearly that our novelty is the insights gained rather than the derivation of new mathematical results.\n\n2)\n\nResponse to the comment on Theorem 2 that we assume the gradient covariance is constant in some region. 
We agree that the assumption should be included in the statement of Theorem 2, and further would like to revise Theorem 2 to be such that the covariance of the noise is constant and proportional to the identity everywhere. This stronger assumption corrects a mathematical mistake pointed out by AnonReviewer3 - nonetheless, this stronger assumption is sufficient for our requirements, to obtain an analytic solution for the stationary distribution. In the revised version this assumption appears in the statement of theorem 2.\n\n3)\n\nResponse to the comment “the only real insight that the authors provide is that the ratio of learning rate to batchsize controls the noise, as opposed to the that of l.r. to sqrt(batchsize)”. \n\nWe disagree that the ratio of learning rate to batch size controlling noise being the only real insight. We agree this is a core contribution of our work. However, more than interpreting this ratio as just the noise, we also investigate how this ratio affects the geometry sampled by SGD, the learning dynamics, the memorization and generalization. \n\nWe verify in the paper that when keeping the ratio of learning rate to batch size the same, we terminate in a region with similar properties, hessian, loss and performance. We did not focus on other scaling strategies such as square root as they did not appear in our theoretical analysis and investigations of them have appeared previously, e.g. in Hoffer et al., as referenced in Section 4.2. We would be happy to include further experiments on square root scaling if the reviewer suggests. ", "\n*Introduction*\nWe sought to reproduce the results of this paper. The three sections investigated are composed of sections 4.1, 4.3 and 4.5. The results of studying the effect of controllable noise, the effects of randomized labels, and the effect of cyclic learning rate on generalization were investigated. \n\n*Reproducibility Methodology and Results*\nControllable Noise\nFor the first experiment, a 20 layer multilayer perceptron (MLP) with ReLU activation functions trained on the FashionMNIST dataset was used. The structure of the MLP is as described in the paper \"Adding Gradient Noise Improves Learning for Very Deep Networks\". The number of epochs used was not specified, so 15 epochs was used in our network since it is the number of epochs used in other sections of the paper. Since the results were meant to hold for all datasets, the MLP was also tested on the MNIST dataset. For both datasets, the network was trained on all data, excluding the 10,000 test images. The hypothesis regarding the correlation between the controllable noise and test accuracy appeared to hold on both datasets.\nThe second reproduction used VGG-11 architecture on the CIFAR-10 dataset. Since no other information was provided about the network, the paper \"Very Deep Convolutional Networks for Large-Scale Image Recognition\" was used to build VGG-11. Due to the multiple max-pool layers, the architecture was not compatible with CIFAR-10. As a result this reproduction was dropped. \n\nMemorization\nA 2 layer multilayer perceptron was used to test the memorization phenomenon. The network ran on 300 epochs with ReLU activation functions, 256 units per layer, no momentum, and using the digit MNIST dataset. The network did not learn the data when using the original learning rates, so smaller learning rates were used. 
To save time, two extreme ratios using the batch sizes (50, 800) and learning rates (0.005, 0.01) were utilized.\nA varying number of epochs was required to learn different sized subsets of the data. In the case of the entire dataset, 1000 epochs were still not enough to learn the data. For a subset of 1000 training and validation data points, it was found that a higher ratio of learning rate to batch size generalizes better. However, with the addition of more data points, the results achieved worsened.\nNetworks with a smaller ratio learned more quickly than those with a larger ratio. When testing on 7000 data points (using a learning rate of 0.05 with 700 and 10 epochs) the smaller ratio finished training, while the greater ratio did not. This reinforces the idea that a smaller ratio memorizes more.\n\nCyclic Learning Rate\nIn section 4.5 of the original paper, the effect of CLR on test accuracy was evaluated on the CIFAR-10 dataset using the VGG-11 architecture. However, as mentioned before, they are not compatible. FashionMNIST was used as an alternative dataset and the previously mentioned 20-layer MLP with ReLU activation was used.\nSome hyperparameters of CLR are missing, such as the upper and lower bounds for the learning rate. After experimentation, an optimal result was achieved with an upper bound of 0.002 and a lower bound of 0.001. To compare the effect of CLR on generalization, constant learning rates of 0.001, 0.002 and 0.0015 with a mini-batch size of 8 were tested, as they represent a relatively similar structure to a CLR with step size of 4. From our results, CLR improves generalization as stated in the original paper; furthermore, CLR achieved the highest accuracy among all the accuracies that we obtained using constant learning rates.\n\n*Conclusion*\nWe were able to reproduce the experiment regarding the controllable noise. \nIn the case of memorization, only in the cases with small subsets could the results be loosely reproduced. While the results were not able to be recreated in full, in some cases the results acquired supported the authors' claims about the ratio: the smaller ratio learned faster than the bigger ratio, suggesting that a smaller ratio leads to faster memorization. But overall, it cannot be said that the results achieved here were able to properly replicate those of the original. In the case of cyclic learning rate, although we were not able to reproduce the exact experiment, the conclusions reached show that CLR does improve generalization on FashionMNIST when compared with a constant learning rate. \n\nTo see full references and more details, view the full report by S. Huang, K. Kutschera, and S. Perry-Fagant: http://cs.mcgill.ca/~kkutsc/reproduce.pdf." ]
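The authors' comment earlier in this record defines the cyclical batch-size (CBS) schedule by replacing every relative change of the learning rate in a cyclical learning-rate (CLR) schedule with the inverse change of the batch size (e.g. multiplying η by 5 becomes dividing S by 5), so that η/S stays the same at every step. A minimal sketch of that conversion, with purely hypothetical baseline values:

```python
def cyclical_lr(step, base_lr=0.001, max_lr=0.005, half_cycle=4):
    """Triangular cyclical learning rate (batch size held constant)."""
    frac = (step % half_cycle) / half_cycle
    rising = (step // half_cycle) % 2 == 0
    scale = 1 + (max_lr / base_lr - 1) * (frac if rising else 1 - frac)
    return base_lr * scale

def cyclical_batch_size(step, base_lr=0.001, base_bs=128, **kw):
    """Equivalent schedule with a constant learning rate: every relative
    increase of the learning rate becomes the same relative decrease of the
    batch size, keeping the ratio lr / batch_size identical at each step."""
    ratio = cyclical_lr(step, base_lr=base_lr, **kw) / base_lr
    return base_lr, max(1, int(round(base_bs / ratio)))
```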
[ 6, 3, 5, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_rJma2bZCW", "iclr_2018_rJma2bZCW", "iclr_2018_rJma2bZCW", "rJJ6atxzG", "iclr_2018_rJma2bZCW", "ByBJy2Oef", "BkC-HgcxG", "BkC-HgcxG", "BkC-HgcxG", "H19fnlceG", "iclr_2018_rJma2bZCW" ]
iclr_2018_rk3mjYRp-
Diffusing Policies : Towards Wasserstein Policy Gradient Flows
Policy gradient methods often achieve better performance when the change in policy is limited to a small Kullback-Leibler divergence. We derive policy gradients where the change in policy is limited to a small Wasserstein distance (or trust region). This is done in the discrete and continuous multi-armed bandit settings with entropy regularisation. We show that in the small-steps limit with respect to the Wasserstein distance W2, policy dynamics are governed by the heat equation, following the Jordan-Kinderlehrer-Otto result. This means that policies undergo diffusion and advection, concentrating near actions with high reward. This helps elucidate the nature of convergence in the probability matching setup, and provides justification for empirical practices such as Gaussian policy priors and additive gradient noise.
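Schematically, the result the abstract refers to: iterating Wasserstein-2 proximal (trust-region) steps on an entropy-regularised expected-reward functional and taking the small-step limit gives, by the Jordan–Kinderlehrer–Otto argument, an advection–diffusion equation for the policy π over actions — probability mass is advected towards high-reward actions while the entropy term diffuses it. The display below shows only the generic form; the exact functional and constants are those of the paper.

```latex
% JKO step on F[\pi] = -\int \pi(a)\, r(a)\, da + \beta \int \pi(a)\log\pi(a)\, da,
% and its small-step (gradient-flow) limit -- schematic form only.
\pi_{k+1} = \arg\min_{\pi}\ \Big\{ \tfrac{1}{2\tau}\, W_2^2(\pi, \pi_k) + F[\pi] \Big\},
\qquad
\partial_t \pi = -\,\mathrm{div}\big(\pi\,\nabla_a r\big) + \beta\,\Delta_a \pi .
```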
rejected-papers
Dear authors, The reviewers all agreed that this was an interesting topic but that the novelty, either theoretical or empirical, was lacking. Thus, the paper cannot be accepted to ICLR in its current state, but I encourage the authors to make the recommended updates and to push their idea further.
train
[ "Sywhphuez", "Syy9DXtef", "rk0iJ0FgM", "rJmn-DTmG", "rJdx9LamM", "B1m4OIamM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "The paper ‘Diffusing policies: Towards Wasserstein policy gradient flows’ explores \nthe connections between reinforcement learning and the theory of quadratic optimal transport (i.e.\nusing the Wasserstein_2 as a regularizer of an iterative problem that converges toward\nan optimal policy). Following a classical result from Jordan-Kinderlehrer-Otto, they show that \nthe policy dynamics are governed by the heat equation, that translates in an advection-diffusion \nscheme. This allows to draw insights on the convergence of empirical practices in the field.\n\nThe paper is clear and well-written, and provides a comprehensive survey of known results in the \nfield of Optimal Transport. The insights on why empirical strategies such as additive gradient noise\nare very interesting and helps in understanding why they work in practical settings. That being said, \nmost of the results presented in the paper are already known (e.g. from the book of Samtambrogio or the work \nof G. Peyré on entropic Wasserstein gradient flows) and it is not exactly clear what are the original\ncontributions of the paper. The fact that the objective is to learn policies\nhas little to no impact on the derivations of calculus. It clearly suggests that the entropy \nregularized Wasserstein_2 distance should be used in numerical experiments but this point is not \nsupported by experimental results. Their direct applications is rapidly ruled out by highlighting the \ncomputational complexity of solving such gradient flows but in the light of recent papers (see \nthe work of Genevay https://arxiv.org/abs/1706.00292 or another paper submitted to ICLR on large scale optimal transport \nhttps://openreview.net/forum?id=B1zlp1bRW) numerical applications should be tractable. For these reasons \nI feel that the paper would clearly be more interesting for the practitioners (and maybe to some extent \nfor the audience of ICLR) if numerical applications of the presented theory were discussed or sketched \nin classical reinforcement learning settings. \n\nMinor comments:\n - in Equation (10) why is there a ‘d’ in front of the coupling \\gamma ? \n - in Section 4.5, please provide references for why numerical estimators of gradient of Wasserstein distances\nare biased. \n", "In this paper the authors studied policy gradient with change of policies limited by a trust region of Wasserstein distance in the multi-armed bandit setting. They show that in the small steps limit, the policy dynamics are governed by the heat equation (Fokker-Planck equation). This theoretical result helps us understand both the convergence property and the probability matching property in policy gradient using concepts in diffusion and advection from the heat equation. To the best of my knowledge, this line of research was dated back to the paper by Jordan et al in 1998, where they showed that the continuous control policy transport follows the Fokker-Planck equation. In general I found this line of research very interesting as it connects the convergence of proximal policy optimization to optimal transport, and I appreciate seeing recent developments on this line of work. \n\nIn terms of theoretical contributions, I see that this paper contains some novel ideas in connecting gradient flow with Wasserstein distance regularization to the Fokker-Planck equation. Furthermore its interpretation on the Brownian diffusion processes justifies the link between entropy-regularization and noisy gradients (with isotropic Gaussian noise regularization for exploration). 
I also think this paper is well-written and mathematically sound. While I understand the knowledge of this paper based on standard knowledge in PDE of diffusion processes and Ito calculus, I am not experienced enough in this field to judge whether these contributions are significant enough for a standalone contribution, as the problem setting is limited to multi-armed bandits.\n\nMy major critic to this paper is its practical value. Besides the proposed Sinkhorn-Knopp based algorithm in the Appendix that finds the optimal policy as fixed point of (44), I am unsure how these results lead to more effective policy gradient algorithms (with lower variance in gradient estimators, or with quasi-monotonic performance improvement etc.). There are also no experiments in this paper (for example to compare the standard policy gradient algorithm with the one that solves the Fokker-Planck equation) to demonstrate the effectiveness of the theoretical findings.", "The main object of the paper is the (entropy regularized) policy updates. Policy iterations are viewed as a gradient flow in the small timestep limit. Using this, (and following Jordan et al. (1998)) the desired PDE (Equation 21) is obtained. The rest of the paper discusses the implications of Equation 21 including but not limited to what happens when the time derivative of the policy is zero, and the link to noisy gradients.\n\nEven though the topic is interesting and would be of interest to the community, the paper mainly presents known results and provides an interpretation from the point of view of policy dynamics. I fail to see the significance nor the novelty in this work (esp. in light of Jordan et al. (1998) and Peyre (2015)).\n\nThat said, I believe that exposing such connections will prove to be useful, and I encourage the authors to push the area forward. In particular, it would be useful to see demonstrations of the idea, and experimental justifications even in the form of references would be a welcome addition to the literature.", "We thank the reviewer for their interest and for their comments on clarity and style.\n\nWe do agree the paper would benefit from practical results ; we feel there is value from a theoretical standpoint in exposing the connections with proximal mappings and gradient flow PDEs to the RL community, as we hope the general method of equating proximal regularizer, gradient flow PDE, and related stochastic process will become more widespread.\n\nWe are also thankful for your referencing of https://arxiv.org/abs/1706.00292 and https://openreview.net/forum?id=B1zlp1bRW, both of which we were unaware of as of time of writing this paper, obviously. We are indeed hopeful to remediate the lack of empirical results due to both tractability of large-scale optimal transport, and of compatibility of function approximation methods with Fokker-Planck diffusion. We will endeavour to include insights from these papers in further work. \n\nFinally, the d_\\gamma in equation (10) is a notation artifact made to link with the d_\\gamma in equation (9), but it probably is cleaner to correct and omit it. Regarding biased sample gradients of the Wasserstein distance, we do provide our article's fifth reference - a key recent paper that has highlighted this issue is Bellemare et al.'s https://arxiv.org/abs/1705.10743 ; we will clarify that we are referring to sample gradients bias here.", "Thank you very much for your insights and comments, as well as encouraging words on soundness and writing style. 
We are in agreement that the paper would benefit both from a theoretical standpoint if we could extend the results to the n-step returns setting, and from a practical perspective if we could an exhibit a numerically tractable algorithm using the Wasserstein policy iteration. While theoretical difficulties have arisen in combining neural-network based function approximation with the Fokker-Planck PDE, we do share this reviewer's concern and urgency on that point, and are currently undergoing work on this in a tabular setting.", "Indeed the calculations of sections 3 are found in the major work of Jordan et al. (1998) ; however, it is to our knowledge the first time that the entropy-regularized policy gradient functional is examined in a Wasserstein trust region context (which explains why no references were given for empirical work) in the reinforcement learning context. We do respectfully agree with the reviewer that adding empirical results is the most urgent line of further work. \n\nWe do state clearly that 'Our contribution largely consists in highlighting the connection between the functional of reinforcement learning and these mathematical methods inspired by statistical thermodynamics, in particular\nthe Jordan-Kinderlehrer-Otto result.' in the discussion. However, and as was stated by another reviewer ('Furthermore its interpretation on the Brownian diffusion processes justifies the link between entropy-regularization and noisy gradients (with isotropic Gaussian noise regularization for exploration)', we believe that the SDE interpretation is new and gives theoretical and intuitive grounding to such articles as https://arxiv.org/abs/1706.10295 and https://arxiv.org/pdf/1707.06887.pdf. Similarly the diffusive nature of convergence to the energy-based policies of Sabes and Jordan was not previously known to us; and we hope the method we have used opens up several new possibilities of continuous relaxations of trust-region RL settings via SDEs and PDEs." ]
[ 4, 5, 4, -1, -1, -1 ]
[ 3, 3, 4, -1, -1, -1 ]
[ "iclr_2018_rk3mjYRp-", "iclr_2018_rk3mjYRp-", "iclr_2018_rk3mjYRp-", "Sywhphuez", "Syy9DXtef", "rk0iJ0FgM" ]
iclr_2018_HyxjwgbRZ
Convergence rate of sign stochastic gradient descent for non-convex functions
The sign stochastic gradient descent method (signSGD) utilizes only the sign of the stochastic gradient in its updates. Since signSGD carries out one-bit quantization of the gradients, it is extremely practical for distributed optimization where gradients need to be aggregated from different processors. For the first time, we establish convergence rates for signSGD on general non-convex functions under transparent conditions. We show that the rate of signSGD to reach first-order critical points matches that of SGD in terms of number of stochastic gradient calls, up to roughly a linear factor in the dimension. We carry out simple experiments to explore the behaviour of sign gradient descent (without the stochasticity) close to saddle points and show that it often helps completely avoid them without using either stochasticity or curvature information.
rejected-papers
Dear authors, After carefully reading the reviews and the rebuttal, and going through the paper, I regret to inform you that this paper does not meet the requirements for publication at ICLR. While the variance analysis is definitely of interest, the reality of the algorithm does not match the claims. The theoretical rate is worse than that of SGD, but this could be an artefact of the analysis. Sadly, the experimental setup is lacking in several ways: - It is not yet clear whether escaping saddle points is really an issue in deep learning, as the loss function is still poorly understood. - This analysis is done in the noiseless setting despite your argument being based around the variance of the gradients. - You report the test error on CIFAR-10. While interesting and required for an ML paper, you introduce an optimization algorithm and so the quantity that matters most is the speed at which you achieve a given training accuracy. Also, your table lists the value of the test accuracy rather than the speed of increase. Thus, you test the generalization ability of your algorithm while making claims about its optimization performance.
train
[ "rJ6zUEaEf", "SytBqV64f", "Sy6g0wDxz", "S1CO_KVez", "rkMJQKYxz", "rkZ_o074M", "ryqhm0QEM", "S1-hX_XVG", "rySnzNa7G", "BkF5Z467M", "ByUdq7pmf", "S1NYkVpQG", "HJJB3OFmM", "rkzrZEXMz", "HyZFydp-G", "HJngWo3-M", "H1cWJi2Wf", "r18PT52-z" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author" ]
[ "In what sense is the result far worse than Alistarh et al.?\n\nWe have now validated empirically that for resnet-20 on cifar-10, the squared gradient 1-norm dominates the squared gradient 2-norm by a factor O(d). Also the stochastic gradient variance is O(d).\n\nThe closest thing to our result in Alistarh et al. is Theorem 3.5, setting s=1 for quantisation levels of -1, +1, 0. Note that B in their theorem is O(d). Therefore the right hand side of their bound is of order d^1.5, whereas ours is of order d.\n\n[Note that in their notation, d=n. Note also that there is a typo in their Theorem 3.5, it should depend on f(x) - f* and not sqrt(f(x) - f*)]", "Dear Reviewer,\n\nFor your interest and for the sake of posterity, we have run experiments to test our assertions about gradient statistics for Resnet-20 architecture, Cifar-10 dataset. We find that our assertions do hold up.\n\nIn particular, we find that\n(i) squared 1-norm of gradient dominates the squared 2-norm by a factor of order d throughout training\n(ii) the stochastic gradient variance is also of order d throughout training", "UPDATED REVIEW:\n\nI have checked all the reviews, also checked the most recent version.\nI like the new experiments, but I am not impressed much with them to increase my score. The assumption about the variance is fixing my concern, but as you have pointed out, it is a bit more tricky :) I would really suggest you work on the paper a bit more and re-submit it.\n\n--------------------------------------------------------------------\nIn this paper, authors provided a convergence analysis of Sign SGD algorithm for non-covex case.\nThe crucial assumption for the proof was Assumption 3, otherwise, the proof technique is following a standard path in non-convex optimization. \n\nIn general, the paper is written nicely, easy to follow.\n\n==============================================\n\"The major issue\":\nWhy Assumption 3 can be problematic in practice is given below:\nLet us assume just a convex case and assume we have just 2 kids of function in 2D: f_1(x) = 0.5 x_1^2 and f_2(x) = 0.5 x_2^2.\nThen define the function f(x) = E [ f_i(x) ]. where $i =1$ with prob 0.5 and $i=2$ with probability 0.5. \nWe have that g(x) = 0.5 [ x_1, x_2 ]^T.\nLet us choose $i=1$ and choose $x = [a,a]^T$, where $a$ is some parameter.\n\nThen (4) says, that there has to exist a $\\sigma$ such that\nP [ | \\bar g_i(x) - g_i(x) | > t ] \\leq 2 exp( - t^2 / 2\\sigma^2). forall \"x\".\n\nplugging our function inside it should be true that\n\nP [ | [ B ] - 0.5 a | > t ] \\leq 2 exp( - t^2 / 2\\sigma^2). forall \"x\".\nwhere B is a random variable which has value \"a\" with probability 0.5 and value \"0\" with probability 0.5.\n\nIf we choose $t = 0.1a$ then we have that it has to be true that\n\n1 = P [ | [ B ] - 0.5 a | > 0.1a ] \\leq 2 exp( - 0.01 a^2 / 2\\sigma^2) ----> 0 as $a \\to \\infty$.\n\nHence, even in this simple example, one can show that this assumption is violated unless $\\sigma = \\infty$.\n\nOne way to ho improve this is to put more assumption + maybe put some projection into a compact set?\n==============================================\n\nHence, I think the theory should be improved.\n\nIn terms of experiments, I like the discussion about escaping saddle points, it is indeed a good discussion. 
However, it would be nicer to have more numerical experiments.\nOne thing I am also struggling with is the \"advantage\" of using signSGD: one saves on communication (instead of sending 4*8 bits per dimension, one just sends 1 bit), however, one needs \"d\" times more iterations, hence, the theory shows that it is much worse than SGD (see (11) ).\n\n\n\n\n", "The paper presents the convergence rate of a quantized SGD, with biased quantization - simply taking the sign of each element of the gradient.\n\nThe stated Theorem 1 is incorrect. Even if the stated result was correct, it presents a much worse rate for a weaker notion of convergence.\n\nMajor flaws:\n1. As far as I can see, Theorem 1 should depend on the 4th root of N_K; the last (omitted) step of the proof is done incorrectly. This makes it much worse than presented.\n2. Even if this was correct, the main point is that this is \"only\" d times worse - see eq (11). That is an enormous difference, particularly in settings where such gradient compression can be relevant. Also, it is a lot worse than just d times:\n3. Again in eq (11), you compare different notions of convergence - E[||g||_1]^2 vs. E[||g||_2^2]. In particular, the one for signSGD is the weaker notion - the squared L1 norm can be d times bigger again. If this is not the case for some reason, a more detailed explanation is needed.\n\nOther than that, the paper contains several attempts at intuitive explanation, which I don't find correct. The inclusion of Assumption 3 would in particular require better justification.\n\nExperiments are also inconclusive, as the plots show convergence to significantly worse accuracy than what the models converged to in the original contributions.", "Dear Authors,\nAfter reading the revised version I still believe that the assumption about the gradients + their variances to be distributed equivalently among all direction is very non-realistic, also for the case of deep learning applications.\n\nI think that the direction you are taking is very interesting, yet the theoretical work is still too preliminary and I believe that further investigation should be made in order to make a more complete manuscript.\n\nThe additional experiments are nice. I therefore raised my score by a bit.\n\n\n$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$\n The paper explores SignGD --- an algorithm that uses the sign of the gradients instead of actual gradients for training deep models. The authors provide some guarantees regarding the convergence of SignGD to local minima in the stochastic optimization setting, and later compare SignSGD to GD in two deep learning tasks.\n\nExploring signSGD is an important and interesting line of research, and this paper provides some preliminary results in this direction.\nHowever, in my view, this work is too preliminary and not ready for publication. This is because the authors do not illustrate any clear benefits of signSGD over SGD either in theory or in practice. I elaborate on this below:\n\n-The theory part shows that under some conditions, signGD finds a local minimum.
Yet, as the authors themselves \nmention, the dependence on the dimension is much worse compared to SGD.\nMoreover, the authors do not mention that if the noise variance does not scale with the dimension (as is often the case), then the convergence of SGD will not depend on the dimension, while it seems that the convergence of signGD will still depend on the dimension.\n\n-The experiments are nice as a preliminary investigation, but not enough in order to illustrate the benefits of signSGD over SGD. In order to do so, the authors should make a more extensive experimental study.\n\n", "Thanks, the experiments are indeed an improvement, I have improved my score, but still think this is insufficient. In particular, the result is far worse than for instance Alistarh et al., I recommend reshaping this work as rather experimental evaluation in the future.", "Along the lines of contrasting with other ICLR submissions, have a look at this one too, which seems to work against some of your claims below.\nhttps://openreview.net/forum?id=ryQu7f-RZ\n", "Dear Authors,\n\nAfter reading the revised version I still believe that the assumption about the gradients + their variances to be distributed equivalently among all direction is very non-realistic, also for the case of deep learning applications.\n\nI think that the direction you are taking is very interesting, yet the theoretical work is still too preliminary and I believe that further investigation should be made in order to make a more complete manuscript.\n\nThe additional experiments are nice. I therefore raised my score by a bit.\n", "Dear Reviewer,\n\nWe have updated our draft:\n\n1) we change assumption 3 for a simpler variance bound\n2) we include a more extensive experimental study\n3) we clarify that for problems with gradient distributed roughly uniform across dimensions, signSGD acquires the same dimension dependence as SGD (section 5)\n\nThank you for your feedback :)", "Dear Reviewer,\n\nWe have updated our draft with:\n1) a more extensive experimental study (Section 7)\n2) a simpler assumption on the stochastic gradient noise model (Assumption 3)\n3) a simple condition under which dimension dependence of our bound matches SGD (Section 5)\n\nThanks for your feedback throughout this process :)", "Dear Reviewers and Area Chair,\n\nThere is a relevant parallel work submitted to ICLR called \"Dissecting Adam: The Sign, Magnitude and Variance of Stochastic Gradients\" (https://openreview.net/forum?id=S1EwLkW0W)\n-- we became aware of this work only after submission\n-- the reviewers of \"Dissecting Adam\" raised the lack of non-convex theory as an issue with their analysis of signSGD. Our paper addresses this point.\n\nWe uploaded a new version of our paper with main changes as follows:\n-- changed the stochastic gradient noise model from sub-Gaussian to bounded variance (assumption 3)\n-- replaced the CIFAR-10 experiments with more robust ones---a large sweep over hyperparameter space (section 7)\n-- clarified that when gradients are uniformly distributed across dimensions, the signSGD bound acquires same dimension dependence as SGD bound (section 5)\n\nThanks!", "Thanks for looking over our work again.\n\nAgreed about the experiments. We made poor hyperparameter choices. To rectify this, we ran a large grid search over learning rate, momentum and weight decay, and have put these results in the new draft (Section 7 & Figure 2). 
The results properly reproduce the baselines.\n\nTo defend our theoretical result as a basis for contribution, we note that there is a huge swell of interest in understanding the theoretical properties of Adam. We claim that the right place to start is understanding the success and failure modes of signSGD, since Adam is closely related but more complicated.\n\nIn the new draft we clarify that for problems with gradients roughly uniformly distributed across dimensions, the dimension dependence of our bound matches SGD.", "My initial review was perhaps too superficial, I apologize, but the overall feeling holds.\n\nTheoretical result is significantly weaker than other existing alternatives, and thus cannot form basis for contribution.\n\nIt is impossible to draw any conclusions from experiments - on MNIST, you report to converge to ~98.2% accuracy with SGD. The only thing it shows is that you are doing something wrong. The same for CIFAR - you report ~82% accuracy with ResNet18, but original paper shows ~91% with ResNet20. I don't see how can this gap be explained.", "Thank you for the quick reply, and for the reference :)\n\nWe want to point out that:\n\n1. our bound will also benefit from dimension independent variance in the gradients\n\n2. our bound is on the L1 norm of the gradient. For problems where the gradient is typically uniform in magnitude across dimensions, then the square L1 norm is roughly d times larger than square L2 norm. Therefore our bound acquires the same dimension dependence as the SGD bound in this setting.\n\n3. the reference you give is very interesting, but it is not clear how relevant it is for deep networks. In particular, deep networks can suffer from problems like exploding gradients. A priori, it seems that exploding gradient type phenomena should at least lead to gradient variances that depend on network depth.", "Dear Authors,\n\nScenarios with dimension independent variance often arise in text classification.\nWhere each word in a dictionary appears with probability p_i, and p_i is a heavy tailed distribution (e.g. geometric distribution)\nIn such scenarios, it can be shown that the total variance is dimension independent.\nFor a detailed description of this setup you can look in McMahan and Streeter 2010, see Section 1.2 https://arxiv.org/pdf/1002.4908.pdf.\n\n\n\n\n", "Thanks for reviewing our paper---we really appreciate the feedback! We're very interested in your comment about the dimension dependence of the noise variance---would you be able to point us to an example where it does not depend on dimension?\n\nWe view the contribution of our work as twofold. First at the empirical level, we show that signSGD (a method that 1-bit quantises gradients) has empirical convergence properties in deep learning tasks that rival SGD. Therefore we have shown that in practice the method is immensely useful for distributed optimisation, since it converges fast AND has cheap gradient communication across machines. Our method is much simpler than other quantised gradient schemes that take pains to ensure the quantisation scheme is unbiased. We show that in practice unbiasedness is not necessary. Indeed we have now run more rigorous experiments to demonstrate this, and we will update the draft shortly.\n\nSecond on the theoretical level, we put signSGD on the same theoretical footing as SGD, for non-convex functions. Until now there was no non-convex theory of this method. Our work is the first step. 
We clearly state that signSGD has worse dimension dependence than SGD, but this holds for all non-convex functions. Our assumptions are typical for non-convex theory papers. The surprising observation is that in theory the method is worse, but in practice for neural networks it performs the same, therefore we suggest that there may be special structure in neural network error landscapes, which is not captured by the typical assumptions of non-convex theory work. We are working on constructing a lower bound to check the alternative hypothesis that our bound is just not tight.", "Thanks for the review! We really appreciate it, and the example you give is great. It boils down to a construction of a finite sum problem where the stochastic gradient variance diverges when x tends to infinity. \n\nSince submitting, we have modified the proof to swap Assumption 3 for an assumption of bounded variance. Though bounded variance is the standard assumption in the SGD literature, it still fails under your example. Indeed the problem can be fixed by projecting to a compact set as you say, but we prefer to keep the assumption of bounded variance since it makes our work directly comparable with the existing literature.\n\nIn practice signSGD is immensely useful, since it converges fast for deep nets, and also uses quantised gradients. We agree that there is a gap between our theory which uses standard assumptions and applies to all non-convex functions, and practice where we test on deep neural networks. Drawing attention to this gap may be one of the main contributions of our paper---we imply that if non-convex theorists want to have more impact on deep learning practice, we need to adopt assumptions that better capture the geometry of deep neural net objective functions. (Another possibility is just that our bound is not tight, and we are working on constructing a lower bound to check this.)\n\nWe will update the paper shortly with the *new* assumption 3 of bounded variance, and more rigorous experiments. Thank you for the suggestions :)", "Thanks for the feedback, we really appreciate it. We think the \"flaws\" you mention are actually resulting from some confusion which we will try to clarify here and in the paper.\n\nFirst of all, the final step of the proof---left implicit---is to square the bound. This gives N^(-1/2) and not N^(-1/4). We will make this explicit to clear up the confusion.\n\nNext, the L_1 norm is indeed larger than L_2 norm. This makes our result stronger! Take the case where L_1^2 = d * L_2^2. Then substitute this into our bound and divide by d on both sides. This improves the dimension dependence of our bound to match SGD. (The intuition here is that when the gradient vector has components uniform in magnitude, then the sign operation preserves direction, and signSGD gets the same dimension dependence as SGD).\n\nWe state clearly throughout that there is a gap between our theory which applies to all non-convex functions, and deep network optimisation in practice. One of our contributions is to point out this discrepancy. In particular we suggest that the worse dimension dependence of our bound may not be visible in deep net training because neural network error landscapes have special structure. Non-convex theorists might make use of this observation to design algorithms better suited to neural nets.\n\nWe have now replaced assumption 3 (sub-gaussianity) with a new assumption of bounded variance, which is the typical assumption in the SGD literature. 
We have also run more rigorous experiments where the baselines behave as they do in the original contributions. We will update the draft shortly.\n\nThanks again for your feedback." ]
[ -1, -1, 4, 4, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, 4, 4, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "rkZ_o074M", "S1-hX_XVG", "iclr_2018_HyxjwgbRZ", "iclr_2018_HyxjwgbRZ", "iclr_2018_HyxjwgbRZ", "S1NYkVpQG", "ByUdq7pmf", "BkF5Z467M", "Sy6g0wDxz", "HyZFydp-G", "iclr_2018_HyxjwgbRZ", "HJJB3OFmM", "r18PT52-z", "HyZFydp-G", "HJngWo3-M", "rkMJQKYxz", "Sy6g0wDxz", "S1CO_KVez" ]
iclr_2018_B1uvH_gC-
Parametric Manifold Learning Via Sparse Multidimensional Scaling
We propose a metric-learning framework for computing distance-preserving maps that generate low-dimensional embeddings for a certain class of manifolds. We employ Siamese networks to solve the problem of least squares multidimensional scaling for generating mappings that preserve geodesic distances on the manifold. In contrast to previous parametric manifold learning methods we show a substantial reduction in training effort enabled by the computation of geodesic distances in a farthest point sampling strategy. Additionally, the use of a network to model the distance-preserving map reduces the complexity of the multidimensional scaling problem and leads to an improved non-local generalization of the manifold compared to analogous non-parametric counterparts. We demonstrate our claims on point-cloud data and on image manifolds and show a numerical analysis of our technique to facilitate a greater understanding of the representational power of neural networks in modeling manifold data.
rejected-papers
Dear authors, Thank you for your submission to ICLR. Sadly, the reviewers were convinced neither by the novelty of your approach nor by its experimental results. Thus, your paper cannot be accepted to ICLR.
train
[ "Bku1giNxf", "rkTKyhFxG", "H1iRKhYxf", "B1-O-VImM", "SkTaJVLQM", "SJlng48Qz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "The paper describes a manifold learning method that adapts the old ideas of multidimensional scaling, with geodesic distances in particular, to neural networks. The goal is to switch from a non-parametric to a parametric method and hence to have a straightforward out-of-sample extension.\n\nThe paper has several major shortcomings:\n* Any paper dealing with MDS and geodesic distances should test the proposed method on the Swiss roll, which has been the most emblematic benchmark since the Isomap paper in 2000. Not showing the Swiss roll would possibly let the reader think that the method does not perform well on that example. In particular, DR is one of the last fields where deep learning cannot outperform older methods like t-SNE. Please add the Swiss roll example.\n* Distance preservation appears more and more like a dated DR paradigm. Simple example from 3D to 2D are easily handled but beyond the curse of dimensionality makes things more complicated, in particular due to norm computation. Computation accuracy of the geodesic distances in high-dimensional spaces can be poor. This could be discussed and some experiments on very HD data should be reported.\n* Some key historical references are overlooked, like the SAMMANN. There is also an over-emphasis on spectral methods, with the necessity to compute large matrices and to factorize them, probably owing to the popularity of spectral DR metods a decade ago. Other methods might be computationally less expensive, like those relying on space-partitioning trees and fast multipole methods (subquadratic complexity). Finally, auto-encoders could be mentioned as well; they have the advantage of providing the parametric inverse of the mapping too.\n* As a tool for unsupervised learning or exploratory data visualization, DR can hardly benefit from a parametric approach. The motivation in the end of page 3 seems to be computational only.\n* Section 3 should be further detailed (step 2 in particular).\n* The experiments are rather limited, with only a few artifcial data sets and hardly any quantitative assessment except for some monitoring of the stress. The running times are not in favor of the proposed method. The data sets sizes are, however, quite limited, with N<10000 for point cloud data and N<2000 for the image manifold.\n* The conclusion sounds a bit vague and pompous ('by allowing a limited infusion of axiomatic computation...'). What is the take-home message of the paper?", "The authors argue that the spectral dimensionality reduction techniques are too slow, due to the complexity of computing the eigenvalue decomposition, and that they are not suitable for out-of-sample extension. They also note the limitation of neural networks, which require huge amounts of data to properly learn the data structure. The authors therefore propose to first sub-sample the data and afterwards to learn an MDS-like cost function directly with a neural network, resulting in a parametric framework.\n\nThe paper should be checked for grammatical errors, such as e.g. consistent use of (no) hyphen in low-dimensional (or low dimensional).\n\nThe abbreviations should be written out on the first use, e.g. MLP, MDS, LLE, etc.\n\nIn the introduction the authors claim that the complexity of parametric techniques does not depend on the number of data points, or that moving to parametric techniques would reduce memory and computational complexities. This is in general not true. 
Even if the number of parameters is small, learning them might require complex computations on the whole data set. On the other hand, even if the number of parameters is equal to the number of data points, the computations could be trivial, thus resulting in a complexity of O(N).\n\nIn section 2.1, the authors claim \"Spectral techniques are non-parametric in nature\"; this is wrong again. E.g. PCA can be formulated as MDS (thus spectral), but can be seen as a parametric mapping which can be used to project new words.\n\nIn section 2.2, it says \"observation that the double centering...\". Can you provide a citation for this?\n\nIn section 3, the authors propose they technique, which should be faster and require less data than the previous methods, but to support their claim, they do not perform an analysis of computational complexity. It is not quite clear from the text what the resulting complexity would be. With N as number of data points and M as number of landmarks, from the description on page 4 it seems the complexity would be O(N + M^2), but the steps 1 and 2 on page 5 suggest it would be O(N^2 + M^2). Unfortunately, it is also not clear what the complexity of previous techniques, e.g DrLim, is.\n\nFigure 3, contrary to text, does not provide a visualisation to the sampling mechanism.\n\nIn the experiments section, can you provide a citation for ADAM and explain how the parameters were selected? Also, it is not meaningful to measure the quality of a visualisation via the MDS fit. There are more useful approaches to this task, such as the quality framework [*].\n\nIn figure 4a, x-axis should be \"number of landmarks\".\n\nIt is not clear why the equation 6 holds. Citation?\nIt is also not clear how exactly the equation 7 is evaluated. It says \"By varying the number of layers and the number of nodes...\", but the nodes and layer are not a part of the equation.\n\nThe notation for equation 8 is not explained.\n\nFigure 6a shows visualisations by different techniques and is evaluated \"by looking at it\". Again, use [*].\n\n[*] Lee, John Aldo ; Verleysen, Michel. Scale-independent quality criteria for dimensionality reduction. In: Pattern Recognition Letters, Vol. 31, no. 14, p. 2248-2257 (2010). doi:10.1016/j.patrec.2010.04.013.\n", "The key contribution of the paper is a new method for nonlinear dimensionality reduction. \n\nThe proposed method is (more or less) a modification of the DrLIM manifold learning algorithm (Hadsell, Chopra, LeCun 2006) with a slightly different loss function that is inspired by multidimensional scaling. While DrLIM only preserves local geometry, the modified loss function presents the opportunity to preserve both local and global geometry. The rest of the paper is devoted to an empirical validation of the proposed method on small-scale synthetic data (the familiar Swiss roll, as well as a couple of synthetic image datasets). \n\nThe paper revisits mostly familiar ideas. The importance of preserving both local and global information in manifold learning is well known, so unclear what the main conceptual novelty is. This reviewer does not believe that modifying the loss function of a well established previous method that is over 10 years old (DrLIM) constitutes a significant enough contribution.\n\nMoreover, in this reviewer's experience, the major challenge is to obtain proper estimates of the geodesic distances between far-away points on the manifold, and such an estimation is simply too difficult for any reasonable dataset encountered in practice. 
However, the authors do not address this, and instead simply use the Isomap approach for approximating geodesics by graph distances, which opens up a completely different set of challenges (how to construct the graph, how to deal with \"holes\" in the manifold, how to avoid short circuiting in the all-pairs shortest path computations etc etc). \n\nFinally, the experimental results are somewhat uninspiring. It seems that the proposed method does roughly as well as Landmark Isomap (with slightly better generalization properties) but is slower by a factor of 1000x. \n\nThe horizon articulation data, as well as the pose articulation data, are both far too synthetic to draw any practical conclusions. \n", "-Any paper dealing with MDS and geodesic distances should test the proposed method on the Swiss roll, which has been the most emblematic benchmark since the Isomap paper in 2000. Not showing the Swiss roll would possibly let the reader think that the method does not perform well on that example. In particular, DR is one of the last fields where deep learning cannot outperform older methods like t-SNE. Please add the Swiss roll example.\n\nOur framework works for swiss roll dataset as well. However, since the S-Curve is a similar dataset, we do not see merit to include it. \n\n-Distance preservation appears more and more like a dated DR paradigm. Simple example from 3D to 2D are easily handled but beyond the curse of dimensionality makes things more complicated, in particular due to norm computation. Computation accuracy of the geodesic distances in high-dimensional spaces can be poor. This could be discussed and some experiments on very HD data should be reported.\n\nWe agree with this assessment. However, the main message of our paper was the parameterization of maps that preserve these geodesic distances. We highlight that this leads to improved local and non-local generalization abilities and allows us to analyze neural networks using tools from numerical analysis. We agree that estimation of geodesic distances is indeed a problem when dimensionality is very high, however in this paper our message is: if they are available, our parametric framework provides for an interesting performance analysis along with showing benefit in terms of computation and performance. \n\n-Some key historical references are overlooked, like the SAMMANN. There is also an over-emphasis on spectral methods, with the necessity to compute large matrices and to factorize them, probably owing to the popularity of spectral DR metods a decade ago. Other methods might be computationally less expensive, like those relying on space-partitioning trees and fast multipole methods (subquadratic complexity). Finally, auto-encoders could be mentioned as well; they have the advantage of providing the parametric inverse of the mapping too.\n\nThank you for this input. We will add the references. \n\n-As a tool for unsupervised learning or exploratory data visualization, DR can hardly benefit from a parametric approach. The motivation in the end of page 3 seems to be computational only. \n\nWe do not completely agree. Apart from showing a scheme with considerably reduced training effort, As Figure 7 demonstrates, parametric approaches show improved local and non-local generalization abilities. \n\n-* Section 3 should be further detailed (step 2 in particular).\n\nThank you for the input. 
We will elaborate.\n\n-The experiments are rather limited, with only a few artifcial data sets and hardly any quantitative assessment except for some monitoring of the stress. The running times are not in favor of the proposed method. The data sets sizes are, however, quite limited, with N<10000 for point cloud data and N<2000 for the image manifold.\n\nWe are currently exploring more richer and larger datasets. However, in the interest of studying network behavior under strict metric constraints we found it appropriate to test on the articulation dataset whose geometry is well understood. \n\n-The conclusion sounds a bit vague and pompous ('by allowing a limited infusion of axiomatic computation...'). What is the take-home message of the paper? \n\nWe will rephrase this statement. What we wish to highlight in this paper is that it is useful to reformulate classical algorithms like multidimensional scaling using a parametric approach with neural networks. Why? Because techniques like geodesic sampling methods can be leveraged in order to (1.) Evaluate network architectures and characterize them using numerical constructs like order of accuracy and (2.) Obtain improved local and non-local generalization abilities in comparison to previous kernel extrapolation techniques minimizing the same objective. ", "We thank the reviewer on his feedback. \n\nThe main message of our paper is to show a merge between classical geometric algorithms and parametric learning methodologies. Although it is true that both Multidimensional Scaling and DrLim are indeed well established, we felt it is important to connect the two and show the advantages in adopting a parametric approach to a classical algorithm like multidimensional scaling. To this aid we demonstrate two experiments: (1.) the use of geodesic sampling methods which allow one to obtain a numerical assessment of the network architecture and show a substantive reduction in training effort as compared to DrLim and (2.) Providing a qualitative and quantitative comparison with the analogous nonparametric out-of-sample techniques. \n\nWe believe that there is no case to be made for a “best” manifold learning algorithm that works on all types of data. Every dataset will have a corresponding algorithm whos primary design principles will best suit it and therefore we do not claim we have a method which works universally. What we have said in this paper is this: In the class of methods that preserve geodesic distances, we find substantive merit to adopt a parametric approach by using a neural network. Why? because the network unambiguously shows better local and non-local generalization abilities on manifold datasets whose geometry is very clearly understood and also the fact that we can gauge how good the network performs by estimating it's order of accuracy. We believe that the novelty of our work lies in the analysis rather than the solution itself (which as rightly argued by the reviewer is well known). Therefore our choice of using a synthetic dataset like the image articulation manifold was to aid a more clear analysis of manifold learning performance. For the class of algorithms we have claimed to better, this performance is best encapsulated by the Stress function.", "We thank the reviewer on his feedback. 
\n\nIssues on complexity:\n\n- In the introduction the authors claim that the complexity of parametric techniques does not depend on the number of data points, or that moving to parametric techniques would reduce memory and computational complexities. This is in general not true. Even if the number of parameters is small, learning them might require complex computations on the whole data set. On the other hand, even if the number of parameters is equal to the number of data points, the computations could be trivial, thus resulting in a complexity of O(N).\n\n- In section 3, the authors propose they technique, which should be faster and require less data than the previous methods, but to support their claim, they do not perform an analysis of computational complexity. It is not quite clear from the text what the resulting complexity would be. With N as number of data points and M as number of landmarks, from the description on page 4 it seems the complexity would be O(N + M^2), but the steps 1 and 2 on page 5 suggest it would be O(N^2 + M^2). Unfortunately, it is also not clear what the complexity of previous techniques, e.g DrLim, is.\n\nWhat we wanted to highlight was that training a network to preserve geodesic distances shows computational benefits as compared to performing large-scale eigendecompositions having the same principle. However such an analysis is not available for the other network based methods: DrLim and Parametric t-SNE and possibly merits a separate paper, dedicated to this line of investigation.\n\n-It is not meaningful to measure the quality of a visualisation via the MDS fit. There are more useful approaches to this task, such as the quality framework [*].\n\nThank you for this input. However, since we show comparison only for a specific set of algorithms (MDS) we found it very straightforward most appropriate to compare with the Stress function which measures how well manifold distances have been preserved for all points, especially when we train only on a subset of points.\n\n-It is not clear why the equation 6 holds. Citation?\nIt is also not clear how exactly the equation 7 is evaluated. It says \"By varying the number of layers and the number of nodes...\", but the nodes and layer are not a part of the equation.\n\nThis is a standard analysis performed for finite difference schemes used for solving differential equations. For citation see: LeVeque, Randall J. \"Finite difference methods for differential equations.\" Draft version for use in AMath 585.6 (1998). Basically numerical schemes estimate some function (for e.g. the solution to a differential equation) over a given number of sampled points on its domain. Therefore, the order of accuracy of the scheme is estimated by using Equation 7 after plotting the approximation error of the scheme as a function of the grid spacing (or indirectly the number of sampled points) and extracting the slope of this line. \n\nWe employ this same philosophy in our problem with a slight modification. We employ the global stress of Equation 2 as our error function (since a non-zero stress corresponds to a suboptimal solution as far as distance preserving algorithms are concerned). A specific network (with a given number of hidden layers and nodes per layer) corresponds to a numerical scheme and hence determines a unique error profile (like Fig 4a) and hence can be graded with the order of accuracy it demonstrates in unfurling a manifold like the S-Curve. 
By varying the components of the network (number of layers and hidden nodes per layer) , we obtain different error profiles (the term ‘E’ in equations 6 & 7) and hence different order of accuracies corresponding to each choice of the parameters. \n\n-Figure 6a shows visualisations by different techniques and is evaluated \"by looking at it\". Again, use [*].\n\nSince we know the geometry of the manifold (the articulation data is perfectly isometric as proved in: Donoho, David L., and Carrie Grimes. \"Image manifolds which are isometric to Euclidean space.\" Journal of mathematical imaging and vision 23.1 (2005): 5-24.), a visual validation of Figure 6 clearly shows which algorithms have the best metric preservation. \n" ]
[ 5, 4, 3, -1, -1, -1 ]
[ 5, 4, 4, -1, -1, -1 ]
[ "iclr_2018_B1uvH_gC-", "iclr_2018_B1uvH_gC-", "iclr_2018_B1uvH_gC-", "Bku1giNxf", "H1iRKhYxf", "rkTKyhFxG" ]
iclr_2018_Sk0pHeZAW
Sparse Regularized Deep Neural Networks For Efficient Embedded Learning
Deep learning is becoming more widespread in its application due to its power in solving complex classification problems. However, deep learning models often require large memory and energy consumption, which may prevent them from being deployed effectively on embedded platforms, limiting their applications. This work addresses the problem by proposing methods {\em Weight Reduction Quantisation} for compressing the memory footprint of the models, including reducing the number of weights and the number of bits to store each weight. Beside, applying with sparsity-inducing regularization, our work focuses on speeding up stochastic variance reduced gradients (SVRG) optimization on non-convex problem. Our method that mini-batch SVRG with ℓ1 regularization on non-convex problem has faster and smoother convergence rates than SGD by using adaptive learning rates. Experimental evaluation of our approach uses MNIST and CIFAR-10 datasets on LeNet-300-100 and LeNet-5 models, showing our approach can reduce the memory requirements both in the convolutional and fully connected layers by up to 60× without affecting their test accuracy.
rejected-papers
Dear authors, I agree with the reviewers that the paper tries to do several things at once and that the results are not that convincing. Overall, this work is mostly incremental, which would be fine if there were no issues in the execution. Thus, I regret to inform you that this paper will not be accepted to ICLR.
train
[ "SyPaSBDxz", "Hku4bLqgM", "SyRmCWAxf", "HklPfzTmG", "rk7ogG6Xz", "BkwVefp7z" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "Summary: \nPaper proposes the compression method Delicate-SVRG-cumulative-L1 (combining minibatch SVRG with cumulative L1 regularization) which can significantly reduce the number of weights without affecting the test accuracy. Paper provides numerical experiments for MNIST and CIRAR10 on LeNet-300-100 and LeNet-5. \n\nComments: \nUp to my knowledge, Han et. al (2016) is not the leading result. There are (at least) two more results which are better than Han et. al. (2016) and also better than your results for LeNet-300-100 and LeNet-5 (MNIST), which were already published at ICML 2017 and NIPS 2016: \nhttp://papers.nips.cc/paper/6165-dynamic-network-surgery-for-efficient-dnns.pdf\nhttp://proceedings.mlr.press/v70/molchanov17a/molchanov17a.pdf\n\nThere is no theory supporting the proposed method (which is the combination of some existing methods). Therefore, you should provide more experiments to show the efficiency. MNIST and CIFAR10 on LeNet-300-100 and LeNet-5 are quite standard that people have already shown. \n\nMoreover, there is no guarantee for sparsity by using L1 regularization on nonconvex problems. \n\nMinor comments: \nPage 3, section 2, first paragraph: typo in the last sentence: “dose” -> “does” \nSame typo above for page 5, the sentence right before (2) Bias-based pruning\n", "The authors present an l-1 regularized SVRG based training algorithm that is able to force many weights of the network to be 0, hence leading to good compression of the model. The motivation for l-1 regularization is clear as it promotes sparse models, which lead to lower storage overheads during inference. The use of SVRG is motivated by the fact that it can, in some cases, provide faster convergence than SGD.\n\nUnfortunately, the authors do not compare with some key literature. For example there has been several techniques that use sparsity, and group sparsity [1,2,3], that lead to the same conclusion as the paper here: models can be significantly sparsified while not affecting the test accuracy of the trained model.\n\nThen, the novelty of the technique presented is also unclear, as essentially the algorithm is simply SVRG with l1 regularization and then some quantization. The experimental evaluation does not strongly support the thesis that the presented algorithm is much better than SGD with l1 regularization. In the presented experiments, the gap between the performance of SGD and SVRG is small (especially in terms of test error), and overall the savings in terms of the number of weights is similar to Deep compression. Hence, it is unclear how the use of SVRG over SGD improves things. Eg in figure 2 the differences in top-1 error of SGD and SVRG, for the same number of weights is very similar (it’s unclear also why Fig 2a uses top-1 and Fig 2b uses top-5 error). I also want to note that all experiments were run on LeNet, and not on state of the art models (eg ResNets).\n\nFinally, the paper is riddled with typos. 
I attach below some of the ones I found in pages 1 and 2\n\nOverall, although the topic is very interesting, the contribution of this paper is limited, and it is unclear how it compares with other similar techniques that use group sparsity regularization, and whether SVRG offers any significant advantages over l1-SGD.\n\ntypos:\n“ This work addresses the problem by proposing methods Weight Reduction Quantisation”\n-> This work addresses the problem by proposing a Weight Reduction Quantisation method\n\n“Beside, applying with sparsity-inducing regularization”\n-> Beside, applying sparsity-inducing regularization\n\n“Our method that minibatch SVRG with l-1 regularization on non-convex problem”\n-> Our minibatch SVRG with l-1 regularization method on non-convex problem\n\n“As well as providing,l1 regularization is a powerful compression techniques to penalize some weights to be zero”\n-> “l1 regularization is a powerful compression technique that forces some weights to be zero”\n\n The problem 1 can\n-> The problem in Eq.(1) can\n\n“it inefficiently encourages weight”\n-> “it inefficiently encourages weights”\n\n————\n\n[1] Learning Structured Sparsity in Deep Neural Networks\nhttp://papers.nips.cc/paper/6504-learning-structured-sparsity-in-deep-neural-networks.pdf\n\n[2] Fast ConvNets Using Group-wise Brain Damage\nhttps://arxiv.org/pdf/1506.02515.pdf\n\n[3] Sparse Convolutional Neural Networks\nhttps://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Liu_Sparse_Convolutional_Neural_2015_CVPR_paper.pdf\n\n\n", "It is very hard to follow this work, it feels like it tries to get several messages across while none of them properly. The work further contains number of unclear or incorrect claims, meaningless comparison with existing work, and unbelievable results (\"0.737% error rate\" on CIFAR-10).\n\nIn introduction, first, the paper seems to be about L1-regularization, with few motivating remarks valid only for convex problems, then about novel optimization method, and suddenly main contribution is reducing memory requirements. Further, part on \"Cumulative l1 regularization\" need to be better explained if, as it seems, plays important role in what you do. In discussion about SVRG, I don't understand how claims about convergence and batch size make sense, please provide reference, and how is it important for what you do later. When you say \"Hence, a promising approach is to use...\" I don't understand how it either follows from discussion above, nor what is the problem that you address.\nIn Main Contributions, 2.1 - \"we analyse non-convex SVRG\" - I don't see any kind of analysis in the paper.\n\nSec 3. you use IFO of Agarwal and Bottou which is known not to include this kind of algorithm - see large red box above abstract in the last version of the cited paper. Even then it is not clear what you try to say in the section, and whether any of it is new.\n\nSec 3.1. What is the notion of \"larger dataset\"? You regard CIFAR-10 as larger than MNIST.\n\nSec 4. After 4 pages of discussion on optimization algorithms, you write (very ambiguous) 4 lines about quantization, and compare against work not related to optimization at all. No explanation of what is presented in the table nor notation used. It requires lot of guessing to see what you try to do.\nIf I guessed correctly, you propose optimization method used together with particular objective function to train a model that is sparse in its final trained form, and then reduce numerical precision used to represent the model. 
And compare that to Han et al.\n1. If this is what you try to do, it is never clearly stated it up to this point, and much of the preceding text is irrelevant and it is sufficient to just refer to existing work... I now see you have a similar statement in Discussion, but if this is what you try to do and has to be explained at the beginning.\n2. It does not make any sense to compare against Han et al (precisely against the numbers presented in their paper), as you are compressing something else. If applied to your trained model, I believe it would achieve significantly better result.\n\nI did not properly look at the experiments, as it is not clear what you do/propose in first place, and you seems to report 0.737% error rate on CIFAR-10, and in the appendix, plots for CIFAR-10 show convergence to ~3% test error with LeNet-5.", "1: Up to my knowledge, Han et. al (2016) is not the leading result. There are (at least) two more results which are better than Han et. al. (2016) and also better than your results for LeNet-300-100 and LeNet-5 (MNIST), which were already published at ICML 2017 and NIPS 2016.\n[1] http://papers.nips.cc/paper/6165-dynamic-network-surgery-for-efficient-dnns.pdf\n[2] http://proceedings.mlr.press/v70/molchanov17a/molchanov17a.pdf\n\nOur method can achieve lower test error than [1] and [2] in both LeNet-300-100 and LeNet-5 model. If we keep the same error with [1] and [2], the compression rate of our method is shown below in two tables, which showed that our method is competitive with other methods. \n\nModel Params.% Method[1] Params.%(Ours) Test error\nLeNet-5 0.9% 0.34% 0.91%\nLeNet-300-100 1.8% 0.78% 2.28%\nModel Params.% Method[2] Params.%(Ours) Test error\nLeNet-5 0.36% 2% 0.75%\nLeNet-300-100 1.4% 0.97% 1.92%\n\n\n\n", "[1] and [2] achieved test errors on MNIST dataset with a LeNet network of 1% and 1.71% respectively and these are higher than our method. In [1], the remaining weights were about 2.625K. If we keep the same test error of 1%, our method can reduce this to about 0.5K as shown in Figure 4b. [2] do not provide the number of weights after compression by L1 regularization in the experiment on MNIST dataset in LeNet model. [3] do not provide the experiment on MNIST dataset. Hence, we cannot directly compare with their methods. So far our experiments use two datasets and two different models (LeNet-300-100 and LeNet-5). We aim to show the performance of our method on dense-based models and convolutional-based models. In our future work, we will do more experiments on different datasets and models (e.g. CIFAR-100 and ImageNet datasets, and AlexNets , VGG and ResNets models. )\n", "Thank you for your reading. I'll reply your several questions as below: \n1: There is a typo in here. The 0.737% error rate refers to the MNIST dataset using the LeNet-5 model. \n2: The main concern of our work is to reduce the memory requirements of the neural network. L1 regularization is one compression technique that is efficient in reducing the number of parameters whilst maintaining accuracy. SVRG is better than SGD at efficiently finding the solution in strongly convex problems. However, using SVRG with L1 regularization (SVRG-C-L1) is not efficient when applied in non-convex problems such as neural networks. As a result, our work aims to improve this situation. We have modified SVRG-C-L1 by using adaptive learning rates, with the results showing that our method is better suited in non-convex problem. 
In our main contribution, we analyze and provide the condition when SVRG has faster convergence rate than SGD in section 3 “mini-batch non-convex SVRG” and sub section 3.1 “Mini-batch Non-convex SVRG on Sparse Representation” using training loss as a way to measure the convergence rates. (https://papers.nips.cc/paper/4937-accelerating-stochastic-gradient-descent-using-predictive-variance-reduction.pdf\n\n3: Here, IFO is one type of complexity proposed by Agarwal and Bottou (2015). http://proceedings.mlr.press/v37/agarwal15.pdf \nIn this section, we followed the work from Reddi et.at 2016 that compared the IFO complexity of different algorithms (such as SGD and SVRG). We determined that SVRG has better performance of optimization than SGD (in other words, SVRG has faster speed of convergence than SGD) in non-convex problems, but this depends on the number of training samples. In our modified method, we experimented with two datasets and two models and showed that our method has the fastest speed of convergence than SVRG and SGD in figure 4. \n\n3.1: CIFAR-10 has 163MB and MNIST has about 3MB. MNIST images are smaller (1,28,28) than the CIFAR-10 (3,224,224). \n\n4: Table I explains the details in section 5.1 and the notation is explained in the table caption. D is our method that reduces the number of weights and Q is weight quantization that reduces the bit precision for storing each weight. D+Q represents both steps of weight reduction and quantization.\n\n4.1 and 4.2: Memory reduction is our main objective. So we first use the same model and datasets to compare the compression rate of our method with the methods of others. Secondly, we compared our results with other related L1 regularization compression techniques that use different optimization methods (SGD and SVRG), and show our method has faster convergence rates than other optimizations on different size of datasets. \n" ]
[ 4, 4, 2, -1, -1, -1 ]
[ 4, 5, 3, -1, -1, -1 ]
[ "iclr_2018_Sk0pHeZAW", "iclr_2018_Sk0pHeZAW", "iclr_2018_Sk0pHeZAW", "SyPaSBDxz", "Hku4bLqgM", "SyRmCWAxf" ]
iclr_2018_r1ISxGZRb
Generation and Consolidation of Recollections for Efficient Deep Lifelong Learning
Deep lifelong learning systems need to efficiently manage resources to scale to large numbers of experiences and non-stationary goals. In this paper, we explore the relationship between lossy compression and the resource constrained lifelong learning problem of function transferability. We demonstrate that lossy episodic experience storage can enable efficient function transferability between different architectures and algorithms at a fraction of the storage cost of lossless storage. This is achieved by introducing a generative knowledge distillation strategy that does not store any full training examples. As an important extension of this idea, we show that lossy recollections stabilize deep networks much better than lossless sampling in resource constrained settings of lifelong learning while avoiding catastrophic forgetting. For this setting, we propose a novel dual purpose recollection buffer used to both stabilize the recollection generator itself and an accompanying reasoning model.
rejected-papers
The reviewers were uniformly unimpressed with the contributions of this paper. The method is somewhat derivative and the paper is quite long and lacks clarity. Moreover, the tactic of storing autoencoder variables rather than full samples is clearly an improvement, but it still does not allow the method to scale to a truly lifelong learning setting.
val
[ "ryfA9SYez", "S1iEoBnlf", "B1GkSWIWM", "ryAT4O6QM", "rJ9c-u67z", "HyMjedamG", "ByV-gu6mG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "The paper proposes an architecture for efficient deep lifelong learning. The key idea is to use a recollection generator (autoencoder) to remember the previously processed data in a compact representation. Then, when training a reasoning model, recollections generated from the recollection generator are used with real-world examples as input data. Using the recollection, it can avoid forgetting previous data. In the experiments, it has been shown that the proposed approach is efficient for transferring knowledge with small data compared to the random sampling approach.\n\nIt is an interesting idea to remember previous examples using the compact representation from the autoencoder and use it for transfer learning. However, I think the paper would be improved if the following points are clarified.\n\n1. It seems that reconstructed data from the autoencoder does not contain target values. It is not clear to me how the reasoning model can use the reconstructed data (recollections) for supervised learning tasks. \n\n2. It seems that the proposed framework can be better presented as a method for data compression for deep learning. Ideally, for lifelong learning, the reasoning model should not forget previously learned knowledge embedded in its weights. \nHowever, under the current architecture, it seems that the reasoning model does not have such mechanisms.\n\n3. For lifelong learning, it would be interesting to test if the same reasoning model can deal with an increasing number of tasks from different datasets using the recollection mechanisms.\n\n \n\n\n\n", "This paper addresses the lifelong learning setting under resource constraints, i.e. how to efficiently manage the storage and how to generalise well with a relatively small diversity of prior experiences. The authors investigate how to avoid storing a lot of original training data points while avoiding catastrophic forgetting at the same time.\nThe authors propose a complex neural network architecture that has several components. One of the components is a variational autoencoder with discrete latent variables, where the recently proposed Gumbel-softmax distribution is used to efficiently draw samples from a categorical distribution (Jang et al ICLR 2017). Discrete variables are categorical latent variables using 1-hot encoding of the class variables. In fact, in the manuscript, the authors describe one-hot encoding of c classes as an l-dimensional representation. Why is it not c-dimensional? Also, the class probabilities p_i are not defined in (7). \nThis design choice is reasonable, as an autoencoder with categorical latent variables can achieve more storage compression of input observations in comparison with autoencoders with continuous variables. \nAnother component of the proposed model is a recollection buffer/generator, a generative module (alongside the main model) which produces pseudo-experiences. These self-generated pseudo-experiences are sampled from the buffer and are combined with available real samples during training to avoid catastrophic forgetting of prior experiences. This module is inspired by episodic training proposed by Lopez-Paz and Ranzato in ICLR 2017 for continual learning. In fact, a recollection buffer for the MNIST benchmark has 50K codes to store. How fast would it grow with more tasks/training data? Is it suitable for lifelong learning? \n\nMy main concern with this paper is that it is not easy to grasp the gist of it. 
The paper is 11 pages long and often has sections with weakly related motivations described in details (essentially it would be good to cut the first 6 pages into half and concentrate on the relevant aspects only). It is easy to get lost in unimportant details, where as important details on model components are not very clear and not structured. Second concern is limited novelty (from what I understood). \n\n\n", "This paper presents an approach to lifelong learning with episodic experience storage under resource constraints. The key idea of the approach is to store the latent code obtained from a categorical Variational Autoencoder as opposed to the input example itself. When a new task is learnt, catastrophic forgetting is avoided by randomly sampling stored codes corresponding to past experience and adding the corresponding reconstruction to a batch of data from a new problem. The authors show that explicitly storing data provides better results than random sampling from the generative model. Furthermore, the method is compared to other techniques relying on episodic memory and as expected, achieves better results given a fixed effective buffer size due to being able to store more experience.\n\nWhile the core idea of this paper is reasonable, it provides little insight into how episodic experience storage compares to related methods as an approach to lifelong learning. While the authors compare their method to other techniques based on experience replay, I feel that a comparison to other techniques is important. A natural choice would be a model which introduces task-specific parameters for each problem (e.g. (Li & Hoiem, 2016) or (Rusu et al., 2016)).\n\nA major concern is the fact that the VAE with categorical latents itself suffers from catastrophic forgetting. While the authors propose to \"freeze decoder parameters right before each incoming experience and train multiple gradient descent iterations over randomly selected recollection batches before moving on to the next experience\", this makes the approach both less straight-forward to apply and more computationally expensive. \n\nMoreover, the authors only evaluate the approach on simple image recognition tasks (MNIST, CIFAR-100, Omniglot). I feel that an experiment in Reinforcement Learning (e.g. as proposed in (Rusu et al., 2016)) would provide more insight into how the approach behaves in more challenging settings. In particular, it is not clear whether experience replay may lead to negative transfer when subsequent tasks are more diverse.\n\nFinally, the manuscript lacks clarity. As another reviewer noted, detailed sections of weakly related motivations fail to strengthen the reader's understanding. As a minor point, the manuscript contains several grammar and spelling mistakes.", "We appreciate your concern that the VAE itself would suffer from catastrophic forgetting. We have attempted to provide more clarification about how the VAE is able to stabilize itself with self-generated recollections. We have also provided two additional charts to serve as empirical evidence that this happens when training in a continual lifelong learning setting on CIFAR-100. In the left chart of Figure 3, we demonstrate that self-generated recollections can stabilize a lifelong autoencoder as well as real example replay of a comparable resource footprint, and significantly better than online training on CIFAR-100. 
In Figure 4, we demonstrate that after running our CIFAR-100 models for many training examples on CIFAR-10, the benefit of the extra diversity of experiences we can have when using lossy recollections to prevent forgetting outweighs negative effects associated with forgetting that the VAE experiences.\n\nThank you for your question about freezing the decoder parameters before each incoming experience. We have now made this clearer in section 2.4. Instead of freezing the decoder parameters and keeping two copies, we can simply forward propagate for all of the replay mini-batches associated with learning the current example ahead of time. This feature, as well as the encoding and decoding of memories and training of the autoencoder, do indeed add computation over lossless methods. In our experiments, however, these costs were pretty negligible as the computation associated with the larger Resnet-18 reasoning model overshadows the computation associated with our much smaller VAEs. While this strategy does indeed add some computation for our approach, it is also critical for stabilizing the training of the autoencoder for continual lifelong learning, as we explain in section 2.4.\n\nThe techniques you mentioned for comparison are, unfortunately, not suited for the resource constrained lifelong learning problems we explore in this paper. We tried LwF for continual learning saving the old model parameters after every task on CIFAR-100 and found it to be very ineffective in terms of performance. It is also very computationally expensive later in training as the number of terms in the loss function grow linearly with the number of tasks. This result aligns with the experiments in (Lopez-Paz & Ranzato, NIPS 2017) that found a similar forgetting prevention technique EwC (Kirkpatrick et al., PNAS 2017) to be less effective than episodic techniques for lifelong learning on CIFAR-100. This is largely because forgetting prevention techniques focus on retaining poor performance on early tasks while episodic storage techniques continue to improve on these tasks when they learn relevant concepts later. \n\nProgressive Neural Networks have not been shown to scale to the number of tasks and deep residual network architectures that we consider. This is because model parameters scale even more than linearly with the number of tasks due to lateral connections with all prior task representations at each layer. As a result, each new task adds more parameters than the task before it. Our approach is not reliant on human defined tasks to work. Additionally, incremental storage and computation costs from adding model parameters with each task for such a large model consumes far more resources than the episodic storage footprints we consider in our experiments. We should also note that the work of (Rusu et al., 2016) is not directly comparable to ours in that their few task reinforcement learning experiments are not performing continual learning. They use A3C, which may be superior to experience replay methods in terms of wall clock time for convergence. However, A3C involves multiple agents performing RL episodes at the same time on different threads, which is not the same as continual learning of a single agent. While A3C is fast in terms of wall clock time, it is not efficient with the total number of episodes needed to reach good performance. 
On the other hand, this kind of efficiency is an important criteria for the very difficult and ambitious task of continual lifelong learning.\n\nWhile this work does not address the related topic of alleviating negative transfer in multi-task learning, our work does provide a clear advancement in the study of experience replay mechanisms for lifelong learning. Experience storage has been a key component to stabilize training of many of the most successful lifelong learning and reinforcement learning algorithms to date. It is not the goal of this paper to compare this very successful family of methods with other alternatives that function quite differently. \n\n", "Thank you for your comment about the beginning of the paper. We have significantly reorganized the way we present the ideas to address your comment. Hopefully this also helps highlight some of the novel ideas presented in this paper. Our approach is novel in that it is the first that models hippocampal memory index theory using modern deep neural networks. We are also the first to demonstrate how the theory’s signature combination of pattern completion and pattern separation work together to enable faster knowledge transfer using recollections. This capability, in turn, leads to a model that can efficiently distill its knowledge to a student network of a different architecture without storing any real examples. Additionally, it can enable more effective experience replay with superior scaling in resource constrained settings of continual lifelong learning. \n\nWe have also tried to address your confusion related to the description of the Gumbel-Softmax function. To clarify, we are using c variables that are each l dimensional, implying we use c separate one hot encodings of size l to represent a latent code. This is standard practice for discrete latent variable autoencoders. We adopt conventions from (Jang et al., ICLR 2017) where possible in our presentation of the approach. We have also reworked the presentation of our approach to make the scaling considerations clear. A key benefit of the proposed technique we argue for in section 2.4 is that because of transfer learning, scaling is less than linear with the number of experiences in contrast with the linear scaling of storing lossless experiences.\n", "We have attempted to address your concern about retention of knowledge by the reasoning model when it is presented with many additional experiences. In Figure 4, we plot CIFAR-100 model performance after switching from continual lifelong learning on CIFAR-100 to the disjoint set of labels from CIFAR-10 for many training examples. Our results highlight that the increased diversity of experiences helps the resource constrained system retain knowledge better when using lossy storage than it does when using comparable lossless storage techniques. \n\nIn our continual lifelong learning experiments, we store the task and label index along with the latent code in the recollection buffer, as this information is already very light weight. We have reformatted the presentation of the approach to make this clearer in the paper.\n\nRegarding the benefit of the reasoning model not forgetting previously learned knowledge, we would first comment that our approach makes very few assumptions about the reasoning model. This feature would likely be orthogonal and complimentary to our approach in many settings. However, we would like to highlight that our goal isn’t only to prevent forgetting. 
Our goal is to navigate the stability-plasticity dilemma in a way that maximizes performance on old and new examples. Experience replay provides an approximation of i.i.d. stationary random input sampling in non-stationary environments, allowing neural networks to effectively optimize for the true objective with the efficacy of offline training in the limit of an unbounded experience buffer size. (Lopez-Paz & Ranzato, NIPS 2017) found EwC (Kirkpatrick et al., PNAS 2017) a popular forgetting prevention technique to be ineffective relative to techniques leveraging episodic storage for continual lifelong learning on CIFAR-100. One of the big reasons they found for the performance difference was that EwC focuses on retaining its poor performance on early tasks, while techniques with episodic storage continually improve on old examples as they learn relevant concepts later.\n", "We would like to thank the reviewers for their time and feedback. To address reviewer concerns about clarity, we substantially reorganized and edited the paper. We hope our revised draft makes both the novelty and motivation of our approach clearer. There are substantial differences with the earlier version due to an adjusted presentation structure, but we did not significantly change the ideas presented.\n\nWe will now directly address the concerns raised by each reviewer. " ]
[ 5, 5, 5, -1, -1, -1, -1 ]
[ 2, 3, 3, -1, -1, -1, -1 ]
[ "iclr_2018_r1ISxGZRb", "iclr_2018_r1ISxGZRb", "iclr_2018_r1ISxGZRb", "B1GkSWIWM", "S1iEoBnlf", "ryfA9SYez", "iclr_2018_r1ISxGZRb" ]
iclr_2018_S14EogZAZ
Acquiring Target Stacking Skills by Goal-Parameterized Deep Reinforcement Learning
Understanding physical phenomena is a key component of human intelligence and enables physical interaction with previously unseen environments. In this paper, we study how an artificial agent can autonomously acquire this intuition through interaction with the environment. We created a synthetic block stacking environment with physics simulation in which the agent can learn a policy end-to-end through trial and error. Thereby, we bypass to explicitly model physical knowledge within the policy. We are specifically interested in tasks that require the agent to reach a given goal state that may be different for every new trial. To this end, we propose a deep reinforcement learning framework that learns policies which are parametrized by a goal. We validated the model on a toy example navigating in a grid world with different target positions and in a block stacking task with different target structures of the final tower. In contrast to prior work, our policies show better generalization across different goals.
rejected-papers
The authors present a toy stacking task where the goal is to stack blocks to match a given configuration, and a method that is a slightly modified DQN algorithm where the target configuration is observed by the network as well as the current state. There are a few problems with this paper. First, the method lacks novelty - it is very similar to DQN. Second, the claims of learning physical intuitions is not borne out by the method or experimental results. Third, the tasks are very simple and there is no held-out test set of target configurations.
train
[ "HJUMdjteM", "H1uUNm9ef", "B171xj6eM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The authors propose a model for learning physical interaction skills through trial and error. They use end-to-end deep reinforcement learning - the DQN model - including the task goal as an input in order to improve generalization over several tasks, and shaping the reward depending on the visual differences between the goal state and the current state. They show that the task performance of their model is better than the DQN on two simulated tasks.\nThe paper is well-written and clarity is good; it could be slightly improved by updating the title \"Toy example with Goal integration\" to make it consistent with the naming \"navigation task\" used elsewhere.\n\nWhile the proposed model is new to the reviewer's knowledge, the contribution is small. The biggest change compared to the DQN model is the addition of information in the input.\nThe authors initially claim that \"In this paper, [they] study how an artificial agent can autonomously acquire this intuition through interaction with the environment\", however the proposed tasks present little to no realistic physical interaction: the navigation task is a toy problem where no physics is simulated. In the stacking task, only part of the simulation actually uses the physical simulation result. Given that machine learning methods are in general good at finding optimal policies that exploit simulation limitations, this problem seems a threat to the significance of this work.\n\nThe proposed GDQN model shows better performance than the DQN model. However, as the authors do not provide an in-depth analysis of what the network learns (e.g. by testing policies in the absence of an explicit goal), it is difficult to judge if the network learnt a meaningful representation of the world's physics. This limitation, along with potential others, is not discussed in the paper.\n\nFinally, more than a third (10/26) of the references point to Arxiv papers. Despite Arxiv definitely being an important tool for paper availability, it is not peer-reviewed and there is also work there that is unfinished or erroneous. It is thus a necessary condition that all Arxiv references are replaced by the peer-reviewed material when it exists (e.g. Lerer 2016 in ICML or Denil 2016 in ICLR 2017), once again to strengthen the author's claim.", "Summary: This paper proposes to use deep Q-learning to learn how to reconstruct a given tower of blocks, where DQN is also parameterized by the desired goal state in addition to the current observed state.\n\nPros:\n- Impressive results on a difficult block-stacking task.\n\nCons:\n- The idea of parameterizing an RL algorithm by goals is not particularly novel.\n\nQuality and Clarity:\n\nThe paper is extremely well-written, easy to follow, and largely technically correct, though I am somewhat concerned about how the results were obtained as it does not seem like the vanilla DQN agent could do so well, even on the 2-block scenes. Even just including stable scenes, I estimated based on Figure 5 that there must be about 70 different configurations that are stable (and this is likely an underestimate). So, if each of these scenes occurs equally often and the vanilla DQN agent does not receive any information about the target goal and just acts based on an \"average\" policy, I would expect it to only achieve success about 1/70th of the time. 
Am I missing something here?\n\nAnother thing that was unclear to me is how the rotation of the blocks is chosen: is the agent given the next block with the correct rotation, or can it also choose to rotate the block? In the text it is implied that the only actions are {left, right, down}, which seems to simplify the task immensely. It would be interesting to include results where the agent additionally has to choose from actions of {rotate left by 90 degrees, rotate right by 90 degrees}.\n\nAlso: are the scenes used during testing separate from those used during training? If not, it's not obvious that the agent isn't just learning to memorize the solution (which somewhat defeats the idea behind parameterizing the Q-network with new goals every time).\n\nOriginality and Significance:\n\nThe block-stacking task is very cool and is more complex than many other physics-based RL tasks in the literature, which often involve just stacking square blocks in a single tower. I think it is a useful contribution to introduce this task and the GDQN agent as a baseline. However, the notion of parameterizing the policy by the goal state is not particularly novel. While it is true that many RL papers do train to optimize just a single reward function for a single goal, it is also very straightforward to modify the state space to include a goal and indeed [1-4] are just a few examples of recent papers that have done this. In general, any time there is a procedurally generated environment (e.g. Sokoban, as in [5]) the goal necessarily is included as part of the state space---so the idea of GDQN isn't really that new.\n\n[1] Oh, J., Singh, S., Lee, H., & Kohli, P. (2017). Zero-Shot Task Generalization with Multi-Task Deep Reinforcement Learning. arXiv Preprint arXiv:1706.05064.\n[2] Dosovitskiy, A., & Koltun, V. (2017). Learning to act by predicting the future. Proceedings of the 5th International Conference on Learning Representations (ICLR 2017).\n[3] Hamrick, J. B., Ballard, A. J., Pascanu, R., Vinyals, O., Heess, N., & Battaglia, P. W. (2017). Metacontrol for adaptive imagination-based optimization. Proceedings of the 5th International Conference on Learning Representations (ICLR 2017).\n[4] Pascanu, R., Li, Y., Vinyals, O., Heess, N., Buesing, L., Racanière, S., … Battaglia, P. (2017). Learning model-based planning from scratch. arXiv Preprint arXiv: 1707.06170. Retrieved from https://arxiv.org/abs/1707.06170\n[5] Weber, T., Racanière, S., Reichert, D. P., Buesing, L., Guez, A., Rezende, D. J., … Wierstra, D. (2017). Imagination-Augmented Agents for Deep Reinforcement Learning. arXiv Preprint arXiv: 1707.06203. Retrieved from http://arxiv.org/abs/1707.06203", "The authors use a variant of deep RL to solve a simplified 2d physical stacking task. To accommodate different goal stacking states the authors extend the state representation of DQN. The input to the network is the current state of the environment as represented by the 2d projection of the objects in the simulated grid world and a representation of the goal state in the same projection space. The reward function in its basic form rewards only the correctly finished model. A number of heuristics are used to augment this reward function so as to provide shaping rewards along the way and speed up learning. The learnt policy is evaluated on the successful assembly of the target stack and on a distance measure between the stack specified as goal and the actual stack. \n\nCurrently, I don’t understand from the manuscript, how DQN is actually trained. 
Are all different tasks used on a single network? If so, is it surprising that the network performs worse than when augmenting the state representation with the goal? Or are separate DQNs trained for multiple tasks?\n\nThe definition of the value function at the bottom of page 4 uses the definition for continual tasks, but in the current setting the tasks are naturally episodic. This should be reflected by the definition.\n\nIt would be good if the authors could comment on any classic research in RL augmenting the state representation with the goal state and any recent related developments, e.g. multi-task RL or the likes of Dosovitskiy & Koltun “Learning to act by predicting the future”.\n\nIt would be helpful to obtain more information about the navigation task; a plot of some sort would be especially helpful. Currently, it is particularly difficult to judge exactly what the authors did. \n\nHow physically “rich” is this environment compared to some of the cited work, e.g. Yildirim et al. or Battaglia et al.?\n\nOverall it feels as if this is an interesting project but that it is not yet ready for publication. " ]
[ 5, 4, 5 ]
[ 4, 4, 3 ]
[ "iclr_2018_S14EogZAZ", "iclr_2018_S14EogZAZ", "iclr_2018_S14EogZAZ" ]
iclr_2018_rJssAZ-0-
TRL: Discriminative Hints for Scalable Reverse Curriculum Learning
Deep reinforcement learning algorithms have proven successful in a variety of domains. However, tasks with sparse rewards remain challenging when the state space is large. Goal-oriented tasks are among the most typical problems in this domain, where a reward can only be received when the final goal is accomplished. In this work, we propose a potential solution to such problems with the introduction of an experience-based tendency reward mechanism, which provides the agent with additional hints based on a discriminative learning on past experiences during an automated reverse curriculum. This mechanism not only provides dense additional learning signals on what states lead to success, but also allows the agent to retain only this tendency reward instead of the whole histories of experience during multi-phase curriculum learning. We extensively study the advantages of our method on the standard sparse reward domains like Maze and Super Mario Bros and show that our method performs more efficiently and robustly than prior approaches in tasks with long time horizons and large state space. In addition, we demonstrate that using an optional keyframe scheme with very small quantity of key states, our approach can solve difficult robot manipulation challenges directly from perception and sparse rewards.
rejected-papers
The paper proposes an extension to the reverse curriculum RL approach which uses a discriminator to label states as being on a goal trajectory or off the goal trajectory. The paper is well-written, with good empirical results on a number of task domains. However, the method relies on a number of assumptions on the ability of the agent to reset itself and the environment which are unrealistic and limiting, and beg the question as to why use the given method at all if this capability is assumed to exist. Overall, the method lacks significance and quality, and the motivation is not clear enough.
train
[ "B129GzFxf", "r1Kg9atxz", "BkFL6KCxf", "S1-osRMEM", "ryCOkjmmz", "rJRVQimmf", "Bkq9MWEmM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "This paper proposes a new method for reverse curriculum generation by gradually reseting the environment in phases and classifying states that tend to lead to success. It additionally proposes a mechanism for learning from human-provided \"key states\".\n\nThe ideas in this paper are quite nice, but the paper has significant issues with regard to clarity and applicability to real-world problems:\nFirst, it is unclear is the proposed method requires access only high-dimensional observations (e.g. images) during training or if it additionally requires low-dimensional states (e.g. sufficient information to reset the environment). In most compelling problems settings where a low-dimensional representation that sufficiently explains the current state of the world is available during training, then it is also likely that one can write down a nicely shaped reward function using that state information during training, in which case, it makes sense to use such a reward function. This paper seems to require access to low-dimensional states, and specifically considers the sparse-reward setting, which seems contrived.\nSecond, the paper states that the assumption \"when resetting, the agent can be reset to any state\" can be satisfied in problems such as real-world robotic manipulation. This is not correct. If the robot could autonomously reset to any state, then we would have largely solved robotic manipulation. Further, it is not always realistic to assume access to low-dimensional state information during training on a real robotic system (e.g. knowing the poses of all of the objects in the world).\nThird, the experiments section lacks crucial information needed to understand the experiments. What is the state, observation, and action space for each problem setting? What is the reward function for each problem setting? What reinforcement learning algorithm is used in combination with the curriculum and tendency rewards? Are the states and actions continuous or discrete? Without this information, it is difficult to judge the merit of the experimental setting.\nFourth, the proposed method seems to lack motivation, making the proposed scheme seem a bit ad hoc. Could each of the components be motivated further through more discussion and/or ablative studies?\nFinally, the main text of the paper is substantially longer than the recommended page limit. It should be shortened by making the writing more concise.\n\nBeyond my feedback on clarity and significance, here are further pieces of feedback with regard to the technical content, experiments, and related work:\nI'm wondering -- can the reward shaping in Equation 2 be made to satisfy the property of not affecting the final policy? (see Ng et al. '09) If so, such a reward shaping would make the method even more appealing.\nHow do the experiments in section 5.4 compare to prior methods and ablations? Without such a comparison, it is impossible to judge the performance of the proposed method and the level of difficulty of these tasks. At the very least, the paper should compare the performance of the proposed method to the performance a random policy.\n\nThe paper is missing some highly relevant references. First, how does the proposed method compare to hindsight experience replay? [1] Second, learning from keyframes (rather than demonstrations) has been explored in the past [1]. It would be preferable to use the standard terminology of \"keyframe\".\n\n[1] Andrychowicz et al. Hindsight Experience Replay. 2017\n[2] Akgun et al. 
Keyframe-based Learning from Demonstration. 2012\n\nIn summary, I think this paper has a number of promising ideas and experimental results, but given the significant issues in clarity and significance to real world problems, I don't think that the current version of this paper is suitable for publication in ICLR.\n\nMore minor feedback on clarity and correctness:\n- Abstract: \"Deep RL algorithms have proven successful in a vast variety of domains\" -- This is an overstatement.\n- The introduction should be more clear with regard to the assumptions. In particular, it would be helpful to see discussion of requiring human-provided keyframes. As is, it is unclear what is meant by \"checkpoint scheme\", which is not commonly used terminology.\n- \"This kind of spare reward, goal-oriented tasks are considered the most difficult challenges\" -- This is also an overstatement. Long-horizon tasks and high-dimensional observations are also very difficult. Also, the sentence is not grammatically correct.\n- \"That is, environment\" -> \"That is, the environment\"\n- In the last paragraph of the intro, it would be helpful to more clearly state what the experiments can accomplish. Can they handle raw pixel inputs?\n- \"diverse domains\" -> \"diverse simulated domains\"\n- \"a robotic grasping task\" -> \"a simulated robotic grasping task\"\n- There are a number of issues and errors in citations, e.g. missing the year, including the first name, incorrect reference\n- Assumption 1: \\mathcal{P} has not yet been defined.\n- The last two paragraphs of section 3.2 are very difficult to understand without reading the method yet\n- \"conventional RL solver tend\" -> \"conventional RL tend\", also should mention sparse reward in this sentence.\n- Algorithm 1 and Figure 1 are not referenced in the text anywhere, and should be\n- The text in Figure 1 and Figure 3 is extremely small\n- The text in Figure 3 is extremely small\n\n\n", "The authors extend the approach proposed in the \"Reverse Curriculum Learning for Reinforcement Learning\" paper by adding a discriminator that gives a bonus reward to a state based on how likely it thinks the current policy is to reach the goal from said state. The discriminator is a potentially interesting mechanism to approximate multi-step backups in sparse-reward environments. \n\nThe approach of this paper seems severely severely limited by the assumptions made by the authors, mainly assuming a deterministic environment, known goal states and the ability to sample anywhere in the state space. Some of these assumptions may be reasonable in domains such as robotics, but they seem very restrictive in the domains like the games considered in the paper.\n\n\nAdditional Comments:\n\n-The authors demonstrate some benefits of using Tendency rewards, but made little attempt to explain why it leads to accelerated learning. Results are pure performance results.\n\n-The authors should probably structure the tendency reward as potential based instead of using the Gaussian kernel hack they introduce in section 4.2\n\n- Presentation: There are several mistakes and formatting issues in References\n\n- Assumption 2 transformations -> transitions?\n\n-Need to add assumption 3: advance knowledge of goal state\n\n- the use of gamma as a scale factor in equation 2 is confusion, it was already introduced as the discount factor ( which is default notation in RL). 
It also isn't clear what the notation r_f denotes (is it the same as r^f in appendix?).\n\n-It is nice to see that the authors compare their method with alternative approaches. Unfortunately, the proposed method does not seem to offer many benefits. \n", "The authors present a new method for doing reverse curriculum training for reinforcement learning tasks with deterministic dynamics, a desired goal state at which reward is received, and the ability to teleport to any state. This covers a number of important cases of interest, including all simulated domains, and a number of robotics applications. The training proceeds in phases, where in each phase the initial starting set of states is expanded. The initial set of states used is close to the desired state goal. Each phase is initiated when 80% of the states in the current phase can reach the goal. Once the initial set of start states overlaps with the desired initial set of states for the task, training can terminate. During the training in a single phase, the algorithm uses a shaping reward (the tendency) which is based on a binary classifier that predicts if it will be possible to reach the goal from this state. This reward is combined in a hybrid reward signal. The authors suggest the use of a small number of checkpoints to guide the backwards state expansion to improve the search efficiency. Results are presented on several domains: maze, Super Mario, and Mujoco domains. \n\nThe topic of doing more sample efficient training is important and interesting, and the subset of settings the authors consider is still a good set. \n\nThe paper was clearly written though some details were relegated to the appendix which would’ve been useful to see in the main text.\n\nI’m not yet convinced about this method for the desired setting in terms of significance and quality.\n\nAn alternative to using tendency shaping reward would be (during phase expansion) make the new “goal” states any of the states in the previous phase of initial states P_{i} that did reach the goal. This should greatly reduce the decision making horizon needed in each phase. Since the domain is deterministic, as soon as one can reach one of those states, we have a path to the goal. If we care about the number of steps to reach the goal (vs finding any path), then each of the states in P_{i} for which a successful path can be achieved to the goal can also be labeled by the cost / number of time steps to reach the goal. This should decompose the problem into a series of smaller problems. Perhaps I’m missing something-- could the authors please address this suggestion and/or explain why this wouldn’t be beneficial?\n\nThe authors currently use checkpoints to help guide the search towards the true task desired set of initial states. If those are lacking, it seems like the generation of the new P_{i+1} could be biased towards that desired set of states. One approach could be to randomly roll out from the start state and then bias P_{i+1} towards any states close to states along such trajectories. In general one could imagine a situation in which one both does forward learning/planning from the task start state and backwards learning from the goal state to obtain a significant benefit, similar to ideas that have been used in robot motion planning.\n\nWhy learn from pixels for the robot domains considered? Here it would be nice to compare to some robotics approaches. 
With the action space of the robot and motion planning, it seems like this problem could be tackled using existing techniques. It is interesting to have a method that can be used with pixels, but in cases where there are other approaches, it would be useful to compare to them.\n\nSmall point\nD.2 Why not compare to GAIL instead? \n", "> 1a. We respectfully remind that full sentence in our paper is “When resetting, the agent can start from any state s ∈ Pi.” We don’t assume that the agent can reset to any state\n\nThank you for the clarification. However, assuming resets even in Pi is not practical in many robotic manipulation problems, e.g. any problem involving free moving objects such as pushing or pick and place (e.g. when the robot must learn to also move the object back to where it started). \n\n\n> 1b. TRL does need these low-dimensional data to restore visited states during the generation of new phases and doesn’t require these data for real training… Since these low-dimensional data is easy to acquire…\n\nI agree that joint angle and end-effector information is easy to acquire. But in practice, *full* low-dimensional state information is not easy to acquire (i.e. positions of free moving objects) and if you assume access to it during some parts of training, then you might as well use it for all parts of training. For example, imagine you wanted to apply this method to a robot learning pushing an object (a fairly simple task). You would need to put some sort of tracker on the object to get its low-dimensional state. If you need to put a tracker on the object, then you might as well use the tracker during training too.\n\n\nThank you for running the additional experiments. I think that they improve the paper.", "1-Thanks for mentioning that. Actually, this alternative has been carefully considered, and we decided not to use it mainly because this method largely impairs the agent's ability to find new policies. We tested this idea with an experiment setup similar to the one in Appendix E.3 (Fig 10), and found that if we change the goal state to any of the successful states from the previous phase, the agent is highly likely to lose the capability of finding a new shortcut (the fifth graph in Fig 10). The reason is that TRL's reward function is hybrid (tendency + final goal), where the final goal reward is meant to guarantee the agent's motivation for finding new policies. That’s why keeping the final goal state constant in training of each phase makes sense.\n\n2-Thanks for your suggestion. Based on some experiments on this idea, we find that in small state space tasks (e.g. the Maze) this approach can lead to similar performance compared to keyframe scheme (\"checkpoint\" is renamed \"keyframe\"), but it might be impractical in large state space multistage tasks such as “Pick and Place”. Since the shaping of tendency reward hasn’t covered the area close to the start state, exploration beginning from the start state might be biased as well, and the complexity of generating P_{i+1} can be very high. As a matter of fact, several keyframes can already solve this problem well in these domains.\n\n3-We learn from raw pixel perceptions based on the assessment that it is a more general form of environment information and contains more details of the environment than low-dimensional data. Classic approaches, due to hand-designed detectors and grasp policies, cannot be easily generalized to new objects or varying background scenes. 
Additionally, images are less expensive to acquire and are more practical than precise sensor information. Taking robotic grasping and picking as an example, the location and shape of the object are hard to acquire and define (we cannot mount sensors everywhere), we will have to rely on perceptions (image or video).\n\n4-TRL does NOT fall in the track of imitation learning. The optional keyframes are only used in large-scale experiments like grasping from perception, not in simpler ones like Mario and Maze. By our design, TRL works without any expert policies. The keyframe scheme only helps to shrink search space and does not influence the learned policy. Our experiments show that the agent does not necessarily follow the keyframes (Appendix E.3 Fig 10).", "1-As is claimed in the paper, our assumption follows [Carlos et al 2017]. For deterministic environments, we found it not necessary since we can change the discriminator to the probability of success between 0-1 and TRL can then handle stochastic as well. We have revised the claim. For the sample-anywhere assumption, in fact, we don’t need to reach everywhere but only start states in the current phase which it has reached during the generation process. We can record those states through low dimensional data (angles of joint etc) easily. In games, actually we find it’s easier than robotics to reset to any state given access to the corresponding API from developers. Given that many game developers are interested in training AI agent automatically for their games, such APIs are usually not hard to acquire.\n\n2-As is explained in the Introduction, we have pointed out the reason why the method in Reverse Curriculum paper is lack of efficiency (Close to the end of the 2nd paragraph). Then we show that with the help of tendency rewards, our model can get rid of the unnecessary time-consuming reviewing process where the agent switches start states between old and new ones to avoid forgetting old policies (End of the 3rd & middle of the 4th paragraph). To prove our idea, we make a comparison in Experiment 5.1 (Fig 3), which shows our advantage in efficiency compared to Reverse Curriculum algorithm. TRL’s main advantage over reverse curriculum is that it no longer requires keeping all starting sets. \n\n3-Thanks for mentioning the potential based reward shaping. However, if we define the shaped reward as r = T(St’) - T(St), although this approach can avoid repeated rewards, it still suffer from the reward sparsity problem, since T(St’_positive) - T(St_positive) and T(St’_negative)- T(St_negative) remain 0 at most time and won’t help the agent learn to tackle these tasks.\n\n4-Thanks! We have added this assumption. This assumption is also listed in [Carlos et al 2017].\n\n5-This gamma is only used for weight balance for two rewards. We are sorry to use a confusing notation. Another notation $\\lambda$ has been used to address the confusion.\n\n6-We explained in Experiment 5.3 that the reward function used in PBRS is well hand-engineered by us. We tried more than 10 different reward functions shaped from demonstration and keep adjusting them to let PBRS solve this task. In our experiments, only 2 of all the reward functions we tried can let PBRS work, the others are not shown on the Fig 6. This approach costs much human elaboration and different maps in the Maze need different reward function. Moreover, in most robotic domains where the reward function cannot be easily shaped by hands, human elaboration will increase to an unpractical level. 
TRL is able to solve this problem with negligible human elaboration with merely several labeled keyframes (\"checkpoint\" is renamed \"keyframe\"). We also proved TRL’s robustness to keyframes with different quality and scale in Appendix E.3 & E.4 (Fig 10, Fig 11). Although the training efficiency of TRL and PBRS may seem similar in the figure, the human elaboration behind the performance is quite different.", "1-We respectfully remind that full sentence in our paper is “When resetting, the agent can start from any state s ∈ Pi.” We don’t assume that the agent can reset to any state. Actually, we only assume that it can reset to a certain state in each phase where it has reached before. Thanks for mentioning the access to low-dimensional states. TRL does need these low-dimensional data to restore visited states during the generation of new phases and doesn’t require these data for real training. During each generation process, the newly sampled states will be stored in the form of low-dimensional states such as the angle of joints and velocity of motors. Since these low-dimensional data is easy to acquire and only used for resetting the agent, we just summarized it as “a way of adding new states to the new phase”. It seems that there is no need for special emphasize.\n\n2-As is mentioned in the last paragraph of Introduction: “The major contribution of this work is that we present a reliable tendency reinforcement learning method that is capable of training agents to solve large state space tasks with only final reward. ” This is our reward setting and is just the definition of goal-oriented tasks. And the detail of experiments is also shown in Appendix C, where we explain all of the settings. The RL used in all of our experiments is A3C and our action control is discrete.\n\n3-There are three components: (a) Phase administrator (b) Tendency reward (c) Keyframes (\"checkpoint\" is renamed \"keyframe\")\nWe ran rough ablation studies with three different settings of difficulties: \n(i) small state space with only final reward (10*10 Maze with observation 10*10): None of the three components are needed since a traditional RL method can tackle it. \n(ii) medium state space with only final reward (40*40 Maze with observation 9*9, Mario Bros): We can solve it by only using (b) with around 53000 training steps(40*40 Maze). We can also accelerate learning by combine (b) and (a), which will take around 35000 training steps. \n(iii) large state space with only final reward (100*100 Maze with observation 9*9, robotic manipulation from perception(grasping, pick and place)): We use (a), (b)and (c) to solve these problems. If we only use (a) and (b), the generation of each phase might be biased and will fail in multistage tasks. Then we include (c) and test the influence of keyframes with different quality and scale (Appendix E.3 E.4 Fig 10 11). We do not find clear relationship between the number of keyframes and the efficiency of training, but keyframes can indeed help TRL learn well (33000 iterations in Grasping, 99000 iterations in Conveyance challenge).\n\n4-We ran some tests based on [Ng et al 1999] and found that if we structure the tendency reward as potential based, the efficiency will largely decrease. We tested it in 40*40 Maze with observation 9*9. 
Since the tendency reward would then be defined as r = T(St’) - T(St), the hybrid reward is still very sparse and the agent takes more than 50000 iterations to complete 60% of the whole task (our method takes around 35000 steps to complete the whole one).\n\n5-Goal-oriented tasks are among the most difficult challenges in RL and traditional methods (e.g. TRPO, AC, PPO) alone are not capable of tackling them. The most recent approach to tackle them is based on intrinsic motivation. We made an experiment comparing TRL with curiosity-driven RL in Appendix E.1.2 (Table 2) and showed TRL’s advantages. Other methods mainly focus on tackling this problem with demonstrations, which we also compare TRL with in Experiment 5.3 (Fig 6). The result shows that we only need a small number of keyframes to achieve better results compared to them, without much human elaboration or a well hand-engineered reward function.\n\n6-Thanks. We have incorporated these two works in the discussion." ]
[ 4, 4, 5, -1, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, -1 ]
[ "iclr_2018_rJssAZ-0-", "iclr_2018_rJssAZ-0-", "iclr_2018_rJssAZ-0-", "Bkq9MWEmM", "BkFL6KCxf", "r1Kg9atxz", "B129GzFxf" ]
iclr_2018_BkeC_J-R-
Combination of Supervised and Reinforcement Learning For Vision-Based Autonomous Control
Reinforcement learning methods have recently achieved impressive results on a wide range of control problems. However, especially with complex inputs, they still require an extensive amount of training data in order to converge to a meaningful solution. This limitation largely prohibits their usage for complex input spaces such as video signals, and it is still impossible to use it for a number of complex problems in a real world environments, including many of those for video based control. Supervised learning, on the contrary, is capable of learning on a relatively small number of samples, however it does not take into account reward-based control policies and is not capable to provide independent control policies. In this article we propose a model-free control method, which uses a combination of reinforcement and supervised learning for autonomous control and paves the way towards policy based control in real world environments. We use SpeedDreams/TORCS video game to demonstrate that our approach requires much less samples (hundreds of thousands against millions or tens of millions) comparing to the state-of-the-art reinforcement learning techniques on similar data, and at the same time overcomes both supervised and reinforcement learning approaches in terms of quality. Additionally, we demonstrate the applicability of the method to MuJoCo control problems.
rejected-papers
The proposed method combines supervised pretraining given some expert data and further uses the supervision to regularize the Q-updates to prevent the agent from exploring 'nonsense' directions. There a significant problems with the paper: the approach is not novel, the assumption of large amounts of expert data is problematic, and the claim of vastly accelerated learning is not supported empirically, either in the main paper or in the additional mujoco experiments added in the appendix.
train
[ "BJCXSFZgz", "SJmcBU_ez", "Bk05KWcgz", "HkJ5oC37M", "rJyGRRhmM", "Sku3FC27f", "ry-wuA3QG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "This paper proposes leveraging labelled controlled data to accelerate reinforcement-based learning of a control policy. It provides two main contributions: pre-training the policy network of a DDPG agent in a supervised manner so that it begins in reasonable state-action distribution and regalurizing the Q-updates of the q-network to be biased towards existing actions. The authors use the TORCS enviroment to demonstrate the performance of their method both in final cumulative return of the policy and speed of learning.\n\nThis paper is easy to understand but has a couple shortcomings and some fatal (but reparable) flaws:.\n\n1) When using RL please try to standardize your notation to that used by the community, it makes things much easier to read. I would strongly suggest avoiding your notation a(x|\\Theta) and using \\pi(x) (subscripting theta or making conditional is somewhat less important). Your a(.) function seems to be the policy here, which is invariable denoted \\pi in the RL literature. There has been recent effort to clean up RL notation which is presented here: https://sites.ualberta.ca/~szepesva/papers/RLAlgsInMDPs.pdf. You have no obligation to use this notation but it does make reading of your paper much easier on others in the community. This is more of a shortcoming than a fundamental issue.\n\n2) More fatally, you have failed to compare your algorithm's performance against benchline implementations of similar algorithms. It is almost trivial to run DDPG on Torcs using the openAI baselines package [https://github.com/openai/baselines]. I would have loved, for example, to see the effects of simply pre-training the DDPG actor on supervised data, vs. adding your mixture loss on the critic. Using the baselines would have (maybe) made a very compelling graph showing DDPG, DDPG + actor pre-training, and then your complete method.\n\n3) And finally, perhaps complementary to point 2), you really need to provide examples on more than one environment. Each of these simulated environments has its own pathologies linked to determenism, reward structure, and other environment particularities. Almost every algorithm I've seen published will often beat baselines on one environment and then fail to improve or even be wors on others, so it is important to at least run on a series of these. Mujoco + AI Gym should make this really easy to do (for reference, I have no relatinship with OpenAI). Running at least cartpole (which is a very well understood control task), and then perhaps reacher, swimmer, half-cheetah etc. using a known contoller as your behavior policy (behavior policy is a good term for your data-generating policy.)\n\n4) In terms of state of the art you are very close to Todd Hester et. al's paper on imitation learning, and although you cite it, you should contrast your approach more clearly with the one in that paper. Please also have a look at some more recent work my Matej Vecerik, Todd Hester & Jon Scholz: 'Leveraging Demonstrations for Deep Reinforcement Learning on Robotics Problems with Sparse Rewards' for an approach that is pretty similar to yours.\n\nOverall I think your intuitions and ideas are good, but the paper does not do a good enough job justifying empirically that your approach provides any advantages over existing methods. 
The idea of pre-training the policy net has been tried before (although I can't find a published reference) and in my experience will help on certain problems, and hinder on others, primarily because the policy network is already 'overfit' somewhat to the expert, and may have a hard time moving to a more optimal space. Because of this experience I would need more supporting evidence that your method actually generalizes to more than one RL environment.", "This paper proposes to combine reinforcement learning with supervised learning to speed up learning. Unlike their claim in the paper, the idea of combining supervised and RL is not new. A good example of this is a supervised actor-critic by Barto (2004). I think even alphaGo uses some form of supervision. However, if I understand correctly, it seems that combining supervision of RL at a later fine-tuning phase by considering supervision as a regularization term is an interesting idea that seems novel.\n\nHaving the luxury of some supervised episodes is of course useful. The first step of building a supervised initial model looks straight forward. The next step of the algorithm is less easy to follow, and presentation of the ideas could be much better. This part of the paper leaves me already with many questions such as why is it essential to consider only a deterministic case and also to consider greedy optimization? Doesn’t this prevent exploration? What are the network parameters (e.g. size of layers) etc. I am not sure I could redo the work from the provided information.\n\nOverall, it is unclear to me what the advantage of the algorithm is over pure supervised learning, and I don’t think a compelling case has been made. Since the influence of the supervision is increased by increasing alpha, it can be expected that results should be better for increasing alpha. The results seem to indicate that an intermediate level of alpha is best, though I would even question the statistical significance by looking at the curves in Figure 3. Also, what is the epoch number, and why is this 1 for alpha=0? If the combination of supervised learning with RL is better, than this should be clearly stated. Some argument is made that pure supervision is overfitting, but would one then not simply add some other regularizer? \n\nThe presentation could also be improved with some language edits. Several articles are wrongly placed and even some meaning is unclear. For example, the phrase “continuous input sequence” does not make sense; maybe you mean “input sequence of real valued quantities”.\n\nIn summary, while the paper contains some good ideas, I certainly think it needs more work to make a clear case for this method. \n", "\nThe paper was fairly easy to follow, but I would not say it was well written. These are minor annoyances; there were some typos and a strange citation format. There is nothing wrong with the fundamental idea itself, but given the experimental results it just is not clear that it is working.\n\nThe bot performance significantly better than the fully trained agent. This leads to a few questions:\n\n1. What was the performance of the \"regression policy\", that was learned during the supervised pretraining phase?\n2. Given enough time would the basic RL agent reach similar performance? (Guessing no...) Why not?\n3. Considering the results of Figure 3 (right) shouldn't the conclusion be that the RL portion is essentially contributing nothing?\n\nPros:\nThe regularization of the Q-values w.r.t. 
the policy of another agent is interesting\n\nCons:\nNot very well setup experiments\nPerformance is lower than you would expect just using supervised training\nNot clear what parts are working and what parts are not\n\n\n", "Thank you so much for these valuable comments. We have carefully considered them in order to improve the contents of the paper.\nBelow you can see our comments on your questions:\n\n“ This paper proposes to combine reinforcement learning with supervised learning to speed up learning. Unlike their claim in the paper, the idea of combining supervised and RL is not new. A good example of this is a supervised actor-critic by Barto (2004). I think even alphaGo uses some form of supervision. However, if I understand correctly, it seems that combining supervision of RL at a later fine-tuning phase by considering supervision as a regularization term is an interesting idea that seems novel.”\nThank you for mentioning this previous works. We have cited Rosenstein and Barto in our new revision, and amended our claims (see the very end of introduction).\n\n\n“Having the luxury of some supervised episodes is of course useful. The first step of building a supervised initial model looks straight forward. The next step of the algorithm is less easy to follow, and presentation of the ideas could be much better. This part of the paper leaves me already with many questions such as why is it essential to consider only a deterministic case and also to consider greedy optimization? Doesn’t this prevent exploration? What are the network parameters (e.g. size of layers) etc. I am not sure I could redo the work from the provided information.”\nThe greedy optimisation method was chosen in order to meet the requirements of real-time policy testing (first line in the while loop in the algorithm). We believe that the practical necessity of minimising the difference between the measurements per second rate in testing and training scenarios is vital for real world control scenarios as, unlike the gym environment, it would not be possible to wait for the training procedures. We’ve made the amendments in the discussion before the algorithm in order to highlight this issue. Considering the exploration prevention — we believe that it is a very valid case to explore, and we are now working on exploring this case in different reinforcement learning algorithms beyond the strategies described in this paper.\nConsidering the network parameters — again, we completely agree with you that it was an omission, which we have corrected in this version (see updated Appendix A). \n\n\n“Overall, it is unclear to me what the advantage of the algorithm is over pure supervised learning, and I don’t think a compelling case has been made. Since the influence of the supervision is increased by increasing alpha, it can be expected that results should be better for increasing alpha. The results seem to indicate that an intermediate level of alpha is best, though I would even question the statistical significance by looking at the curves in Figure 3. Also, what is the epoch number, and why is this 1 for alpha=0? If the combination of supervised learning with RL is better, than this should be clearly stated. Some argument is made that pure supervision is overfitting, but would one then not simply add some other regularizer? ”\nThank you for mentioning the error with epoch number. We’ve amended the algorithm in order to reflect our view that the epochs are counted from one, not zero, as it is in the rest of the text. 
We believe that this may be considered as a special kind of regulariser which is explicitly aimed to maximising the discounted reward. And in order to further analyse the mutual impact and statistical significance of the improvement of the proposed method, we have made some additional tests with MuJoCo environment which we have presented in Appendix B. We show that in some cases (Hopper) the proposed regularisation could overcome not only the reinforcement learning algorithm but also the reference actor.\n\n\n“The presentation could also be improved with some language edits. Several articles are wrongly placed and even some meaning is unclear. For example, the phrase “continuous input sequence” does not make sense; maybe you mean “input sequence of real valued quantities”.\nWe’ve made some amendments in the text in order to improve the presentation as you can see throughout the updated text of the paper. Many thanks for noting this.", "First of all, many thanks for your response. We have addressed the points you’ve mentioned in order to improve the quality of the article. \n Below you can find our responses on your questions and suggestions. \n\nConsidering the first point, thank you for mentioning this, in the new version of the paper we have changed the notation in order to improve the overall quality of presentation as you could see throughout all the text. We have also amended the text in a bid to improve the overall quality of the text.\n\nOn the second and third points, we completely agree that we need some further tests. And it is greatly appreciated that you could even show us the way to conduct these experiments with OpenAI package. We have carried out such experiments on MuJoCo scenarios and presented them in the Appendix B. \n\nConsidering the fourth point, we’ve read the paper by Matej Vecerik et al., it is indeed somewhat similar (but not the same) to what we propose. We have also amended the introduction to reflect that the authors of this paper are proposing similar ideas and contrast them to the ours. It is also remarkable that they are also thinking of applying it online to real-world scenario. However, their approach is different methodologically, as the authors of the paper are injecting the data into the replay buffer while we are regularising the Q-function. Also, it differs in terms of the application: the authors do not aim to use it for video and focus on robotic applications in real world.", "Thank you for these very meaningful comments. In the following paragraphs we explain the amendments we’ve made in order to address the raised issues.\n\nConsidering the first point, ‘What was the performance of the \"regression policy\", that was learned during the supervised pretraining phase?’, we’ve amended the text of the article to explain that, according to Algorithm 1, the pretraining stage performance is evaluated during the first epoch. Therefore the points for the stage one in the graphs in Figure 3 show the performance of the retrained stage. We have put the additional explanations to the section 3.2.\n\nFor the second point, “Given enough time would the basic RL agent reach similar performance? (Guessing no...) 
Why not?” In order to make the necessary assessments for this point, we have carried out the additional experiments in Appendix B on MuJoCo tasks, which confirm that while usually it is limited by the performance of the RL method in some cases pretraining allows to go even beyond the capabilities of both the ‘pure’ RL method and the supervised model performance (see Figure 6, Hopper scenario). But our claim is that by pretraining and supervised learning assistance we minimise the time of ‘nonsense’ control signals with extremely low rewards in order to enable the real-time training scenarios (with potential applications to real world environments). \n\nConsidering the third point, ‘Considering the results of Figure 3 (right) shouldn't the conclusion be that the RL portion is essentially contributing nothing?’, despite the reasonably bad performance of the RL portion on this task, the combination of reinforcement and supervised learning still provides better results in terms of both maximum and average rewards. But as we totally agree with you that this evidence in the original paper was not sufficient, we hope the additional experiments on MuJoCo tasks would strengthen this point. ", "Many thanks for the very useful comments from all the reviewers. We have taken them into account with the following list of amendments:\n- We have added a new Appendix B, describing the results of the experiments on MuJoCo tasks. Reflecting these changes, we have also added the references on OpenAI baselines and MuJoCo publications.\n- Throughout the description of the method, we have made the notation closer to the one used in many of the papers within the community (a is replaced by \\pi, and where it was possible, we have removed parameterisation Theta_pi, which was cluttering the notation).\n- We have slightly amended the claims (in abstract and the introduction) in order to address the comments from the reviewers. It includes: stating in the introduction that the combination of reinforcement and supervised learning did exist before but not in the problem statement of supervised regularisation for the optimisation problem; adding the information about the previous works in supervised actor-critic by Barto (2004), and also Matej Vecerik, Todd Hester & Jon Scholz: 'Leveraging Demonstrations for Deep Reinforcement Learning on Robotics Problems with Sparse Rewards’. We have also contrasted in the introduction the differences between those approaches and the proposed one.\n- We have also repeatedly proofread the text in order to remove ambiguities, including those found by the reviewers ( “continuous input subsequence” -> “subsequences of real valued quantities”). Also we have stated explicitly for the Algorithm 1 that “the 0-th epoch's testing episodes reflect the performance of the model with supervised pretraining.”\n- We have changed the alignment of some figures (notably Figure 4 and 5) in order to improve presentation\n- We have added the network parameters (sizes of the layers) to Appendix A to ensure repeatability of the experiments." ]
[ 4, 5, 3, -1, -1, -1, -1 ]
[ 5, 3, 4, -1, -1, -1, -1 ]
[ "iclr_2018_BkeC_J-R-", "iclr_2018_BkeC_J-R-", "iclr_2018_BkeC_J-R-", "SJmcBU_ez", "BJCXSFZgz", "Bk05KWcgz", "iclr_2018_BkeC_J-R-" ]
iclr_2018_HktXuGb0-
Reward Estimation via State Prediction
Reinforcement learning typically requires carefully designed reward functions in order to learn the desired behavior. We present a novel reward estimation method that is based on a finite sample of optimal state trajectories from expert demonstrations and can be used for guiding an agent to mimic the expert behavior. The optimal state trajectories are used to learn a generative or predictive model of the “good” states distribution. The reward signal is computed by a function of the difference between the actual next state acquired by the agent and the predicted next state given by the learned generative or predictive model. With this inferred reward function, we perform standard reinforcement learning in the inner loop to guide the agent to learn the given task. Experimental evaluations across a range of tasks demonstrate that the proposed method produces superior performance compared to standard reinforcement learning with both complete or sparse hand engineered rewards. Furthermore, we show that our method successfully enables an agent to learn good actions directly from expert player video of games such as the Super Mario Bros and Flappy Bird.
rejected-papers
The paper presents a method for learning from expert state trajectories using a similarity metric in a learned feature space. The approach uses only the states, not the actions of the expert. The reviewers were variously dissatisfied with the novelty, the theoretical presentation, and the robustness of the approach. Though it empirically works better than the baselines (without expert demos), this is not surprising, especially since thousands of expert demonstrations were used. This would have been more impressive with fewer demonstrations, or more novelty in the method, or more evidence of robustness when the agent's state is far from the demonstrations.
train
[ "S1ucldOlf", "S1qg275gM", "SkwCEXalM", "r1WNJ7KfM", "B1TekQFMG", "B1tP0GFfG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "The authors propose to solve the inverse reinforcement learning problem of inferring the reward function from observations of a behaving agent, i.e. trajectories, albeit without observing state-action pairs as is common in IRL but only with the state sequences. This is an interesting problem setting. But, apparently, this is not the problem the authors actually solve, according to eq. 1-5. Particularly eq. 1 is rather peculiar. The main idea of RL in MDPs is that agents do not maximize immediate rewards but instead long term rewards. I am not sure how this greedy action should result in maximizing the total discounted reward along a trajectory. \nEquation 3 seems to be a cost function penalizing differences between predicted and observed states. As such, it implements a sort of policy imitation, but that is quite different from the notion of reward in RL and IRL. Similarly, equation 4 penalizes differences between predicted and observed state transitions. \nEssentially, the current manuscript does not learn the reward function of an MDP in the RL setting, but it learns some sort of a shaping reward function to do policy imitation, i.e. copy the behavior of the demonstrator as closely as possible. This is not learning the underlying reward function. So, in my view, the manuscript does a nice job at policy fitting, but this is not reward estimation. The manuscript has to be rewritten that way. \nOne could also argue that the manuscript would profit from a better theoretical analysis of the IRL problem, say:\nC. A. Rothkopf, C. Dimitrakakis. Preference elicitation and inverse reinforcement learning. ECML 2011\nOverall the manuscript leverages on deep learning’s power of function approximation and the simulation results are nice, but in terms of the soundness of the underlying RL and IRL theory there is some work to do.", "This paper uses inverse reinforcement learning to infer additional shaping rewards from demonstrated expert trajectories. The key distinction from many previous works in this area is that the expert’s actions are assumed to not be available, and the inferred reward on a transition is assumed to be a function of the previous and subsequent state. The expert trajectories are first used to train either a generative model or an LSTM on next state prediction. The inferred reward for a newly experienced transition is then defined from the negative error between the predicted and actual next state. The method is tested on several reacher tasks (low dimensional continuous control), as well as on two video games (Super Mario Bros and Flappy Bird). The results are positive, though they are often below the performance of behavioral cloning (which only trains from the expert data but also uses the expert’s actions). The proposed methods perform competitively with hand-designed dense shaping rewards for each task.\n\nThe main weakness of the proposed approach is that the addition of extra rewards from the expert trajectories seems to skew the system’s asymptotic behavior away from the objective provided by the actual environment reward. One way to address this would be to use the expert trajectories to infer not only a reward function, but also an initial state value function (trained on the expert trajectories with the inferred reward). This initial value function could be added to the learned value function and would not limit asymptotic performance (unlike the addition of inferred rewards as proposed here). 
This connection between reward shaping and initial Q values was described by Wiewiora in 2003 (“Potential-based Shaping and Q-Value Initialization are Equivalent”). \n\nI am also uncertain of the robustness of the proposed approach when the learning agent goes beyond the distribution of states provided by the expert (where the inferred reward model has support). Will the inferred reward function in these situations go towards zero? Will the inferred reward skew the learning algorithm to a worse policy? How does one automatically balance the reward scale provided by the environment with the reward scaling provided by psi, or is this also assumed to be manually crafted for each domain? These questions make me uncertain of the utility of the proposed method.\n", "To speed up RL algorithms, the authors propose a simple method based on utilizing expert demonstrations. The proposed method consists in explicitly learning a prediction function that maps each time-step into a state. This function is learned from expert demonstrations. The cost of visiting a state is then defined as the distance between that state and the predicted state according to the learned function. This reward is then used in standard RL algorithms to learn to stick close to the expert's demonstrations. An on-loop variant of this method consists of learning a function that maps each state into a next state according to the expert, instead of the off-loop function that maps time-steps into states.\nWhile the experiments clearly show the advantage of this method, this is hardly surprising or novel. The concept of encoding the demonstration explicitly in the form of a reward has been around for over a decade. This is the most basic form of teaching by demonstration. Previous works had used other models for generalizing demonstrations (GMMs, GPs, kernel methods, neural nets, etc.). This paper uses a three-layered fully connected auto-encoder (which is not that deep of a model, btw) for the same purpose. The idea of using this model as a reward instead of directly cloning the demonstrations is pretty straightforward. \n\nOther comments:\n- Most IRL methods would work just fine by defining rewards on states only and ignoring actions altogether. If you know the transition function, you can choose actions that lead to highly rewarding states, so you don't need to know the expert's executed actions.\n- \"We assume that maximizing likelihood of next step prediction in equation 1 will be globally optimized in RL\". Could you elaborate more on this assumption? Your model finds rewards based on local state features, where a greedy (one-step planning) policy would reproduce the expert's demonstrations (if the system is deterministic). It does not compare the global performance of the expert to alternative policies (as is typically done in IRL).\n- Related to the previous point: a reward function that makes every step of the expert optimal may not always exist. The expert may choose to go to terrible states with the hope of getting to a highly rewarding state in the future. Therefore, the objective functions set in this paper may not be the right ones, unless your state description contains features related to future states so that you can incorporate future rewards in the current state (like in the reacher task, where a single image contains all the information about the problem). What you need is actually features that can capture the value function (like in DQN) and not just the immediate reward (as is done in IRL methods). 
\n- What if in two different trajectories, the expert chooses opposite actions for the same state appearing in both trajectories? For example, there are two shortest paths to a goal, one starts with going left and another starts with going right. If you try to generate a state that minimizes the sum of distances to the two states (left and right ones), then you may choose to remain in the middle, which is suboptimal. You wouldn't have this issue with regular IRL techniques, because you can explain both behaviors with future rewards instead of trying to explain every action of the expert using only local state description. ", "Thank you very much for your comments.\nWe are very happy you understood this is nice simulations and interesting problem setting.\nAnd we put the answers to your questions and suspicions, also we updated the paper by following your comments.\n\n>I am not sure how this greedy action should result in maximizing the total discounted reward along a trajectory.\n\nThis is a very important point of this proposed method.\nThe expert agent (it will be also human demonstrations in some tasks) will do actions that are maximizing the future reward.\nThe proposed method will be trained as similar as possible to the expert agent by equation 1.\nWhen the agent took similar actions, it will get the high reward.\nAnd, we elaborated the context by your comments also.\n\"We assume the maximizing likelihood of next step prediction in equation 1 will be globally optimized in RL.\"\n->\n\"We assume the performing to maximize the likelihood of next step prediction in equation 1 will be leading the maximizing the future reward when the task is deterministic. Because this likelihood is based on similarity with demonstrations which are obtained while an expert agent is performing by maximizing the future reward. Therefore we assume the agent will be maximizing future reward when it takes the action that gets the similar next step to expert demonstration trajectory data \\tau. \"\n\n>Essentially, the current manuscript does not learn the reward function of an MDP in the RL setting, but it learns some sort of a shaping reward function to do policy imitation, i.e. 
copy the behavior of the demonstrator as closely as possible.\n\nActually, this is true, we agree this opinion.\nThe objective of proposed reward is copying the behavior of the demonstrator.\nHowever, with our assumption, the agent could not get the \"actual\" reward during testing, but the expert agent got the actual reward or knew the task.\nThen the reward of the proposed method is based on similarity of behavior with the demonstrator.\nSo, the predicted reward likes (hidden) actual reward that is used by the expert agent.\nWe used \"reward estimation\" for such meaning.\n\nAnd also, if we could use the \"actual\" reward during testing, the agent can simply combine these rewards and do some explorations for normal RL.\n", "Thank you very much for your comments.\nWe are very happy you understood the effectiveness of the proposed method.\nAnd we put the answers to your questions and suspicions, also we updated the paper by following your comments.\n\n>The main weakness of the proposed approach is that the addition of extra rewards from the expert trajectories seems to skew the system’s asymptotic behavior away from the objective provided by the actual environment reward\n\nActually, yes, that's true.\nThe proposed method will try to get the \"actual\" environment reward from the demonstrations from the expert agent that is having \\pi^*.\nThe reward of the proposed method is not perfectly same as such actual reward, of course.\n\n>This connection between reward shaping and initial Q values was described by Wiewirora in 2003\n\nThank you for suggesting the new reference.\nWe added this paper as references.\n\"Another use of the expert demonstrations is initializing the value function; this was described by Wiewiora (2003).\"\n\n>I am also uncertain of the robustness of the proposed approach when the learning agent goes beyond the distribution of states provided by the expert (where the inferred reward model has support). 
Will the inferred reward function in these situations go towards zero?\n\nWe agree the robustness of the proposed method is very difficult to understand.\nHence, we tried to apply to many experiments in different environments.\n\nWe expected the inferred reward will be zero, when the state will be beyond the distribution of expert states.\nWe confirmed these point experimentally.\nPlease see the figure 3 and figure 8, fig 3 shows the reward value for each point in reacher task, and fig 8 shows the kind of distribution.\nThe reward value at a place that shown low frequent is nearly zero.\nOn the other hand, the reward value in the distribution of expert states is high value.\n\n>Will the inferred reward skew the learning algorithm to a worse policy?\n\nThe proposed method will not lead to training worse policy.\nBecause the proposed reward estimation network has been trained from demonstrations of given expert agent.\nHowever, of course, if the given agent has a bad policy, it will learn this policy.\nOn the other hand, if the inferred reward skew to a worse policy, the RL will not be converged.\nIn all experiments of this paper, the proposed method converged good behaviors.\n\n>How does one automatically balance the reward scale provided by the environment with the the reward scaling provided by psi, or is this also assumed to be manually crafted for each domain?\n\nActually, if we use the tanh or exp function for \\phi, the reward shape was similar.\nBut \\beta in tanh or \\sigma in exp is important for RL training.\nIf the \\beta is too high or too low, the convergence will be slow or the reward will be jerky.\nIn this paper, we tried a few values for each domain and picked one of it.\n(we forgot to describe this setting for \\sigma, so we added the way to choose this hyper-parameter)\n", "Thank you very much for your comments.\nWe are very happy you understood the benefit of the proposed method.\nAnd we put the answers to your questions and suspicions, also we updated the paper by following your comments.\n\n>Previous works had used other models for generalizing demonstrations\n\nBy our understandings, the other methods are always using the action information of demonstrations, which is simple and straightforward, such as behavior cloning. \nBut in this paper, we are tackling without action information for demonstrations.\nIf you know, could you please give references about the previous works that are only using observation values?\nIf there are similar methods, we want to compare with the proposed method.\n\nAnd, we are thinking GMMs or GPs could be difficult to predict reward for image inputs.\nIt could not adopt the differences between parts of the image, the convolution layer must be needed.\nAnd we are thinking LSTM and 3D-CNN also consider the time-sequence values, that will be another advantage from these methods.\n\n> - Most IRL methods would work just fine by defining rewards on states only and ignoring actions all together.....\n\nOur assumption is the agent doesn't know the transition function as well as optimal actions.\nWe agree if we know the function, we could use this function values to getting expert actions.\n\n> - \"We assume that maximizing likelihood of next step prediction in equation 1 will be globally optimized in RL\". Could you elaborate more on this assumption? 
....\n\nWe elaborated the context by your comments.\n\"We assume the maximizing likelihood of next step prediction in equation 1 will be globally optimized in RL.\"\n->\n\"We assume the performing to maximize the likelihood of next step prediction in equation 1 will be leading the maximizing the future reward when the task is deterministic. Because this likelihood is based on similarity with demonstrations which are obtained while an expert agent is performing by maximizing the future reward. Therefore we assume the agent will be maximizing future reward when it takes the action that gets the similar next step to expert demonstration trajectory data \\tau. \"\n\n> - Related to the previous point: a reward function that makes every step of the expert optimal may not be always exist......\n\nActually, this is the very important point for this proposed method; we were, of course, thinking this point.\nWe thought, if the going this way (terrible states then highly rewarding) is the best way for the RL agent, the expert agent will also take this actions during performing.\nThus, the proposed method also can find such way.\nHowever, the samples are not learned by the expert agent, the proposed method cannot find.\nWe agree the proposed method is the value function based method.\n\n> - What if in two different trajectories, the expert chooses opposite actions for the same state appearing in both trajectories?....\n\nThis is also important; we considered this point.\nIf the numbers of multiple trajectories (these future rewards are same by an expert agent) are same, this will have occurred and other IRL techniques also have same problems.\nBecause deciding the one way from these multiple trajectories are not possible. \nHowever, normally (or experimentally), the expert agents that trained RL will take the one choice from multiple trajectories.\nHence, these points will not be issues of the proposed method." ]
[ 4, 5, 3, -1, -1, -1 ]
[ 4, 3, 4, -1, -1, -1 ]
[ "iclr_2018_HktXuGb0-", "iclr_2018_HktXuGb0-", "iclr_2018_HktXuGb0-", "S1ucldOlf", "S1qg275gM", "SkwCEXalM" ]
iclr_2018_BJgVaG-Ab
AUTOMATA GUIDED HIERARCHICAL REINFORCEMENT LEARNING FOR ZERO-SHOT SKILL COMPOSITION
An obstacle that prevents the wide adoption of (deep) reinforcement learning (RL) in control systems is its need for a large number of interactions with the environment in order to master a skill. The learned skill usually generalizes poorly across domains and re-training is often necessary when presented with a new task. We present a framework that combines techniques in \textit{formal methods} with \textit{hierarchical reinforcement learning} (HRL). The set of techniques we provide allows for the convenient specification of tasks with logical expressions, learns hierarchical policies (meta-controller and low-level controllers) with well-defined intrinsic rewards using any RL methods and is able to construct new skills from existing ones without additional learning. We evaluate the proposed methods in a simple grid world simulation as well as simulation on a Baxter robot.
rejected-papers
The authors make an argument for constructing an MDP from the formal structures of temporal logic and associated finite state automata and then applying RL to learn a policy for the MDP. This does not provide a solution for low-level skill composition, because there are discontinuities between states, but does provide a means for high-level skill composition. The reviewers agreed that the paper suffered from sloppy writing and unclear methods. They had concerns about correctness, and were not impressed by the novelty (combining TL and RL has been done previously). These concerns tip this paper to rejection.
val
[ "ryRVwuOeM", "SJnC0yKez", "Syp3P75gz", "BJlOg2YXz", "r1i7y2FQG", "Sk3PqjK7f" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "The paper argues for structured task representations (in TLTL) and shows how these representations can be used to reuse learned subtasks to decrease learning time.\n\nOverall, the paper is sloppily put together, so it's a little difficult to assess the completeness of the ideas. The problem being solved is not literally the problem of decreasing the amount of data needed to learn tasks, but a reformulation of the problem that makes it unnecessary to relearn subtasks. That's a good idea, but problem reformulation is always hard to justify without returning to a higher level of abstraction to justify that there's a deeper problem that remains unchanged. The paper doesn't do a great job of making that connection.\n\nThe idea of using task decomposition to create intrinsic rewards seems really interesting, but does not appear to be explored in any depth. Are there theorems to be had? Is there a connection to subtasks rewards in earlier HRL papers?\n\nThe lack of completeness (definitions of tasks and robustness) also makes the paper less impactful than it could be.\n\nDetailed comments:\n\n\"learn hierarchical policies\" -> \"learns hierarchical policies\"?\n\n\"n games Mnih et al. (2015)Silver et al. (2016),\": The citations are a mess. Please proof read.\n\n\"and is hardly reusable\" -> \"and are hardly reusable\".\n\n\"Skill composition is the idea of constructing new skills with existing skills (\" -> \"Skill composition is the idea of constructing \nnew skills out of existing skills (\".\n\n\"to synthesis\" -> \"to synthesize\".\n\n\"set of skills are\" -> \"set of skills is\".\n\n\"automatons\" -> \"automata\".\n\n\"with low-level controllers can\" -> \"with low-level controllers that can\".\n\n\"the options policy π o is followed until β(s) > threshold\": I don't think that's how options were originally defined... beta is generally defined as a termination probability.\n\n\"The translation from TLTL formula FSA to\" -> \"The translation from TLTL formula to FSA\"?\n\n\"four automaton states Qφ = {q0, qf , trap}\": Is it three or four?\n\n\"learn a policy that satisfy\" -> \"learn a policy that satisfies\".\n\n\"HRL, We introduce the FSA augmented MDP\" -> \"HRL, we introduce the FSA augmented MDP.\".\n\n\" multiple options policy separately\" -> \" multiple options policies separately\"?\n\n\"Given flat policies πφ1 and πφ2 that satisfies \" -> \"Given flat policies πφ1 and πφ2 that satisfy \".\n\n\"s illustrated in Figure 3 .\" -> \"s illustrated in Figure 2 .\"?\n\n\", we cam simply\" -> \", we can simply\".\n\n\"Figure 4 <newline> .\" -> \"Figure 4.\".\n\n\", disagreement emerge\" -> \", disagreements emerge\"?\n\nThe paper needs to include SOME definition of robustness, even if it just informal. As it stands, it's not even clear if larger \nvalues are better or worse. (It would seem that *more* robustness is better than less, but the text says that lower values are \nchosen.)\n\n\"with 2 hidden layers each of 64 relu\": Missing word? 
Or maybe a comma?\n\n\"to aligns with\" -> \"to align with\".\n\n\" a set of quadratic distance function\" -> \" a set of quadratic distance functions\".\n\n\"satisfies task the specification)\" -> \"satisfies the task specification)\".\n\nFigure 4: Tasks 6 and 7 should be defined in the text someplace.\n\n\"current frame work i\" -> \"current framework i\".\n\n\" and choose to follow\" -> \" and chooses to follow\".\n\n\" this makes\" -> \" making\".\n\n\"each subpolicies\" -> \"each subpolicy\".\n", "I very much appreciate the objectives of this paper: learning compositional structures is critical for scaling and transfer. \n\nThe first part of the paper offers a strategy for constructing a product MDP out of an original MDP and the automaton associated with an LTL formula, and reminds us that we can learn within that restricted MDP. Some previous work is cited, but I would point the authors to much older work of Parr and Russell on HAMs (hierarchies of abstract machines) and later work by Andre and Russell, which did something very similar (though, indeed, not in hybrid domains). The idea of extracting policies corresponding to individual automaton states and making them into options seems novel, but it would be important to argue that those options are likely to be useful again under some task distribution. \n\nThe second part offers an exciting result: If we learn policy pi_1 to satisfy objective phi_1 and policy pi_2 to satisfy objective phi_2, then it will be possible to switch between pi_1 and pi_2 in a way that satisfies phi_1 ^ phi_2. This just doesn't make sense to me. What if phi_1 is o ((A v B) Until C) and phi_2 is o ((not A v B) Until C). Let's assume that o(B Until C) is satisfiable, so the conjunction is satisfiable. However, we may find policy pi_1 that makes A true and B false (in general, there is no single optimal policy) and find pi_2 that makes A false and B false, and it will not be possible to satisfy the phi_1 and phi_2 by switching between the policies. But, perhaps I am misunderstanding something.\n\nSome other smaller points:\n- \"zero-shot skill composition\" sounds a lot like what used to be called \"planning\" or \"reasoning\"\n- The function rho is originally defined on whole trajectories but in eq 7 it is only on a single s': I'm not sure exactly what that means.\n- Section 4: How is \"as soon as possible\" encoded in this objective?\n- How does the fixed horizon interact with conjoining goals?\n- There are many small errors in syntax; it would be best to have this paper carefully proofread.", "This paper proposes to join temporal logic with hierarchical reinforcement learning to simplify skill composition. The combination of temporal logic formulas with reinforcement learning was developed previously in the literature, and the main contribution of this paper is for fast skill composition. The system uses logic formulas in truncated linear temporal logic (TLTL), which lacks an Always operator and where the LTL formula (A until B) also means that B must eventually hold true. The temporal truncation also requires the use of a specialized MDP formulation with an explicit and fixed time horizon T. The exact relationship between the logical formulas and the stochastic trajectories of the MDP is not described in detail here, but relies on a robustness metric, rho. 
The main contributions of the paper are to provide a method that converts a TLTL formula that specifies a task into a reward function for a new augmented MDP (that can be used by a conventional RL algorithm to yield a policy), and a method for quickly combining two such formulas (and their policies) into a new policy. The proposed method is evaluated on a small Markov chain and a simulated Baxter robot.\n\nThe main problem with this paper is that the connections between the TLTL formulas and the conventional RL objectives are not made sufficiently clear. The robustness term rho is essential, but it is not defined. I was also confused by the notation $D_\\phi^q$, which was described but not defined. The method for quickly combining known skills (the zero-shot skill composition in the title) is switching between the two policies based on rho. The fact that there may be many policies which satisfy a particular reward function (or TLTL formula) is ignored. This means that skill composition that is proposed in this paper might be quite far from the best policy that could be learned directly from a single conjunctive TLTL formula. It is unclear how this approach manages tradeoffs between objectives that are specified as a conjunction of TLTL goals. is it better to have a small probability of fulfilling all goals, or to prefer a high probability of fulfilling half the goals? In short the learning objectives of the proposed composition algorithm are unclear after translation from TLTL formulas to rewards.\n", "Thank you for your detailed comments. We have incorporated all of them in our updated paper plus additional proofreading. The following are our attempts to answer your questions.\n\n1. “. The problem being solved is not literally the problem of decreasing the amount of data needed to learn tasks, but a reformulation of the problem that makes it unnecessary to relearn subtasks. That's a good idea, but problem reformulation is always hard to justify without returning to a higher level of abstraction to justify that there's a deeper problem that remains unchanged. ”\n\nWe will try to address this concern but we are not certain that we’ve fully understood it. We provide solutions to two problems in this paper, the first is to use the FSA augmented MDP to impose hierarchical structure/constraint to the original MDP and by doing so enhance the sample efficiency and interpretability of policy learning using existing RL methods. The second is to take advantage of the structure of product automatons to compose new policy from existing policies. We didn’t intentionally try to reformulate the problem but rather to incorporate the extra knowledge and structure provided by temporal logic and FSA into the original problem and proposed methods to ensure that this incorporation is helpful to the learning process.\n\n2. “The idea of using task decomposition to create intrinsic rewards seems really interesting, but does not appear to be explored in any depth. Are there theorems to be had? Is there a connection to subtasks rewards in earlier HRL papers?”\n\nThe reasoning behind the construction of the intrinsic reward ($D^1_\\phi$ in Definition 3) is to encourage the system to exit the current automaton state and eventually reach the final acceptance state which satisfies the TLTL specification or result in the trap state which given a large terminal penalty restarts the episode. 
This is a property of the FSA and therefore we didn’t go into much depth (we added a reference in section 2.2 (Vasile 2017) that show the application of similar idea). \n\nAs far as we know, existing HRL methods require some kind of human effort to engineer the hierarchical structure into the learner. This ranges from explicitly defining the options (initial set, policy and terminal condition) in the original options paper to designing the finite state machine in HAM (hierarchies of abstract machines). More recent efforts have relaxed these requirements of defining what each option is and how they interact with each other and depend on the learning algorithm to figure out the specifics, however the user still has to define either the intrinsic motivation (such as the h-DQN by Kulkarni et al) or the number of options to discover (such as the option-critic framework by Bacon et al). Our work free the user from designing any of the above and utilize the FSA to provide hierarchical structure and motivation. Due to space constraints, a detailed literature review on HRL is not included.\n\n3. “The lack of completeness (definitions of tasks and robustness) also makes the paper less impactful than it could be.“\n\nWe originally included a reference to the paper containing the definition of robustness, but given that this has caused enough confusion, we have provided the full definition of the boolean and quantitative semantics along with small examples in Appendix E. We are not sure what exactly “definitions of tasks” stands for but we used TLTL formula as task specifications and examples of them are provided in Appendix A.\n\n4. “The paper needs to include SOME definition of robustness, even if it just informal. As it stands, it's not even clear if larger values are better or worse. (It would seem that *more* robustness is better than less, but the text says that lower values are chosen.)”\n\nThe higher the robustness the better satisfaction of the specification. The reason Equation (15) chooses the policy with lower robustness is because we are trying to maximize the minimum of two robustnesses (Equation (13)) with means we have to maximize the lower of the two, assuming that following each policy maximizes its own robustness at the step level (which is a limitation of the current method and discussed in the conclusion, we are working on improving this).", "Thank you for your comments. The following are our attempts to answer your questions.\n\n1. “The idea of extracting policies corresponding to individual automaton states and making them into options seems novel, but it would be important to argue that those options are likely to be useful again under some task distribution”\n\nEach option that corresponds to an automaton state q satisfies the predicate defined by $D^q_\\phi$ (in Definition 3). Since the FSA for an LTL formula is constructed by the conjunction, disjunction, and negation of various predicates, the already learned options can be used as is or to construct policies that satisfy new LTL specifications given the state and action distributions remain the same. \n\n2. “Section 4: How is \"as soon as possible\" encoded in this objective?”\n\nThis is our neglect in proofreading, but a discount factor in addition to the terminal reward are used to ensure “as soon as possible”. We’ve made this correction in our updated version\n\n\n3. “ What if phi_1 is o ((A v B) Until C) and phi_2 is o ((not A v B) Until C). 
Let's assume that o(B Until C) is satisfiable, so the conjunction is satisfiable. However, we may find policy pi_1 that makes A true and B false (in general, there is no single optimal policy) and find pi_2 that makes A false and B false, and it will not be possible to satisfy the phi_1 and phi_2 by switching between the policies. ”\n\n\nThis is a good example and we’ll try our best to clarify. Given the learning objective in Equation (7), if the initial condition doesn’t violate (A v B), then $pi_1$ will head for C while ensuring that (A v B) is satisfied along its path to making C true. A transition from A=true to B=true is possible if the shortest path to C requires so (shortest path is the result of the discount factor and terminal reward). If the initial position violates (A v B), then the episode restarts (results in the trap state). The same goes for $pi_2$. So for both $phi_1$ and $phi_2$, the goal is to quickly get to C while satisfying (A v B) and (not A v B) respectively. Under Definition 3, the optimal policies for $pi_1$ and $pi_2$ are unique. The necessary condition for $phi_1 ^ phi_2$ to be satisfiable is that the intersection of B and C is nonempty and the initial condition satisfies (A v B) ^ (not A v B) = B v (A ^ not A) = B. Having met these conditions, choosing either $pi_1$ or $pi_2$ will result in satisfaction of $phi_1 ^ phi_2$ \n\n4. \"zero-shot skill composition\" sounds a lot like what used to be called \"planning\" or \"reasoning\"\n\nWe understand “planning” and “reasoning” as obtaining the optimal policy under known system transition function. \n\n5. “The function rho is originally defined on whole trajectories but in eq 7 it is only on a single s': I'm not sure exactly what that means.”\n\nThank you for raising this confusion, we’ve added a footnote to Equation (7) as well as the full definition for robustness in Appendix E\n\n6. “How does the fixed horizon interact with conjoining goals?”\n\nWe’re not sure what “conjoining goals” means.\n\n7. “There are many small errors in syntax; it would be best to have this paper carefully proofread.”\n\nWe’ve put much effort in proofreading and have uploaded the newer version", "Thank you for your comments, the following are our attempts to address your questions and concerns.\n\n1. “The combination of temporal logic formulas with reinforcement learning was developed previously in the literature, and the main contribution of this paper is for fast skill composition” and “The main contributions of the paper are to provide a method that converts a TLTL formula that specifies a task into a reward function for a new augmented MDP ”\n\nCompared to similar ideas in previous literature, we extended the combination of temporal logic and RL to hybrid domains and proposed the FSA augmented MDP as a bridge between the learned flat policy and the hierarchical structure of the task. By doing so, options can be easily learned and extracted from the flat policy without the need manually design the specifics of the hierarchy. The FSA that results from the TLTL formula does not only provide the extrinsic and intrinsic rewards but also the temporal constraints of how the task should proceed which is incorporated into the system dynamics in Equation (6).\n\n2. 
“The exact relationship between the logical formulas and the stochastic trajectories of the MDP is not described in detail here, but relies on a robustness metric, rho” and “ The robustness term rho is essential, but it is not defined”\n\nThank you for raising this concern, we’ve originally made a reference to the paper containing the definition of robustness in Section 2.1, but now we have also added the full definition in Appendix E.\n\n3. “The main problem with this paper is that the connections between the TLTL formulas and the conventional RL objectives are not made sufficiently clear.”\n\nWe try to make this connection in Definition 1 and Problem 1, specifically Equation (1) and Equation (5). The goal of conventional RL is to maximize the expected return, the goal of RL with TLTL specification is to maximize the expected satisfaction of the TLTL formula.\n\n4. “I was also confused by the notation $D_\\phi^q$, which was described but not defined”\n\n$D_\\phi^q$ is defined in Definition 3 in the text after Equation (7). An example is provided after Equation (8).\n\n5. “The fact that there may be many policies which satisfy a particular reward function (or TLTL formula) is ignored. This means that skill composition that is proposed in this paper might be quite far from the best policy that could be learned directly from a single conjunctive TLTL formula.”\n\nThe optimal policy for the FSA augmented MDP is unique under the effect of a discount factor and the terminal reward (we carelessly neglected the discount factor during proofreading which is now added). The optimal policy should guide the system out of the current automaton state as fast as possible and towards the final accepting state. Therefore, given enough terminal motivation, the desired behavior is to find the shortest path to satisfying the specification at any given state. And the composed policy will also achieve this following the characteristics of the product automaton and the derivations in Section 5. However, if we assume no discount factor (discount=1), we end up with a set of (possibly infinite) satisfying policies. The composed policy will thus also be one of many satisfying policies that satisfies the conjunction of two TLTL specs. Depending on how hyperparameters are set, the composed policy is likely different from that learned directly from a single conjunctive TLTL formula, but their expected return will be the same (given the terminal rewards are set up to encourage the same behavior). Optimality aside, the goal of finding a satisfying policy given by Problem 1 and Problem 2 will be met. \n\n6. “It is unclear how this approach manages tradeoffs between objectives that are specified as a conjunction of TLTL goals. is it better to have a small probability of fulfilling all goals, or to prefer a high probability of fulfilling half the goals? In short the learning objectives of the proposed composition algorithm are unclear after translation from TLTL formulas to rewards.”\n\nFor the skill composition part, the objective is to fulfill all goals and hence the conjunction. If we only want to fulfill a subset of all the goals, then a disjunction would be used and the policy switching scheme would be slightly different but easily adaptable (part of our on-going work). There is not the notion of probability and we hope to show that by using our method it can be guaranteed that the conjunctive goal is fulfilled given that each sub-policy can fulfill their own goal.\n" ]
[ 5, 3, 4, -1, -1, -1 ]
[ 4, 4, 3, -1, -1, -1 ]
[ "iclr_2018_BJgVaG-Ab", "iclr_2018_BJgVaG-Ab", "iclr_2018_BJgVaG-Ab", "ryRVwuOeM", "SJnC0yKez", "Syp3P75gz" ]
iclr_2018_rJFOptp6Z
Model Distillation with Knowledge Transfer from Face Classification to Alignment and Verification
Knowledge distillation is a potential solution for model compression. The idea is to make a small student network imitate the target of a large teacher network, then the student network can be competitive to the teacher one. Most previous studies focus on model distillation in the classification task, where they propose different architectures and initializations for the student network. However, only the classification task is not enough, and other related tasks such as regression and retrieval are barely considered. To solve the problem, in this paper, we take face recognition as a breaking point and propose model distillation with knowledge transfer from face classification to alignment and verification. By selecting appropriate initializations and targets in the knowledge transfer, the distillation can be easier in non-classification tasks. Experiments on the CelebA and CASIA-WebFace datasets demonstrate that the student network can be competitive to the teacher one in alignment and verification, and even surpasses the teacher network under specific compression rates. In addition, to achieve stronger knowledge transfer, we also use a common initialization trick to improve the distillation performance of classification. Evaluations on the CASIA-Webface and large-scale MS-Celeb-1M datasets show the effectiveness of this simple trick.
rejected-papers
The authors propose a distillation-based approach that is applied to transfer knowledge from a classification network to non-classification tasks (face alignment and verification). The writing is very imprecise - for instance repeatedly referring to a 'simple trick' rather than actually defining the procedure - and the method is described in very task-specific ways that make it hard to understand how or whether it would generalize to other problems.
train
[ "B1736j_gz", "SkX-5ijlG", "rJR-8EAgG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper proposes knowledge distillation on two very specific non-classification tasks. I find the scope of the paper is quite limited and the approach seems hard to generalize to other tasks. There is also very limited technical contribution. I think the paper might be a better fit in conferences on faces such as FG.\n\nPros:\n1. The application of knowledge distillation in face alignment is interesting. \n\nCons:\n1. The writing of the paper can be significantly improved. The technical description is unclear.\n2. The method has two parameters \\alpha and \\beta, and Section 4.2.3. mentions the key is to measure the relevance of tasks. It seems to me defining the relevance between tasks is quite empirical and often confusing. How are they actually selected in the experiments? Sometimes alpha=0, beta=0 works the best which means the added terms are useless?\n3. The paper works on a very limited scope of face alignment. How does the proposed method generalize to other tasks?", "This paper proposed to transfer the classifier from the model for face classification to the task of alignment and verification. The problem setting is interesting and valuable, however, the contribution is not clearly demonstrated. \n\nSpecifically, it proposed to utilize the teacher model from classification to other tasks, and proposed a unified objective function to model the transferability as shown in Equation (5). The two terms in (5), (7) and (9) are used to transfer the knowledge from the teacher model. It maybe possible to claim that the different terms may play different roles for different tasks. However, there should be some general guidelines for choosing these different terms for regularization, rather than just make the claim purely based on the final results. In table 4 and table 5, the results seem to be not so consistent for using the distillation loss. The author mentioned that it is due to the weak teacher model. However, the teacher model just differs in performance with around 3% in accuracy. How could we define the “good” or “bad” of a teacher model for model distillation/transfer?\n\nBesides, it seems that the improvement comes largely from the trick of initialization as mentioned in Section 3.2. Hence, it is still not clear which parts contribute to the final performance improvements. It could be better if the authors can report the results from each of the components together. \n\n The authors just try the parameter (\\alpha, \\beta) to be (0,0), (1,0), (0,1) and (1,1). I think the range for both values could be any positive real value, and how about the performance for other sets of combinations, like (0.5, 0.5)?", "Summary:\nThe manuscript presents experiments on distilling knowledge from a face classification model to student models for face alignment and verification. By selecting a good initialization strategy and guidelines for selecting appropriate targets for non-classification tasks, the authors achieve improved performance, compared to networks trained from scratch or with different initialization strategies.\n\nReview:\nThe paper seems to be written in a rush. \nI am not sure about the degree of novelty, as pretraining with domain-related data instead of general-purpose ImageNet data has been done before, Liu et al. (2014), for example pretrain a CNN on face classification to be used for emotion recognition. 
Admitted, knowledge transfer from classification to regression and retrieval tasks is not very common yet, except via pretraining on ImageNet, followed by fine-tuning on the target task.\nMy main concern is with the presentation of the paper. It is very hard to follow! Two reasons are that it has too many grammatical mistakes and that very often a “simple trick” or a “common trick” is mentioned instead of using a descriptive name for the method used.\n\nHere are a few points that might help improving the work:\n1) Many kind of empty phrases are repeated all over the paper, e.g. the reader is teased with mention of a “simple trick” or a “common trick”. I don’t think the phrase “breaking point”, that is repeated a couple of times, is correctly used (see https://www.merriam-webster.com/dictionary/breaking%20point for a defininition).\n2) Section 4.1 does not explain the initialization but just describes motivation and notation.\n3) Clarity of the approach: Using the case of alignment as an example, do you first pretrain both the teacher and student on classification, then finetune the teacher on alignment before the distillation step? \n4) Table 1 mentions Fitnets, but cites Ba & Caruana (2014) instead of Romero et al. (2015)\n5) The “experimental trick” you mention for setting alpha and beta, seems to be just validation, comparing different settings and picking the one yielding the highest improvements. On what partition of the data are you doing this hyperparameter selection?\n6) The details of the architectures are missing, e.g. exactly what changes do you make to the architecture, when you change the task from classification to alignment or verification? What exactly is the “hidden layer” in that architecture?\n7) Minor: Usually there is a space before parentheses (many citations don’t have one)\n\nIn its current form, I cannot recommend the manuscript for acceptance. I get the impression that the experimental work might be of decent quality, but the manuscript fails to convey important details of the method, of the experimental setup and in the interpretation of the results. The overall quality of the write-up has to be significantly improved.\n\nReferences:\nLiu, Mengyi, Ruiping Wang, Shaoxin Li, Shiguang Shan, Zhiwu Huang, and Xilin Chen. \"Combining multiple kernel methods on riemannian manifold for emotion recognition in the wild.\" In Proceedings of the 16th International Conference on Multimodal Interaction, pp. 494-501. ACM, 2014." ]
[ 3, 5, 3 ]
[ 4, 5, 4 ]
[ "iclr_2018_rJFOptp6Z", "iclr_2018_rJFOptp6Z", "iclr_2018_rJFOptp6Z" ]
iclr_2018_Sktm4zWRb
Soft Value Iteration Networks for Planetary Rover Path Planning
Value iteration networks are an approximation of the value iteration (VI) algorithm implemented with convolutional neural networks to make VI fully differentiable. In this work, we study these networks in the context of robot motion planning, with a focus on applications to planetary rovers. The key challenge in learning-based motion planning is learning a transformation from terrain observations to a suitable navigation reward function. In order to deal with complex terrain observations and policy learning, we propose a value iteration recurrence, referred to as the soft value iteration network (SVIN). SVIN is designed to produce more effective training gradients through the value iteration network. It relies on a soft policy model, where the policy is represented with a probability distribution over all possible actions, rather than a deterministic policy that returns only the best action. We demonstrate the effectiveness of the proposed method in robot motion planning scenarios. In particular, we study the application of SVIN to very challenging problems in planetary rover navigation and present early training results on data gathered by the Curiosity rover that is currently operating on Mars.
rejected-papers
The authors have proposed a 'soft' version of VIN that is differentiable, where the cost function is trained by behavior cloning / imitation learning from expert/computer trajectories. The method is applied to a toy problem and to real historical data from Mars rovers. The paper neither acknowledges nor compares against other methods, and the contribution is unclear, as is the justification for some aspects of the method. Additionally, it is difficult to interpret the relevance or significance of the results (45% correct).
train
[ "Bknbc_kxG", "Sksl-n_xf", "HJTvyeceM", "BkJ2XCpQz", "HyB1eAp7M", "rk130paQM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "Summary:\n\nThe Value-Iteration-Network (VIN) architecture is modified to have a softmax loss function at the end. This is termed SVIN. It is then applied in a behavior cloning manner to the task of rover path planning from start to goal from overhead imagery.\n\nSimulation results on binary obstacle maps and using real-world Mars overhead orbiter maps are shown. On the simulation maps SVIN is shown to achieve 15-20% better lower training error than VIN.\n\nOne the Mars images it trains up to 45% training accuracy. (What was testing accuracy?)\n\n\nComments:\n\n- Section 1.1: \"Autonomous driving techniques also exist, though they lack the ability to do high level planning to choose paths that are advantageous for longer term navigation.\" --This is not true. See any of the numerous good systems described in literature. See the special editions of the Journal of Field Robotics on DARPA Urban Challenge and Desert Challenge or any of the special editions for the Learning Applied to Ground Robots (LAGR) program for excellent literature describing real-world autonomous ground vehicle systems. And specifically for the case of predicting good long-term trajectories from overhead imagery see: Sofman, B., Lin, E., Bagnell, J. A., Cole, J., Vandapel, N., & Stentz, A. (2006). Improving robot navigation through self‐supervised online learning. Journal of Field Robotics. (Papers related to this have been cited in this paper already).\n\n- Section 4.1: \"During training the output from the VI module is fed to an action selection function to compare those results against actions chosen in the training data.\": What is the action selection function? Is it a local planner (e.g. receding-horizon model-predictive control)? Is it a global planner with access to full map to the goal (e.g. A* run all the way to the goal location assuming that during training the entire map is available)? Same question for Figure 2 where the 'expert action' block doesn't specify who is the expert here (computational or human).\n\n- Section 2: \"Imitation learning for navigation has been studied by other groups as well (Silver et al. 2010)\": That particular paper is about using inverse optimal control (aka inverse reinforcement learning) and not imitation learning for first learning a good terrain cost function and then using it in a receding-horizon fashion. For imitation learning in navigation see \"Learning Monocular Reactive UAV Control in Cluttered Natural Environments\" by Ross et al. and relevant literature cited therein.\n\n- My main concerns with the experiments is that they are not answering two main questions: 1. What is SVIN/VIN bringing to the table as a function approximator as opposed to using a more traditional but similar capacity CNN? 2. Why are the authors choosing to do essentially behavior cloning as opposed to imitation learning? It is well established (both theoretically and empirically) that imitation learning has mistake bounds which are linear in the time horizon while behavior cloning is quadratic. See Ross et al., \"A Reduction of Imitation Learning and Structured Prediction to No-Regret Online Learning.\"\n\n- Figure 6: Please mark the goal points. It is not obvious where it is from the field arrows.\n\n- Figure 8: Are white regions high/low cost? It is not obvious from the pictures what is being avoided by the paths.\n\n- What does 45% accuracy actually mean? Are the predicted paths still usable? 
No figures showing some qualitative good and bad examples are shown so hard to tell.\n\n- On the rover overhead imagery if a simple A*/Dijkstra search algorithm was run from start to goal using the DEM as a heuristic cost map, how well will it do compared to SVIN?\n", "To my understanding, the focus of the paper is to learn a reward function based on expert trajectories. To do so, they use a Value Iteration Module to make the planning step differentiable.\nTo improve training, they propose to replace the max-operator in value iteration by a softmax operator.\n\nI believe the paper should be rejected. I will outline my major concerns below.\n\n1. I have a difficult time identifying the (large enough) novel contribution. The algorithm uses automatic differentiation through an approximate value iteration (approximate because it is only performed for k iterations) to learn a reward function. To me this seems like a very straightforward case of inverse reinforcement learning (with which I admittedly am not too familiar). I think at least inverse RL should be mentioned and the relationship to it should be discussed in the paper (which it is currently not). \n2. The experimental results are not convincing. In my opinion, the first experiment is too simple to showcase the algorithm. In particular, note that the number of total parameters learned are in the order of only 100 or less for the 2 layer reward network, depending on how many 'other layers' are used as input (and everything else I understand to be fixed). Furthermore, one input layer corresponds to a one hot encoding of the goal, which is also a correct output of the reward function. Consequently, the reward function must only learn the identity function (or a multiple thereof). This is further simplified by only using 1x1 convolutions. The second experiment doesn't have a baseline to compare the results against so it is hard to judge how well the algorithm performs.\n3. I am not sure that there is enough similarity to Value Iteration Networks that it should be described as an extension thereof. As far as I understand it, the original Value Iteration Network consists of the Value-Iteration Module as well as several other modules. In the reviewed paper, it seems that only the Value Iteration Module is used. Furthermore, in the current paper, the transition kernels are not learned. \n\n\nSmaller notes:\n- One could include a background section briefly explaining Value Iteration Networks. Or leave out VINs completely as I'm not sure there is too much similarity. But I might be wrong.\n- 1. Paragraph in \"3. Preliminaries\", last sentence: Albeit obvious, it should be included how $a_t$ and $s_t$ are drawn for $t>0$ and $t>1$ respectively\n- In 4.2: Traninable => trainable\n- In 4.3: As the algorithm is not actually tested on any mars rover, I wouldn't include that part in the \"Algorithm\" section. Maybe in the conclusion/outlook instead?\n- In 4.4, second paragraph: stacked = added? I guess both would work but what would be the advantage of stacking, especially when the kernel is known and fixed (and I assume simply performs a discounted addition?)\n- Please use a larger font in your plots\n- Figure 6: While I like the idea of having a qualitative analysis of the results, it would be nice if red and green arrows would be easier to tell apart. 
The green ones are hard to find at the moment.\n", "Summary:\nThe submission proposes a simple modification to the Value Iteration Networks (VIN) method of Tamar et al., basically consisting of assuming a stochastic policy and replacing the max-over-actions in value iteration with an expectation that weights actions proportional to their exponentiated Q-values. Since this change removes the main nondifferentiability of VINs, it is hypothesized that the resulting method will be easier to train than VINs, and experiments seem to support this hypothesis.\n\nPros:\n+ The proposed modification to VIN is simple, well-motivated, and addresses the nondifferentiability of VIN\n+ Experiments on synthetic data demonstrate a significant improvement over the standard VIN method\n\nCons:\n+ Some important references are missing (e.g., MaxEnt IOC with deep-learned features)\n+ Although intuitive, more detailed justification could be provided for replacing the max-over-actions with an exponentially-weighted average\n+ No baselines are provided for the experiments with real data\n+ All the experimental scenarios are fairly simple (2D grid-worlds with discrete actions, 1-channel input features)\n\nThe proposed method is simple, well-motivated, and addresses a real concern in VINs, which is their nondifferentiability. Although many of the nonlinearities used in CNNs for computer vision applications are nondifferentiable, the theoretical grounds for using these in conjunction with gradient-based optimization is obviously questionable. Despite this, they are widely used for such applications because of strong empirical results showing that such nonlinearities are beneficial in image-processing applications. However, it would be incorrect to assume that because such nonlinearities work for image processing, they are also beneficial in the context of unrolling value iteration.\n\nReplacing the max-over-actions with an exponentially-weighted average is an intuitively well-motivated alternative because, as the authors note, it incorporates the values of suboptimal actions during the training procedure. We would therefore expect better or faster training, as the values of these suboptimal actions can be updated more frequently. The (admittedly limited) experiments bear out this hypothesis.\n\nPerhaps the most significant downside of this work is that it fails to acknowledge prior work in the RL and IOC literature that result in similar “smoothed” or “softmax\" Bellman updates: in particular, MaxEnt IOC [A] and linearly-solvable MDPs [B] both fall in this category. Both of those papers clearly derive approximate Bellman equations from modified optimal control principles; although I believe this is also possible for the proposed update (Eq. 11), along the lines of the sentence after Eq. 11, this should be made more explicit/rigorous, and the result compared to [A,B].\n\nAnother important missing reference is [C], which learned cost maps with deep neural networks in a MaxEnt IOC framework. As far as I can tell, the application is identical to that of the present paper, and [C] may have some advantages: for instance, [C] features a principled, fully-differentiable training objective while also avoiding having to backprop through the inference procedure, as in VIN. Again, this raises the question of how the proposed method compares to MaxEnt IOC, both theoretically and experimentally.\n\nThe experiments are also a bit lacking in a few ways. First, a baseline is only provided for the experiments with synthetic data. 
Although that experiment shows a promising, significant advantage over VIN, the lack of baselines for the experiment with real data is disappointing. Furthermore, the setting for the experiments is fairly simple, consisting of a grid-world with 1-channel input features. The setting is simple enough that even shallow IOC methods (e.g., [D]) would probably perform well; however, the deep IOC methods of [C] is also applicable and should probably also be evaluated as a baseline.\n\nIn summary, although the method proposes an intuitively reasonable modification to VIN that seems to outperform it in limited experiments, the submission fails to acknowledge important related work (especially the MaxEnt IOC methods of [A,D]) that may have significant theoretical and practical advantages. Unfortunately, I believe the original VIN paper also failed to articulate the precise advantages of VIN over this prior work—which is not to say there are none, but it is clear that VINs applied to problems as simple as the one considered here have real competitors in prior work. Clarifying this connection, both theoretically and experimentally, would make this work much stronger and would be a valuable contribution to the literature.\n\n[A] Ziebart, Brian D. Modeling purposeful adaptive behavior with the principle of maximum causal entropy. Carnegie Mellon University, 2010.\n[B] Todorov, Emanuel. \"Linearly-solvable Markov decision problems.\" Advances in neural information processing systems. 2007.\n[C] Wulfmeier et al. Watch This: Scalable Cost-Function Learning for Path Planning in Urban Environments. IROS 2016\n[D] Ratliff, Nathan D., David Silver, and J. Andrew Bagnell. \"Learning to search: Functional gradient techniques for imitation learning.\" Autonomous Robots 27.1 (2009): 25-53.\n", "We thank the reviewer for their comments and suggestions.\n\nRe: testing accuracy: Testing accuracy in our experiments has been essentially identical to training accuracy, which we attribute to what amounts to a very strong prior from value iteration module. We will include this comparison in the next revision.\n\nRe: Section 1.1: The reviewer is correct that there is substantial work in autonomous driving of cars and other terrestrial vehicles, however, this sentence is specifically referring to currently operational (certified for use on Mars) techniques for planetary rovers.\n\nRe: Section 4.1: The action selection block represent the process of turning Q values into an action. The exact behavior here is described in section 4.2. The expert action here comes from the trajectory demonstrations (training data), which, depending on the data source, may have been a human or computer.\n\nRegarding the reviewers main concerns:\n1: A traditional CNN would not incorporate the planning structure of the MDP formulation of our problem. The VIN/SVIN formulation produces a ‘value map’ as an intermediate result, which is the key data product for our application.\n2: We dispute the reviewer’s assertion that our technique would not be described as imitation learning.", "We thank the reviewer for their comments and suggestions.\n\nResponding the reviewer's major concerns:\n1. The reviewer is correct that inverse reinforcement learning and imitation learning are essentially interchangeable terms (though some may draw some minor distinctions in they way they are applied), though we disagree that the application is straightforward as most IL and IRL architectures do not include an explicit planning stage.\n\n2. 
The gridworld environment is indeed quite simple, however, useful reward functions are deceptively complex. If the relative costs and rewards of different states are not properly balanced they can produce counter-intuitive behavior, particularly when it takes many steps to reach a reward.\n\n3. The reviewer is correct that Tamar et al's VIN paper included a number of other elements around the VI module. We see the use of the VI module as one of the important innovations of that paper, and in our work we suggest modifications to the module as well as different surrounding components to serve a different application.", "We thank the reviewer for their thorough comments, suggestions, and comparisons.\n\nThe reviewer’s suggestions of additional baselines, particularly for the real data experiments, are well received and something we are currently working on. In particular, the reviewer points out similarities to MaxEnt IOC techniques, and we believe these comparisons are quite interesting and worthy of further investigation, which we hope to incorporate in a future version of this paper.\n" ]
[ 3, 3, 4, -1, -1, -1 ]
[ 5, 3, 4, -1, -1, -1 ]
[ "iclr_2018_Sktm4zWRb", "iclr_2018_Sktm4zWRb", "iclr_2018_Sktm4zWRb", "Bknbc_kxG", "Sksl-n_xf", "HJTvyeceM" ]
iclr_2018_S1xDcSR6W
Hybed: Hyperbolic Neural Graph Embedding
Neural embeddings have been used with great success in Natural Language Processing (NLP) where they provide compact representations that encapsulate word similarity and attain state-of-the-art performance in a range of linguistic tasks. The success of neural embeddings has prompted significant amounts of research into applications in domains other than language. One such domain is graph-structured data, where embeddings of vertices can be learned that encapsulate vertex similarity and improve performance on tasks including edge prediction and vertex labelling. For both NLP and graph-based tasks, embeddings in high-dimensional Euclidean spaces have been learned. However, recent work has shown that the appropriate isometric space for embedding complex networks is not the flat Euclidean space, but a negatively curved hyperbolic space. We present a new concept that exploits these recent insights and propose learning neural embeddings of graphs in hyperbolic space. We provide experimental evidence that hyperbolic embeddings significantly outperform Euclidean embeddings on vertex classification tasks for several real-world public datasets.
rejected-papers
This paper does not meet the acceptance bar this year, and thus I must recommend it for rejection.
train
[ "HyiISdgef", "rynh2mGgf", "SyVv9AjWG", "HkfDunaZf", "BJ930g14M", "SyfSY3hmf", "rydq92kzf", "Hk6Ye6CWG", "Hk88g60Zf", "r1hExT0WG", "SJaMeTRbz", "SJRleTC-G", "r1YtIJCbM", "HJ5atCj-G", "rkKEbeO-z", "Sk2scfwWM", "SyFo-HpeM", "Sy6BFU_eG", "r1aJmPBef", "SylAxmQeM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer", "author", "author", "public", "author", "public" ]
[ "== Preamble ==\n\nAs promised, I have read the updated paper from scratch and this is my revised review. My original review is kept below for reference. My original review had rating \"4: Ok but not good enough - rejection\".\n\n== Updated review ==\n\nThe revised improves upon the original submission in several ways and, in particular, does a much better job at positioning itself within the existing body of literature. The new experiments also indicate that the proposed model offer some improvement over Nickel & Douwe, NIPS 2017).\n\nI do have remaining concerns that unfortunately still prevent me from recommending acceptance:\n\n- Throughout the paper it is argued that we should embed into a hyperbolic space. Such a space is characterized by its metric, but the proposed model do not use a hyperbolic metric. Rather it relies on a heuristic similarity measure that is inspired by the hyperbolic metric. I understand that this may be a practical choice, but then I find it misleading that the paper repeatedly states that points are embedded into a hyperbolic space (which is incorrect). This concern was also raised on this forum prior to the revision.\n\n- The resulting optimization is one of the key selling points of the proposed method as it is unconstrained while Nickel & Douwe resort to a constrained optimization. Clearly unconstrained optimization is to be preferred. However, it is not entirely correct (from what I understand), that the resulting optimization is indeed unconstrained. Nickel & Douwe work under the constraint that |x| < 1, while the proposed model use polar coordinates (r, theta): r in (0, infinity) and theta in (0, 2 pi]. Note that theta parametrize a circle, and therefore wrapping may occur (this should really be mentioned in the paper). The constraints on theta are quite easy to cope with, so I agree with the authors that they have a more simple optimization problem. However, this is only true since points are embedded on the unit disk (2D). Should you want to embed into higher dimensional spaces, then theta need to be confined to live on the unit sphere, i.e. |theta| = 1 (the current setting is just a special-case of the unit sphere). While optimizing over the unit sphere is manageable it is most definitely a constrained optimization problem, and it is far from clear that it is much easier than working under the Nickel & Douwe constraint, |x| < 1.\n\nOther comments:\n- The sentence \"even infinite trees have nearly isometric embeddings in hyperbolic space (Gromov, 2007)\" sounds cool (I mean, we all want to cite Gromov), but what does it really mean? An isometric embedding is merely one that preserves a metric, so this statement only makes sense if the space of infinite trees had a single meaningful metric in the first place (it doesn't; that's a design choice).\n\n- In the \"Contribution\" and \"Conclusion\" sections it is claimed that the paper \"introduce the new concept of neural embeddings in hyperbolic space\". I thought that was what Nickel & Douwe did... I understand that the authors are frustrated by this parallel work, but at this stage, I don't think the present paper can make this \"introducing\" claim.\n\n- The caption in Figure 2 miss some indication that \"a\" and \"b\" refer to subfigures. I recommend \"a\" --> \"a)\" and \"b\" --> \"b)\".\n\n- On page 4 it is mentioned that under the heuristic similarity measure some properties of hyperbolic spaces are lost while other are retained. 
From what I can read, it is only claimed that key properties are kept; a more formal argument (even if trivial) would have been helpful.\n\n\n== Original Review ==\n\nThe paper considers embeddings of graph-structured data onto the hyperbolic Poincare ball. Focus is on word2vec style models but with hyperbolic embeddings. I am unable to determine how suitable an embedding space the Poincare ball really is, since I am not familiar enough with the type of data studied in the paper. I have a few minor comments/questions to the work, but my main concern is a seeming lack of novelty:\nThe paper argues that the main contribution is that this is the first neural embedding onto a hyperbolic space. From what I can see, the paper\n\n Poincaré Embeddings for Learning Hierarchical Representations\n https://arxiv.org/abs/1705.08039\n\nconsider an almost identical model to the one proposed here with an almost identical motivation and application set. Some technicalities appear different, but (to me) it seems like the main claimed novelties of the present paper has already been out for a while. If this analysis is incorrect, then I encourage the authors to provide very explicit arguments for this in the rebuttal phase.\n\nOther comments:\n*) It seems to me that, by construction, most data will be pushed towards the boundary of the Poincare ball during the embedding. Is that a property you want?\n*) I found it rather surprising that the log-likelihood under consideration was pushed to an appendix of the paper, while its various derivatives are part of the main text. Given the not-so-tight page limits of ICLR, I'd recommend to provide the log-likelihood as part of the main text (it's rather difficult to evaluate the correctness of a derivative when its base function is not stated).\n*) In the introduction must energy is used on the importance of large data sets, but it appears that only fairly small-scale experiments are considered. I'd recommend a better synchronization.\n*) I find visual comparisons difficult on the Poincare ball as I am so trained at assuming Euclidean distances when making visual comparisons (I suspect most readers are as well). I think one needs to be very careful when making visual comparisons under non-trivial metrics.\n*) In the final experiment, a logistic regressor is fitted post hoc to the embedded points. Why not directly optimize a hyperbolic classifier?\n\nPros:\n+ well-written and (fairly) well-motivated.\n\nCons:\n- It appears that novelty is very limited as highly similar work (see above) has been out for a while.\n\n", "This paper proposes tree vertex embeddings over hyperbolic space. The conditional predictive distribution is the softmax of <v1, v2>_H = ||v1|| ||v2|| cos(theta1-theta2), and v1, v2 are points defined via polar coordinates (r1,theta1), and (r2,theta2).\nTo evaluate, the authors show some qualitative embeddings of graph and 2-d projections, as well as F1 scores in identifying the biggest cluster associated with a class. \n\nThe paper is well motivated, with an explanation of the technique as well as its applications in tree embedding in general. I also like the evaluations, and shows a clear benefit of this poincare embedding vs euclidean embedding.\n\nHowever, graph embeddings are now a very well explored space, and this paper does not seem to mention or compare against other hyperbolic (or any noneuclidean) embedding techniques. 
From a 2 second google search, I found several sources with very similar sounding concepts:\n\nMaximilian Nickel, Douwe Kiela, Poincaré Embeddings for Learning Hierarchical Representations\n\nA Cvetkovski, M Crovella, Hyperbolic Embedding and Routing for Dynamic Graphs\n\nYuval Shavitt, Tomar Tankel, Hyperbolic Embedding of Internet Graph for Distance Estimation and Overlay Construction\n\nThomas Bläsius, Tobias Friedrich, Anton Krohmer, andSören Laue. Efficient Embedding of Scale-Free Graphs in the Hyperbolic Plane\n\nI think this paper does have some novelty in applying it to the skip-gram model and using deep walk, but it should make more clear that using hyperbolic space embeddings for graphs is a popular and by now, intuitive construct. Along the same lines, the benefit of using the skip-gram and deep-walk techniques should be compared against some of the other graph embedding techniques out there, of which none are listed in the experiment section. \n\nOverall, a detailed comparison against 1 or 2 other hyperbolic graph embedding techniques would be sufficient for me to change my vote to accept. \n\n\n", "The authors present a neural embedding technique using a hyperbolic space.\nThe idea of embedding data into a space that is not Euclidean is not new.\nThere have been attempts to project onto (hyper)spheres.\nAlso, the proposal bears some resemblance with what is done in t-SNE, where an (exponential) distortion of distances is induced. Discussing this potential similarity would certainly broaden the readership of the paper.\n\nThe organisation of the paper might be improved, with a clearer red line and fewer digressions.\nThe call to the very small appendix via eq. 17 is an example.\nThe position of Table in the paper is odd as well.\nThe order of examples in Fig.5 differs from the order in the list.\n\nThe experiments are well illustrative but rather small sized.\nThe qualitative assessment is always interesting and it is completed with some label prediction task.\nDue the geometrical consideretations developed in the paper, other quality criteria like e.g. how well neighbourhoods are preserved in the embeddings would give some more insights.\n\nAll in all the idea developed in the paper sounds interesting but the paper organisation seems a bit loose and additional aspects should be investigated. ", "The authors present a method to embed graphs in hyperbolic space, and show that this approach yields stronger attribute predictions on a set of graph datasets. I am concerned by the strong similarity between this work and Poincaré Embeddings for Learning Hierarchical Representations (https://arxiv.org/abs/1705.08039). The latter has been public since May of this year, which leads me to doubt the novelty of this work.\n\nI also find the organization of the paper to be poor.\n- There is a surprisingly high number of digressions.\n- For some reason, Eq 17 is not included in the main paper. I would argue that this equation is one of the most important equations in the paper, given that it is the one you are optimizing.\n- The font size in the main result figure is so small that one cannot hope to parse what the plots are illustrating.\n- I am not sure what insights the readers are supposed to gain from the visual comparisons between the Euclidean and Poincare embeddings. \n\nDue to the poor presentation, I actually have difficulty making sense of the evaluation in this paper (it would help if the text was legible). 
I think this paper requires significant work and it not suitable for publication in its current state.\n\nAs a kind of unrelated note. It occurs to me that papers on hyperbolic embeddings tend to evaluate evaluate on attribute or link prediction. It would be great if authors would also evaluate these pretrained embeddings on downstream applications such as relation extraction, knowledge base population etc.", "The authors have addressed my major issue so I have changed my vote to accept.", "We thank the reviewers for their careful consideration of our paper and for offering improvements to the manuscript. We have submitted a new version of the paper that addresses the comments of the reviewers:\n\nRelated work on hyperbolic embeddings of graphs has been added. The paper ‘Poincaré Embeddings for Learning Hierarchical Representations’, which appeared concurrently with ours and was mentioned by several reviewers is now explicitly referenced and the differences between this work and our own are described and quantitatively evaluated.\n\nAn explanation of how our embedding sacrifices some of the properties of the hyperbolic space and why this is acceptable in the context of learning embeddings has been added.\n\nThe appendix has been merged with the main text to include Eq 17. as several reviewers identified that this inhibited readability.\n\nThe results section has been improved by including a comparison with the hyperbolic method of ‘Poincaré Embeddings for Learning Hierarchical Representations’ and the charts have been regenerated with larger labels and displayed in an order that is consistent with the tables. A new results table has been added to ease quantitative comparisons between methods. An explanation of how the embedding diagrams in Fig 5 should be interpreted has also been added.", "Okay, I think that's a reasonable point. From this description, I'm having a hard time determining if this is an important or a minor insight, but it sounds like there's a valuable insight.\n\nBut I'll make you an offer: if you update the paper to discuss these matters in detail, then I'll re-review the paper from scratch. Please also take comments from the other reviewers into account. If you can document that there's an empirical benefit to your parametrization (you already hinted at that), then that's also great (given the short time available, I acknowledge that this may not be possible).", "Thank you, we appreciate your sympathy. \n\nThe Poincare representation has circular symmetry such that all elements on the boundary of a circle centered at the origin have the same distortion to the Euclidean length. By representing the system in spherical co-ordinates the update equations are greatly simplified with the radial updates being exactly the Euclidean updates and the angular update a simple modification of the Euclidean. This simplicity could explain why we are able to get compelling results after just 5 epochs while “Poincaré Embeddings for Learning Hierarchical Representations” (Nickel and Kiela, NIPS 2017) require a 20 epoch burn in period. \n\nIn addition, the heuristic of “Poincaré Embeddings for Learning Hierarchical Representations” (Nickel and Kiela, NIPS 2017) that places points that the optimizer pushes outside of the boundary of the Poincare ball is troubling as it requires arbitrarily moving some points an infinite hyperbolic distance, which seems like an undesirable property for an optimizer. 
This problem does not exist in natural coordinates as the Poincare ball boundary is at infinity.\n", "Thank you for your helpful comments. We will add comparisons with more graph embedding methods and expand the literature review in future versions.", "This seems to be a duplicate comment", "Thank you for your suggestions on how to improve the paper. We do not claim that the non-Euclidean embeddings are new, only that doing so through a neural network is and that a change in geometry can markedly improve the representations learned by the very popular SkipGram / Word2vec model. We agree that eq. 17 should be moved to the main body and that it would improve readability to re-order some of the tables.", "Our paper is similar to “Poincaré Embeddings for Learning Hierarchical Representations” (Nickel and Kiela, NIPS 2017). However, both were written independently at the same time. While they are similar, there are several important differences between them. Our work uses a spherical coordinate system, instead of cartesian coordinates, which we believe is more elegant as it exploits the symmetry of the Poincare ball. In addition, we use natural hyperbolic coordinates that extend radially (0,inf) instead of (0,1). Natural coordinates remove the numerical issues at the boundary of the ball. In the paper by Nickel and Kiela, this is a problem as the optimization will push points to values greater than 1. To fix this, they introduce a heuristic that moves points a small distance back inside the ball when the optimizer pushes them outside. As most of the space is towards the edge of the ball, many of the points in their system are effectively placed solely by the heuristic. This is not a problem in natural coordinates and we simply switch back coordinate systems on completion of the optimization process. In addition, we use a different similarity metric based on cosine similarity instead of the hyperbolic distance.\n\nDespite the similarity, it is worthwhile publishing this paper as it provides a complementary perspective on the problem and grows the evidence base for this as a powerful technique that other researchers can employ. Our paper shows that the principle of hyperbolic embedding can be achieved using a cosine-similarity approach as well as the distance-similarity approach used in the other paper, and so demonstrates a more general applicability of the idea. An example of where this has previously occurred are Auto-Encoding Variational Bayes (Kingma and Welling, ICLR 2014) https://arxiv.org/abs/1312.6114 and later Stochastic Backpropagation and Approximate Inference in Deep Generative Models Rezende et al., ICML 2014) https://arxiv.org/abs/1401.4082\nBoth papers contained the same idea and appeared on arxiv within a month, but provided different perspectives.\n\nIn response to the other comments, eq. 17 was removed as it is well known, but we agree that this affects readability and will include this in the main text of future versions. Similarly we will increase the font size of the axis labels. The comparisons of hyperbolic and Euclidean embeddings are showing that in hyperbolic space the classes are more easily separable. This is important as the results use a logistic regression classifier.\n", "Since hyperbolic space is not a vector space (only a metric space) it doesn't make sense to define an inner product between two points on the manifold. 
This is a significant flaw in the paper.", "The authors present a neural embedding technique using a hyperbolic space.\nThe idea of embedding data into a space that is not Euclidean is not new.\nThere have been attempts to project onto (hyper)spheres.\nAlso, the proposal bears some resemblance with what is done in t-SNE, where an (exponential) distortion of distances is induced. Discussing this potential similarity would certainly broaden the readership of the paper.\n\nThe organisation of the paper might be improved, with a clearer red line and fewer digressions.\nThe call to the very small appendix via eq. 17 is an example.\nThe position of Table in the paper is odd as well.\nThe order of examples in Fig.5 differs from the order in the list.\n\nThe experiments are well illustrative but rather small sized.\nThe qualitative assessment is always interesting and it is completed with some label prediction task.\nDue the geometrical consideretations developed in the paper, other quality criteria like e.g. how well neighbourhoods are preserved in the embeddings would give some more insights.\n\nAll in all the idea developed in the paper sounds interesting but the paper organisation seems a bit loose and additional aspects should be investigated. ", "I'm sympathetic to your position; on a personal level I understand the frustration of seeing a large body of work reduced simply because somebody else got there infinitesimally faster.\n\nWith that in mind, I'm willing to listen to the arguments of why this paper brings \"new perspective\" to the table. You bring up two key differences:\n\n1) You use a natural parametrization of the Poincare ball, which you state is more \"elegant\". I see that there is some benefit to having an unconstrained optimization vs one which is constrained. Is that what you mean by \"elegant\" or is there more to this claimed elegance? Can you empirically quantify the benefits of your parametrization?\n\n2) You use a cosine-based similarity measure rather than the standard hyperbolic distance. You mention this is a difference, but does it also offer benefits? From a geometric point of view, I guess it seems more \"elegant\" to use the standard hyperbolic distance. ", "Our paper is similar to “Poincaré Embeddings for Learning Hierarchical Representations” (Nickel and Kiela, NIPS 2017). However, both were written independently at the same time. While they are similar, there are several important differences between them. Our work uses a spherical coordinate system, instead of cartesian coordinates, which we believe is more elegant as it exploits the symmetry of the Poincare ball. In addition, we use natural hyperbolic coordinates that extend radially (0,inf) instead of (0,1). Natural coordinates remove the numerical issues at the boundary of the ball. In the paper by Nickel and Kiela, this is a problem as the optimization will push points to values greater than 1. To fix this, they introduce a heuristic that moves points a small distance back inside the ball when the optimizer pushes them outside. As most of the space is towards the edge of the ball, many of the points in their system are effectively placed solely by the heuristic. This is not a problem in natural coordinates and we simply switch back coordinate systems on completion of the optimization process. 
In addition, we use a different similarity metric based on cosine similarity instead of the hyperbolic distance.\n\nDespite the similarity, it is worthwhile publishing this paper as it provides a complementary perspective on the problem and grows the evidence base for this as a powerful technique that other researchers can employ. Our paper shows that the principle of hyperbolic embedding can be achieved using a cosine-similarity approach as well as the distance-similarity approach used in the other paper, and so demonstrates a more general applicability of the idea. An example of where this has previously occurred are Auto-Encoding Variational Bayes (Kingma and Welling, ICLR 2014) https://arxiv.org/abs/1312.6114 and later Stochastic Backpropagation and Approximate Inference in Deep Generative Models Rezende et al., ICML 2014) https://arxiv.org/abs/1401.4082\nBoth papers contained the same idea and appeared on arxiv within a month, but provided different perspectives.\n\n", "We agree with the comment that the similarity is somewhat heuristic and inspired by the hyperbolic geometry. We care about the quality of the embeddings most of all. The use of heuristics to find good embeddings is quite common in the literature. For instance, negative sampling is often used, which abandons the strict maximization of the log probability of the softmax to learn more efficiently. What we are doing is in the same spirit as negative sampling.", "What is misleading is to describe these embeddings as inheriting the geometrical properties of the hyperbolic space, which isn't true. \n\nIt is true that the similarity function defined by the authors inherits some of the properties of the hyperbolic space, as they explained it above. \n\nHowever, it should be emphasized that it is only vaguely related to the hyperbolic metric via some heuristic, and that most properties characterizing a space endorsed with a hyperbolic structure will not be satisfied by the word-embedding space.\n\nNamely, with this similarity measure, one loses the possibility to use the conformality of the hyperbolic metric, which gave us closed forms to compute curvature tensors, volume elements, the metric tensor, the exponential map and geodesics... Which should be clearly stated at the beginning of the paper, in order to not mislead readers expecting a real exploitation of hyperbolic geometry in the embedding space. ", "\nHyperbolic space is not a vector space and so does not have a globally defined inner-product. We have defined a measure of similarity between embedded points, which is a cosine similarity weighted by the distance in hyperbolic space. \nThe hyperbolic metric comes into this because it is the hyperbolic distances from the origin (using this metric) that weight the cosine distance.\nThe net effect is that when the coordinates of points are updated, the updates in angular directions (ie. perpendicular to the radial direction) are suppressed for points far away from the origin, by a factor related to their distance from the origin in the hyperbolic space. This has the desired effect of allowing many peripheral points to be mutually distant, while simultaneously close to central points, as, for example, in figures 3 and 4.\nThe intention is to be able to use the machinery of neural embeddings while also getting the useful geometric properties of hyperbolic space.", "This paper is completely wrong: by changing the dot-product, you cannot talk about a hyperbolic space anymore. 
\n\nThe dot-product given by the authors has nothing to do with the hyperbolic riemannian metric.\n\n" ]
[ 4, 7, 5, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 3, 2, 3, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_S1xDcSR6W", "iclr_2018_S1xDcSR6W", "iclr_2018_S1xDcSR6W", "iclr_2018_S1xDcSR6W", "Hk88g60Zf", "iclr_2018_S1xDcSR6W", "Hk6Ye6CWG", "rkKEbeO-z", "rynh2mGgf", "HJ5atCj-G", "SyVv9AjWG", "HkfDunaZf", "SylAxmQeM", "iclr_2018_S1xDcSR6W", "Sk2scfwWM", "HyiISdgef", "Sy6BFU_eG", "r1aJmPBef", "SylAxmQeM", "iclr_2018_S1xDcSR6W" ]
iclr_2018_SJd0EAy0b
Generalized Graph Embedding Models
Many types of relations in physical, biological, social and information systems can be modeled as homogeneous or heterogeneous concept graphs. Hence, learning from and with graph embeddings has drawn a great deal of research interest recently, but only ad hoc solutions have been obtained thus far. In this paper, we conjecture that the one-shot supervised learning mechanism is a bottleneck in improving the performance of graph embedding learning algorithms, and propose to extend it by introducing a multi-shot unsupervised learning framework. Empirical results on several real-world data sets show that the proposed model consistently and significantly outperforms existing state-of-the-art approaches on knowledge base completion and graph-based multi-label classification tasks.
rejected-papers
This paper does not meet the acceptance bar this year, and thus I must recommend it for rejection.
train
[ "SJ5CeLYef", "S12o7fqlM", "Byp8oT3xf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper is well-written and provides sufficient background on the knowledge graph tasks. The current state-of-the-art models are mentioned and the approach is evaluated against them. The proposed model is rather simple so it is really surprising that the proposed model performs on par or even outperforms existing state-of-the art approaches.\n\n\n? The E_CELLs share the parameters. So, there is a forced symmetry on the relation i.e. given input head h and relation r predicting x and given input relation r and tail t predicting y would result in the same entity embedding x=y with h=t?\n\n? In Table 2, you report the results of the retrained models GEN(x). There, the weights for the MLPs are learned based on the existing embeddings which do not get changed. I am missing a comparison of the change in the prediction score. Was it always better than the original model? Did all models improve in a similar fashion?\n\n? Did you try training the other models e.g. TransE with alternating objective functions for respectively predicting the head, tail or relation based on the information from the other two? \n\n? Are the last 3 Gen(x,y -> z) rows in Table 2 simple MLPs for the three different tasks and not the parts from the overall joint learned GEN model?\n\n? Why is a binary classifier for Q4 not part of the model?\n\n? Is the code with the parameter settings online?\n\n\n+ outperforms previous approaches\n\n+ proposes a general use case framework\n\n- no run-time evaluation although it is crucial when one deals with large-scale knowledge graphs\n\n\nFurther comments:\n* p.4: “it will take the embedding of h and r as input, and take r as its target label” -> “it will take the embedding of h and t as input, and take r as its target label”\n* “ComplEX” -> “ComplEx”\n", "This paper tackles the task of learning embeddings of multi-relational graphs using a neural network. As much of previous work, the proposed architecture works on triples (h, r, t) wth h, t entities and r the relation type. \n\n\nDespite interesting experimental results, I find that the paper carries too many imprecisions as is.\n* One of the main originality of the approach is to be able for a given input triple to train by sequentially removing in turn the head h, then the tail t and finally the relation r. (called multi-shot in the paper). However, most (if not all) approaches learning embeddings of multi-relational graphs also create multiple examples given a triple. And that, at least since \"Learning Structured Embeddings of Knowledge Bases\" by Bordes et al. 2011 that was predicting h and t (not r). The only difference is that here it is done sequentially while most methods sample one case each time. Not really meaningful or at least not proved meaningful here.\n* The sequential/RNN-like structure is unclear and it is hard to see how it relates to the data.\n* Writing that the proposed method \"unsupervised, which is distinctly different from previous works\" is not true or should be rephrased. The only difference comes from that the prediction function (softmax and not ranking for instance) and the loss used. But none of the methods compared in the experiments use more information than GEN (the original graph). GEN is not the only model using a softmax by the way.\n* The fact of predicting indistinctly a fact or its reverse seems rather worrying to me. Predicting that \"John is_father_of Paul\" or that \"John is_child_of Paul\" is not the same..! How is assessed the fact that a prediction is conceptually correct? 
Using types?\n* The bottom part of Table 2 is surprising. How come for the task of predicting Head, the model trained only at predicting heads (GEN(t,r => h)) performs worse than the model trained only at predicting tails (GEN(h,r => t))? \n\n\n\n", "The paper proposes a new method to compute embeddings of multirelational graphs. In particular, the paper proposes so-called E-Cells and R-Cells to answer queries of the form (h,r,?), (?,r,t), and(h,?,t). The proposed method (GEN), is evaluated on standard datasets for link prediction as well as datasets for node classification.\n\nThe paper tackles an interesting problem, as learning from graphs via embedding methods has become increasingly important. The experimental results of the proposed model, especially for the node classification tasks, look promising. Unfortunately, the paper makes a number of claims which are not justified or seem to result from misconceptions about related methods. For instance, the abstract labels prior work as \"ad hoc solutions\" and claims to propose a principled approach. However, I do not see how the proposed method is a more principled than previously proposed methods. For instance, methods such as RESCAL, TransE, HolE or ComplEx can be motivated as compositional models that reflect the compositional structure of relational data. Furthermore, RESCAL-like models can be linked to prior research in cognitive science on relational memory [3]. HolE explicitly motivates its modeling through its relation to models for associative memory. \n\nFurthermore, due to their compositional nature, these model are all able to answer the queries considered in the paper (i.e, (h,r,?), (h,?,t), (?,r,t)) and are implicitly trained to do so. The HolE paper discusses this for instance when relating the model to associative memory. For RESCAL, [4] shows how even more complicated queries involving logical connectives and quantification can be answered. It is therefore not clear how to proposed method improves over these models.\n\nWith regard to the evaluation: It is nice that the authors provided an evaluation which compares to several SOTA methods. However, it is unclear under which setting these results where obtained. In particular, how were the hyperparameter for each model chosen and which parameters ranges were considered in the grid search. Appendix B.2 in the supplementary seems to specify the parameter setting for GEN, but it is unclear whether the same parameters where chosen for the competing models and whether they were trained with similar methods (e.g., dropout, learning rate decay etc.). The big difference in performance of HolE and ComplEx is also surprising, as they are essentially the same model (e.g. see [1,2]). It is therefore not clear to me which conclusions we can draw from the reported numbers.\n\nFurther comments:\n- p.3: The statement \"This is the actual way we humans learn the meaning of concepts expressed by a statement\" requires justification\n- p.4: The authors state that the model is trained unsupervised, but eq. 10 clearly uses supervised information in form of labels.\n- p.4: In 3.1, E-cells are responsible to answer queries of the form (h,r,?) and (?, r, t), while Section 3.2 says E-Cells are used to answer (h, ?, t). I assume in the later case, the task is actually to answer (h,r,?)?\n- p.2: Making a closed-world assumption is quite problematic in this context, especially when taking a principled approach. 
Many graphs such as Freebase are very incomplete and make an explicit open-world assumption. \n- The paper uses a unusual definition of one-shot/multi-shot learning, which makes it confusing to read at first. The authors might consider using different terms to improve readability.\n- Paper would benefit if the model is presented earlier. GEN Cells are defined only in Section 3.2, but the model is discussed earlier. Reversing the order might improve presentation.\n\n[1] K. Hayashi et al: \"On the Equivalence of Holographic and Complex Embeddings for Link Prediction\", 2017\n[2] T.Trouillon et al: \"Complex and holographic embeddings of knowledge graphs: a comparison\", 2017\n[3] G. Halford et al: \"Processing capacity defined by relational complexity: Implications for comparative, developmental, and cognitive psychology\", 1998.\n[4] D. Krompaß et al: \"Querying factorized probabilistic triple databases\", 2014" ]
[ 6, 4, 3 ]
[ 4, 4, 4 ]
[ "iclr_2018_SJd0EAy0b", "iclr_2018_SJd0EAy0b", "iclr_2018_SJd0EAy0b" ]
iclr_2018_S1viikbCW
TCAV: Relative concept importance testing with Linear Concept Activation Vectors
Despite neural networks’ high performance, their lack of interpretability has been the main bottleneck to their safe use in practice. In domains with high stakes (e.g., medical diagnosis), gaining insights into the network is critical for it to gain trust and be adopted. One of the ways to improve the interpretability of an NN is to explain the importance of a particular concept (e.g., gender) in prediction. This is useful for explaining the reasoning behind the network’s predictions, and for revealing any biases the network may have. This work aims to provide quantitative answers to \textit{the relative importance of concepts of interest} via concept activation vectors (CAV). In particular, this framework enables non-machine-learning experts to express concepts of interest and test hypotheses using examples (e.g., a set of pictures that illustrate the concept). We show that a CAV can be learned from a relatively small set of examples. Testing with CAVs, for example, can answer whether a particular concept (e.g., gender) is more important in predicting a given class (e.g., doctor) than another set of concepts. Interpreting with CAVs does not require any retraining or modification of the network. We show that many levels of meaningful concepts are learned (e.g., color, texture, objects, a person’s occupation), and we present CAV’s \textit{empirical deepdream}, where we maximize an activation using a set of example pictures. We show how various insights can be gained from relative importance testing with CAVs.
rejected-papers
This paper does not meet the acceptance bar this year, and thus I must recommend it for rejection.
train
[ "ryetNfcxG", "rkMtrl6bz", "H1EFxgC-f", "BykC2IA-G", "SJCVW6WfM", "r1C4WcxzM", "SyqwWceMM", "BJT3eceMG", "BkAu19xzM", "H18MkJAWf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "public" ]
[ "Summary\n---\nThis paper proposes the use of Concept Activation Vectors (CAVs) for interpreting deep models. It shows how concept activation vectors can be used to provide explanations where the user provides a concept (e.g., red) as a set of training examples and then the method provides explanations like \"If there were more red in this image then the model would be more likely to classify it as a fire truck.\"\n\nFour criteria are enumerated for evaluating interpretability methods:\n1. accessibility: ML background should not be required to interpret a model\n2. customization: Explanations should be generated w.r.t. user-chosen concepts\n3. plug-in readiness: Should be no need to re-train/modify the model under study\n4. quantification: Explanations should be quantitative and testable\n\nA Concept Activation Vector is simply the weight vector of a linear classifier trained on some examples (100-500) of a user-provided concept of interest using features extracted from an intermediate network layer. These vectors can be trained in two ways:\n1. 1-vs-all: The user provides positive examples of a concept and all other existing training data is treated as negatives\n2. 1-vs-1: The user provides sets of positive and negative examples, allowing the negative examples to be targeted to one category\n\nOnce a CAV is obtained it is used in two ways:\nFirst, it provides further verification that higher level concepts tend to be \"disentangled\" in deeper network layers while low level concepts are \"disentangled\" earlier in the network. This work shows that linear classifier accuracy increases significantly using deeper features for higher level concepts but it only increases marginally (or even decreases) when modeling lower level concepts.\n\nSecond, and this is the main point of the paper, relative importance of concepts w.r.t. a particular task can be evaluated. Suppose an image (e.g., of a zebra) produces a feature vector f_l at layer l and v_l is a concept vector learned to classify the presence of stripes from layer l features. Then the probability the model assigns to the zebra class can be evaluated using features f_l and then f_l + v^c_l. If the latter probability is greater then adding stripes will increase the model's confidence in the zebra class. Furthermore, the method goes on to measure how often stripes increase zebra confidence across all images. Rather than explaining the network's decision for a particular image, this average metric measures the global importance of the stripes concept for zebra. The paper reports examples of the relative importance of certain concepts with respect to others in figure 5.\n\n\nPros\n---\n\nThe paper proposes a simple and novel idea which could have a major impact on how deep networks are explained. At a high level the novelty comes from replacing the gradient (or something similar) used in saliency methods with a directional derivative. Users can align the direction to any concept they find relevant, so the concept space used to explain a prediction is no longer fixed a-priori (e.g. to pixels in the input space). It can adapt to user suspicions and expectations.\n\n\nCons\n---\n\nConcerns about story/presentation:\n\n* The second use of CAVs, to test relative importance of concepts, is basically an improved saliency method. It's advantages over other saliency methods are stated clearly in 2.1, but it should not be portrayed as fundamentally different.\n\nThe two quantities in eq. 1 can be thought of in terms of directional derivatives. 
To compute I_w^up start by computing a finite differences approximation of directional derivative of the linear classifier probability p_k(y) with respect to layer l features in the direction of the CAV v_C^l. Call this quantity g_i (for the ith example). Then I_w^up is the average of 1(g_i > 0) over all examples. I think the notion of relative importance used here is basically the idea of a directional derivative.\n\nThis doesn't change the contribution of the paper but it should be mentioned and section 2.1 should be changed so it doesn't suggest this method is fundamentally different than saliency methods in terms of criteria 4.\n\n* Evaluation and Desiderata 4: The fourth criteria for interpretability laid out by the paper says an explanation should be quantitative and testable. I'm not sure exactly what this is supposed to mean. I see two ways to interpret the quantitative criterion.\n\nOne way to interpret the \"quantifiability\" criterion is to say that it requires explanations to be presented as numeric values. But most methods do this. In particular, saliency methods report results in terms of pixel brightness (that is a numeric quantity) even though humans may not know how to interpret that correctly. I do not think this is what was intended, so my second option is to say that the criterion requires an explanation be judged good or bad according to some quantitative metric. But this paper provides no such metric. The explanations in figure 5 are not presented as good or bad according to any metric.\n\nWhile it is significant that the method meets the first 3 criteria, these do not establish the fidelity of the method. Do humans generalize these explanations to valid inferences about model behavior? Maybe consider some evaluation options from section 3 of Doshi-Velez and Kim 2017 (cited in the paper).\n\n* Section 4.1.1: \"This experiment does not yet show that these concept activation vectors align with the concepts that makes sense semantically to humans.\"\n\nIsn't test set accuracy a better measure of alignment with the human concept than the visualizations? Given a choice between a concept vector which produced good test accuracy and poor visualizations and another concept vector which produced poor test accuracy and good visualizations I would think the one with good test accuracy is better aligned to the human concept. I would still prefer a concept vector which satisfies both.\n\n* Contrary to the description in section 2.2, I think DeepDream optimizes a natural image (non-random initialization) rather than starting from a random image. It looks like these visualization start from a random initialization. Which method is used? Maybe cite this paper, which gives a nice overview: \"Multifaceted Feature Visualization: Uncovering the Different Types of Features Learned By Each Neuron in Deep Neural Networks\" by Nguyen et. al. in the Visualization for Deep Learning workshop at ICML 2016\n\n* In section 4.1.3 I'm not quite sure what the point is. Please state it more clearly. Is the context class the same as the negative set used to train the classifier? Why should it be different/harder to sort corgi examples according to a concept vector as opposed to sorting all examples according to a concept vector? 
This seems like a useful way of testing to be sure CAV's represent human concepts, but I'm not sure what context concepts like striped/CEO provide.\n\n* Relative vs absolute importance and user choice: Section 4.2 claims that figure 5 shows that a CAV \"captures an important aspect of the prediction.\" I would be a bit more careful about the distinction between relative and absolute here. If red makes images more probably fire trucks then it doesn't necessarily mean that red is important for the fire truck concept in an absolute sense. Can we be sure that there aren't other concepts which more dramatically affect outputs? What if a user makes a mistake and only requests explanations with respect to concepts that are irrelevant to the class being explained? Do we need to instruct users on how to best interpret the explanation?\n\n* How practical is this method? Is it a significant burden for users to provide 100-500 images per concept? Are the top 100 or so images from a search engine good enough to specify a CAV?\n\n\nMinor missing experimental settings and details:\n\n* Section 3 talks about a CAV defined with respect to a non-generic set D of negative examples. Is this setting ever used in the experiments or is the negative set always the same? How does specifying a narrow set of negatives change the CAV for concept C?\n\n* I assume the linear classifier is a logistic regressor, but this is never stated.\n\n* TCAV measures importance/influence as an average over a dataset. This is a validation set, right? For how many of these images are both the user concept and target concept unrelated to the image content (e.g., stripes and zebra for an image of a truck)? When that happens is it reasonable to expect meaningful explanations? They may not be meaningful because the data distribution used to train the CAV probably does not even sparsely cover all concepts in the network's train set. (related to \"reference points\" in \"The (Un)reliability of Saliency Methods\" submitted to ICLR18)\n\n* For relative importance testing it would be nice to see a note about the step size selection (1.0) and experiments that show the effect of different step sizes. Hopefully influence is monotonic in step size so that different step sizes do not significantly change the results.\n\n* How large is the typical difference between p_k(y) and p_k(y_w) in eq. 1? If this difference is small then is it meaningful? Are small differences signal or noise?\n\n\nFinal Evaluation\n---\nI would like to see this idea published, but not in its current form. The method meets a relevant set of criteria that no other method seems to meet, but arguments set forth in the story need some revision and the empirical evaluation needs improvement, especially with respect to model fidelity. I would be happy to change my rating if the above points are addressed.", "Strengths:\n1. This paper proposes a novel method called Concept Activation Vectors (CAV) which facilitates interpretability of neural networks by explaining how much a specific concept influences model predictions. \n2. The proposed method tries to incorporate multiple desiderata, namely, accessibility to non ML experts, customizability w.r.t. being able to explain any concept of interest, plug-in readiness i.e., providing explanations\nwithout requiring retraining of the model. \n\nWeaknesses:\n1. While this work is conceptually interesting, the technical novelty and contributions seem fairly minimal. \n2. The presentation of this paper is one of its weakest points. 
The organization of the content is quite incoherent. The paper also makes a lot of claims (e.g., hypothesis testing) which are not really justified. \n3. The experimental evaluation of this paper is quite rudimentary. Lots of details are missing. \n\nSummary: This paper proposes a novel framework for explaining the functionality of neural networks by using a simple idea. The intuition behind the proposed approach is as follows: by using the weight vectors of linear classifiers, which take as inputs the activation layer outputs of a given neural network (NN) model and predict the concepts of interest, we can understand the influence of specific concepts of interest on the NN model behavior. The authors claim that this simple approach can be quite useful in providing explanations that can be useful for a variety of purposes including testing specific hypothesis which is never really demonstrated or explained well in the paper. Furthermore, lot of details are lacking in both the experimentation section and the methods section (detailed comments below). The experiments also do not correspond well to the claims made in the introduction and abstract. This paper is also very hard to read which makes understanding the proposed method and other details quite challenging. \n\nNovelty: The novelty of this paper mainly stems from its proposed method of using prototypes which serve as positive and negative examples w.r.t. a specific concept, and leveraging the weight vectors obtained when predicting the positive/negative classes using activation layer outputs to understand the influence of concepts of interest. The technical novelty of the proposed approach is fairly minimal. The experiments also do not support a lot of novelty claims made about the proposed approach. \n\nOther detailed comments:\n1. I would first encourage the authors to improve the overall presentation and organization of this paper. \n2. Please add some intuition about the approach in the introduction. Also, please be succinct in explaining what kind of interpretability is provided by the explanations. I would advise the authors to refrain from making very broad claims and using words such as hypothesis testing without discussing them in detail later in the paper. \n3. Sections 2.3 and 2.4 are quite confusing and can probably be organized and titled differently. In fact, I would advise the authors to structure related work as i. inherently interpretable models ii. global explanations \niii. local explanations iv. neuron level investigation methods. Highlight how existing methods do not incorporate plug-in readiness and/or other desiderate wherever appropriate within these subsections. \n4. Additional related work on inherently interpretable models and global explanations: \ni. Interpretable classifiers using rules and Bayesian analysis, Annals of Applied Statistics, 2015\nii. Interpretable Decision Sets: A joint framework for description and prediction, KDD, 2016\niii. A Bayesian Framework for Learning Rule Sets for Interpretable Classification, JMLR, 2017\niv. Interpretable and Explorable Explanations of Black Box Models, FAT ML, 2017\n5. In section 3, clearly identify what are the inputs and outputs of your method. Also, clearly highlight the various ways in which outputs of your method can be used to understand the model behavior. While Secction 3.2 and 3.3 attempt to describe how the CAV can be used to explain the model behavior, the presentation in these sections can be improved. \n6. 
I think the experimental sections suffers from the following shortcomings: i. it does not substantiate all the claims made in the introduction ii. some of the details about which layer outputs are being studied are missing through out the section. \n\nOverall, while this paper proposes some interesting ideas, I think it can be improved significantly in terms of its clarity, claims, and evaluation. \n", "The paper deals with concept activation vectors, which the authors aim at using for interpretability in deep feed-forward networks. This is a critical sub-field of deep learning and its importance is only rising. While deep networks have yielded grounbreaking results across several application domains, without explanations for why the network predicts a certain class for a data point, its applicability in sensitive fields, such as medicine, will be limited. The authors put forth four desiderata and aim to construct a methodology that satisfies all of them. The concept vector is the 2-class logistic regression solution that discriminates between two classes of images (a grounded idea and other). This vector is used to amplify or diminish the effect of a concept at a certain layer, thus leading to differing output probabilities. The difference in probability can be used to understand, qualitatively, the importance of the concept. I have a few major and minor concerns, which I detail below. \n\n* The structure and exposition of the paper needs to be significantly improved. Important sections of the paper are difficult to parse, for instance, Sections 2.3 and 2.4 seem abrupt. Also, the text and the contributions have a mismatch. The authors make several strong claims (hypothesis testing, testable quantifying information, etc.) about their approach which are not entirely validated by the results. The authors should especially consider rewriting portions of Sections 1 and 2; many of the statements are difficult to understand. There are many instances (e.g., the ears of the cat example) where a picture or graphic of some kind will greatly benefit the reader. What would also be useful is a Table with the rows being the 4 desiderata and the columns being various previous approaches. \n\n* Am I right in assuming that the concept vector discriminator is simple (un-regularized) logistic regression?\n\n* I don't quite understand why the weights of a discriminator of activations stands as a concept activation vector. The weights of the discriminator would be multiplied by the activations to figure out whether are in the concept class or not; I especially don't grasp why adding those weights should help tease the effect. \n\n* Is the idea limited to feed-forward networks, or is it also applicable for recurrent-like networks? If not, I would encourage the authors to clarify in the title and abstract that this is the case. \n\n* For Equation (1), what is the index 'i' over? \n\n* In reference to Figure 1, have you experimented with using more data for the concepts that are difficult to discriminate? Instead of asking the practitioners for a set amount of examples, one could instead ask them for as much as to discriminate the classes with a threshold (say, 70%) accuracy. \n\n* In the same vein, if a certain concept has really poor predictability, I would assume that the interpretability scores will be hampered as well. How should this be addressed?\n\n* The authors desire a quantitative and testable explanation. I'm not sure what the authors do for the latter. 
\n", "This paper tries to analyze the interpretability of a trained neural network, by representing the concepts, as their hidden features (vectors) learned on training data. They used images of several example of a concept or object to compute the mean vector, which represent the concept, and analyzed, both qualitatively and quantitatively, the relationship between different concepts. The author claimed that this approach is independent of concept represented in training data, and can be expanded to any concepts, i.e. zero shot examples. \n\nMajor comments:\n\n1- The analysis in the experiment is limited on few examples on how different concept in the training set is related, measures by relative importance, or not related by created a negative concept vector of un related or random images. However, this analysis severely lacks in situation where training set is limited and induces biases towards existing concepts\n\n2-The author claims that this approach encompass following properties,\naccessibility: Requires little to no user expertise in machine learning. \ncustomization: Adapt to any concept of interest (e.g., gender) on the fly without pre-listing a set of concepts before training. \nplug-in readiness: Work without retraining or modifying the model. \nquantification: Provide quantitative and testable information.\n\nRegarding 1) analyzing the relationship between concepts vectors and their effect of class probability need some minimal domain knowledge, therefore this claim should be mitigated\nRegarding 2) Although some experiment demonstrates the relationship between different colors or properties of the object wearing a bikini can shed a light in fairness of the model, it is still unclear that how this approach can indicates the biases of training data that is learned in the model. In case of limited train data, the model is incapable of generalize well in capturing the relationship between all general concepts that does not exist in the training data. Therefore, a more rigorous analysis is required.\nRegarding 3) compare to deepdream that involved an optimization step to find the image maximizing a neuron activation, this is correct. However, guided back propagation or grad-cam method also does not need any retraining or model tweaking.\n\nMinor comments:\n \n1- there are many generic and implicit statements with no details in the paper which need more clarification, for example, \n\nPage 4, paragraph 2: “For example, since the importance of features only needs to be truthful in the vicinity of the data point of interest, there is no guarantee that the method will not generate two completely conflicting explanations.”\n\n2- equation 1: subscript “i” is missing\n\n3- section 4.2: definition for I^{up/down} of equation 1 is inconsistent with the one presented in this section\n", "Thanks for the re-write. This has improved the paper, though it has made it 12 pages long and I still have some concerns.\n\nAuthor Response: \"TCAV is quantitative and directly ties the quantification to the explanation... The p-value of this test is thus the metric we use.\"\n\nI can not find these p-values in the paper. Ideally, for every selected class k these should be reported for all all concepts C (e.g., instead of just using \"yellow, green, blue, red\" for the Fire engine class). 
Additionally, at least some classes should be selected randomly and the method for choosing random concept vectors should be described.\n\nThe proposed experiments only establish that CAVs have a significant effect on model outputs. Showing this will be enough for me to increase my rating, but I'm still more interested in what effect TCAV has on human understanding of models. I would increase my rating further if this kind of evaluation was provided.\n\nHere's a suggested experiment along those lines: Take the top 5 classes the model predicted for a certain example. Provide the subject who is trying to predict model behavior with CAV explanations for each of these classes and the image. Ask the user to predict which class the model actually ranked 1st (and maybe which the model ranked 2-5 as well). If humans with TCAV do better than humans without TCAV then I'm much more comfortable saying TCAV helps with interpretability.\n\n\nAuthor response: \"Thank you for pointing out the relationship between saliency methods and this work, it is...\"\n\nThe incorporation of directional derivatives wasn't quite correct: \"saliency maps take the derivative of the logits with respect to each pixel, while our work takes derivatives with respect to a concept direction.\" It should be that TCAV takes derivatives \"in the direction of\" a concept, not \"with respect to\" a concept. Directional derivatives are not gradients.\n\nI think the paper's presentation of saliency techniques is a bit misleading.\nThere is a significant difference between TCAV and saliency methods (criteria 2), but I don't think either of these preclude the kind of quantification (criteria 4) described in this paper.\nYou can measure how an image before/after DeepDream changes class probablility.\n\n\nI'd also like to reiterate some concerns, mainly about missing details:\n\n* Relative vs absolute importance and user choice: Section 4.2 claims that figure 5 shows that a CAV \"captures an important aspect of the prediction.\" I would be a bit more careful about the distinction between relative and absolute here. If red makes images more probably fire trucks then it doesn't necessarily mean that red is important for the fire truck concept in an absolute sense. Can we be sure that there aren't other concepts which more dramatically affect outputs? What if a user makes a mistake and only requests explanations with respect to concepts that are irrelevant to the class being explained? Do we need to instruct users on how to best interpret the explanation?\n\n* TCAV measures importance/influence as an average over a dataset. This is a validation set, right? For how many of these images are both the user concept and target concept unrelated to the image content (e.g., stripes and zebra for an image of a truck)? When that happens is it reasonable to expect meaningful explanations? They may not be meaningful because the data distribution used to train the CAV probably does not even sparsely cover all concepts in the network's train set. (related to \"reference points\" in \"The (Un)reliability of Saliency Methods\" submitted to ICLR18)\n\n* For relative importance testing it would be nice to see a note about the step size selection (1.0) and experiments that show the effect of different step sizes. Hopefully influence is monotonic in step size so that different step sizes do not significantly change the results.\n\n* How large is the typical difference between p_k(y) and p_k(y_w) in eq. 1? If this difference is small then is it meaningful? 
Are small differences signal or noise?", "Many thanks to the reviewers for their thoughtful and helpful comments. We are glad that the reviewers clearly saw the potential for this work on interpreting NNs. We have uploaded a new version of the paper which contains significant changes from the original version (please see the revision). With the help of the reviewers comments we have significantly improved the presentation and added a number of clarifying details throughout the paper. \n\nWe also added the details of how we apply hypothesis testing in order to obtain quantitative explanations. These details were not included in our earlier version. This addresses several of the reviewers' comments on the lack of evidence on testability. Given samples of class images (e.g., zebra pictures) and two concept vectors A and B, we perform two-tailed z-testing to invalidate the null hypothesis that there is no difference in importance of concepts A and B for the class. We perform this testing for each pair of concepts.\n\nWe address common concerns in this thread. We also individually addressed the comments from two reviewers in individual threads.\n\nReviewer Comment: “The technical novelty of the proposed approach is fairly minimal. The experiments also do not support a lot of novelty claims made about the proposed approach.“\nAuthor’s response: Could you please provide more information regarding the lack of novelty? In the related works section we detail why previous methods do not meet all 4 desiderata that we describe in the introduction. Do you have a related work in mind that meets all of these desiderata? We view the simplicity as a strength of the work and not a weakness. We believe this work should be judged based on the novelty of the desiderata it satisfies over previous approaches and not by the technical complexity of the method. \n\n", "Reviewer Comment: “...But this paper provides no such metric. The explanations in figure 5 are not presented as good or bad according to any metric.”\n\nAuthor’s response: This is a great point and we will clarify the metric that we provide and experiments we conduct in order to detail the advantages the TCAV method has over saliency methods in terms of quantification.\n\nTCAV is quantitative and directly ties the quantification to the explanation given in a testable manner. For example, if I_w+^up is higher for red than yellow in relation to fire engine then the explanation is that the concept of red is more important to the classification of firetruck than the concept of yellow. The relative differences in the magnitudes of I_w+^up for red and yellow is thus the measure of the relative importances of the concepts. This is made more precise by testing against the null hypothesis that no color is significant. To test this hypothesis we can do two tailed z-testing on the measured importance values, and ask the question what is the probability that random concept vectors would observe the measured difference. The p-value of this test is thus the metric we use. \n\n\nBy contrast, while saliency does produce a quantitative measure of “importance” for each pixel in the image, the user still needs to figure out how to use these quantities in order to interpret the network. Simply looking at the saliency map is a qualitative explanation, and prior work applying saliency methods to image datasets has used qualitative comparisons as part of the evaluation of their methods. 
For example [1, figure 2], is a qualitative comparison of the saliency maps produced by two different methods and the authors note that their method is better at identifying distinctive features in the image. [1, figure 3] also provides a qualitative explanation of which areas of the image are most important for classification. Although this qualitative explanation of the most important regions ultimately results from quantitative pixel intensities, it is unclear how the exact pixel intensities relate to the measure of “importance”. For example the authors in discussing figure 3 note that the saliency highlights the boundary of the area of interest with large positive values and the interior with large negative values. The authors interpret this as meaning the network focuses on the boundary of these regions and not the interior. However, it is still unclear how relative pixel intensities relate to importance. If one area of the image is brighter than the other, does this imply greater importance? How should brightness of a region be quantified? Should a bounding box be drawn and the mean saliency value in the region be computed? \n\n\n[1] - “Axiomatic Attribution for Deep Networks”, Sundararajan et. al. \n\nReviewer Comment:“How practical is this method? Is it a significant burden for users to provide 100-500 images per concept? Are the top 100 or so images from a search engine good enough to specify a CAV?”\n\nAuthor’s response: We acknowledge that this method does require some effort from the users to curate a dataset.\nFor example, we believe taking the top 100 images from a search engine with some manual curation to remove irrelevant results is sufficient. In fact, the “arms” concept presented in the paper is curated in that way, using only 33 images. We believe this extra effort is well worth the customizability that the TCAV method provides. In our experience, end-users always have different concerns/hypothesis in mind that they are eager to test based on their domain expertise. If the end-user does not have a particular hypothesis in mind, a simple way to start is to collect a set of data points with the same feature (e.g., in case of categorical data, collect all data points that has the same feature values for a set of features of interest), and conduct hypothesis testing on each set as a concept. \n\nReviewer Comment:“The second use of CAVs, to test relative importance of concepts, is basically an improved saliency method. It's advantages over other saliency methods are stated clearly in 2.1, but it should not be portrayed as fundamentally different.”\n\nAuthor’s response: Thank you for pointing out the relationship between saliency methods and this work, it is an important connection to make and we will rewrite 2.1 to make this connection clear. Also thank you for the link to directional derivatives, we will also specify this connection and the updated paper will contain experiments which computes I_w^up formally as a directional derivative. \n\n\nReviewer Comment:“Maybe consider some evaluation options from section 3 of Doshi-Velez and Kim 2017 (cited in the paper).”\n\nAuthor’s response: Great point, we plan to add a small scale study in the final version of the paper. 
\n\n", "Reviewer Comment:“1) analyzing the relationship between concepts vectors and their effect of class probability need some minimal domain knowledge, therefore this claim should be mitigated”\n\nAuthor’s response: We believe that any users using TCAV will have some representative domain knowledge about the data they are working with; we did not intend to indicate otherwise. However, explanations should preferably not require any understanding of the decision-procedure mechanisms or Machine Learning. As our experiments show, TCAV can meet this bar by highlighting relationships and correlations of user-defined concepts to the model’s prediction.\n\n(To be clear: We think if the user does not even have minimal domain knowledge, any explanation method would fail. For example, if someone is trying to explain how a biokinematics decision procedure works, the audience has to understand the basics of biokinematics to assess the explanatory data being presented, and how it is correlated to decisions.)\n\nReviewer Comment:“ 2) Although some experiment demonstrates the relationship between different colors or properties of the object wearing a bikini can shed a light in fairness of the model, it is still unclear that how this approach can indicates the biases of training data that is learned in the model. In case of limited train data, the model is incapable of generalize well in capturing the relationship between all general concepts that does not exist in the training data. Therefore, a more rigorous analysis is required.”\n\nAuthor’s response: Could you please articulate? It seems to us that this comment indicates that the results in this paper does shed a light in exposing biases, but you have particular analysis in mind that would further strengthen the paper. We would love to hear it. \n\nReviewer Comment:“3) compare to deepdream that involved an optimization step to find the image maximizing a neuron activation, this is correct. However, guided back propagation or grad-cam method also does not need any retraining or model tweaking.”\n\nAuthor’s response: We mention in section 2.1 that the saliency map methods do satisfy criteria 3). In fact, deep dream also satisfies 3), as noted in our section 2.2; the deep dream technique does not require retraining the model. The argued advantage of TCAV is that it simultaneously satisfies all 4 desiderata whereas previous techniques do not. \n", "As we reference this work (ND paper) in our paper (Section 3.2), we also note that this work is a great step towards understanding NNs. Despite the similarities in searching for human-relatable concepts in layers, the two works are not directly comparable. The ND paper focuses on identifying individual units that detect a concept, while this work focuses on a direction in the entire layer that represents a concept. Finding directions in a layer is strictly more general than finding individual neurons as the learned CAV could in general be sparse. The ND paper improves the scientific understanding of CNNs (i.e., the relationship between interpretability and discriminative power in the paper’s Figure 4), whereas the goal of this work is to offer quantitative explanations of the relative importance of each concept via z-testing. \n\n", "It seems that the proposed approach is similar to recent Network Dissection paper by David Bau et al. presented at CVPR 2017. Did authors think about comparison?" ]
[ 4, 4, 5, 3, -1, -1, -1, -1, -1, -1 ]
[ 4, 3, 2, 5, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_S1viikbCW", "iclr_2018_S1viikbCW", "iclr_2018_S1viikbCW", "iclr_2018_S1viikbCW", "SyqwWceMM", "iclr_2018_S1viikbCW", "r1C4WcxzM", "BykC2IA-G", "H18MkJAWf", "iclr_2018_S1viikbCW" ]
iclr_2018_ryZ3KCy0W
Link Weight Prediction with Node Embeddings
Application of deep learning has been successful in various domains such as image recognition, speech recognition and natural language processing. However, the research on its application in graph mining is still in an early stage. Here we present the first generic deep learning approach to the graph link weight prediction problem based on node embeddings. We evaluate this approach with three different node embedding techniques experimentally and compare its performance with two state-of-the-art non deep learning baseline approaches. Our experiment results suggest that this deep learning approach outperforms the baselines by up to 70% depending on the dataset and embedding technique applied. This approach shows that deep learning can be successfully applied to link weight prediction to improve prediction accuracy.
rejected-papers
This paper does not meet the acceptance bar this year, and thus I must recommend it for rejection.
train
[ "SJFHMtueG", "H1Q3f2_ef", "BJ-Rv1neG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Although this paper aims at an interesting and important task, the reviewer does not feel it is ready to be published.\nBelow are some detailed comments:\n\nPros\n- Numerous public datasets are used for the experiments\n- Good introductions for some of the existing methods.\nCons\n- The novelty is limited. The basic idea of the proposed method is to simply concatenate the embeddings of two nodes (via activation separately) from both side of edges, which is straightforward and produces only marginal improvement over existing methods (the comparison of Figure 1 and Figure 3 would suggest this fact). The optimization algorithm is not novel either.\n- Lack of detailed description and analysis for the proposed model S. In Section 5.2, only brief descriptions are given for the proposed approach.\n- The selected baseline methods are too weak as competitors, some important relevant methods are also missing in the comparisons. For the graph embedding learning task, one of the state-of-the-art approach is conducting Graph Convolutional Networks (GCNs), and GCNs seem to be able to tackle this problem as well. Moreover, the target task of this paper is mathematically identical to the rating prediction problem (if we treat the weight matrix of the graph as the rating matrix, and consider the nodes as users, for example), which can be loved by a classic collaborative filtering solution such as matrix factorization. The authors probably need to survey and compared against the proposed approach.", "The authors propose to use pretrained node embeddings in a deep learning model for link weight prediction in graphs. \nThe embedding of the source node and the destination node are concatenated and fed into a fully connected neural network which produces the link weight as its output.\nExisting work by Hou and Holder 2017 trains the same architecture, but the node embeddings are learned together with the weights of the neural network. In my professional opinion, the idea of using pretrained node embeddings and training only the neural network is not enough of a contribution.\n\nSince the proposed method does not build on the SBM or pWSBM the detailed equations on page 2 are not necessary. Also, Figure 1, 2, and 3 are not necessary. Fully connected neural networks are widely used and can be explained briefly without drawing the architecture. \n\nPros:\n+ interesting problem\n+ future work. evaluation of embeddings is indeed a hard problem worth solving.\n\nCons:\n- not novel", "The paper presents a generic approach to graph link weight prediction problems based on node enbeddings. After introducing several existing methods, the paper proposes a \"generic\" link weight prediction approach that uses the node embedding produced by any node embedding techniques. Six datasets are used for evaluation. \n\nOverall, the difference to the existing method [1] is minor. I don't think there is much novelty in the \"generic\" approach. More essential abstraction and comprehensive analysis is needed for a strong ICLR paper. \n\n[1] Yuchen Hou and Lawrence B Holder. Deep learning approach to link weight prediction. In Neural\nNetworks (IJCNN), 2017 International Joint Conference on, pp. 1855–1862. IEEE, 2017.\n\n" ]
[ 3, 3, 4 ]
[ 4, 5, 3 ]
[ "iclr_2018_ryZ3KCy0W", "iclr_2018_ryZ3KCy0W", "iclr_2018_ryZ3KCy0W" ]
iclr_2018_BJhxcGZCW
Generative Discovery of Relational Medical Entity Pairs
Online healthcare services can provide the general public with ubiquitous access to medical knowledge and reduce the information access cost for both individuals and societies. To promote these benefits, it is desired to effectively expand the scale of high-quality yet novel relational medical entity pairs that embody rich medical knowledge in a structured form. To fulfill this goal, we introduce a generative model called Conditional Relationship Variational Autoencoder (CRVAE), which can discover meaningful and novel relational medical entity pairs without the requirement of additional external knowledge. Rather than discriminatively identifying the relationship between two given medical entities in a free-text corpus, we directly model and understand medical relationships from diversely expressed medical entity pairs. The proposed model introduces the generative modeling capacity of variational autoencoder to entity pairs, and has the ability to discover new relational medical entity pairs solely based on the existing entity pairs. Beside entity pairs, relationship-enhanced entity representations are obtained as another appealing benefit of the proposed method. Both quantitative and qualitative evaluations on real-world medical datasets demonstrate the effectiveness of the proposed method in generating relational medical entity pairs that are meaningful and novel.
rejected-papers
The authors seem to miss important related literature for their comparison. They also tuned hyperparameters and tested on the same validation set; they should have split the data into separate train/validation/test sets. The review scores are simply too low across the board to accept.
train
[ "Bym8Y7aXf", "r1hITk7ez", "H1_4279lf", "SJyOXNclf", "S1Gt1m6mM", "HJ-9qfp7M" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author" ]
[ "Thanks for your review. \n\n1.\tThe medical entity pairs generated by proposed model can be used to expand an existing knowledge graph with new entities as vertexes and relations as edges in a generative fashion. However, the KB completion task and the proposed entity pair discovery task share different objectives, and adopt totally different approaches:\n a)\tIn the medical domain, it is difficult to obtain a full spectrum of free-text where all kinds of relational medical entity pairs are co-occurred. It is efficient to learn the intrinsic medical relations from existing entity pairs directly and generate unseen entity pairs in a generative fashion. Although both tasks provide additional entity pairs as the output, we approach this problem from a novel, generative perspective that significantly lowers the data requirements during training. Table 3 shows that our model works well even when all the training entity pairs have the same relationship. This can not be achieved by discriminatively trained KB completion methods. KB completion methods like Trans-E relies on entity pairs having different relations and learns to distinguish one from another; otherwise negative entity pairs with no semantics meanings are used.\n b)\tThe generative discovery model is supposed to only generate rational entity pairs. Moreover, it is shown to have the ability to generate entity pairs having a pre-assigned relationship type, aka conditional inference, without the requirement of further domain knowledge. In the KB completion task, the rational entity pairs cannot be even obtained when there is no high-quality test set that contains entity pairs having that relationship. Otherwise, additional expert knowledge may be involved (e.g. to make sure that there exists a sentence that mentions two new entities having a certain relationship). Even then, the KB completion model needs to successfully classify the relationship for each test sample. The proposed model makes the conditional inference possible and efficient.\n c)\tLast but not least, it is unfair to simply evaluate the rational entity pairs generated by the proposed model against a discriminatively trained KB completion model that learns to tell the rational relation from other relations (or simply from a negative relation) when candidate entity pairs are already given for evaluation. We genuinely believe that it is way more challenging to understand what an apple is in order to create a new apple with a different look, than simply trained to distinguish an apple from a banana. \n\n2.\tIn relation extraction methods where the objective is to detect whether or not a certain relation exists in a sentence, some words in the sentence serve as indicators. For example, for the \"born in\" relationship between a person and a place, words like \"born\", \"from\" are crucial. In the medical domain, free-text that contain a full spectrum of sentences that cover all medical entity pairs are hard to obtain, let alone domain-specific indicator words that are available to use. Without such text data as additional contexts, the proposed model is still able to generate novel entity pairs, which we consider as a major contribution.\n\n3.\tFor our generative approach, \"nearest neighbor search\" is only performed as the last step of the decoder during evaluation to get natural language entities from the generated embeddings. Such operation is only performed on the generated rational entity pairs: it is not required at all during the training process. 
In many classic \"discriminatively-trained\" KB completion models, such search is usually used to trim candidate entity pairs that are not worth evaluating.\n\n4.\tThe medical dataset has unique properties that other datasets do not have, which make it suitable for our generative entity pair discovery task. \n\ta)\tFirst, the medical entity pairs contain clear and unambiguous relational semantics. This allows the model to directly encode two entities into the latent space without incorporating free-text contexts in which two medical entities are co-occurred. For example, the entity pairs <urethritis, urethra itching> and <radial nerve palsy, upper extremity weakness> can be used to learn the medical relationship from a disease to a symptom which it may cause. On the contrary, the entity pair <Obama, USA> in datasets, such as FB15K-237, possesses multiple relationships such as \"born in\", \"president of\", and \"live in\".\n\tb)\tSecond, different medical relationships used in this work are closely correlated with each other. For example, disease->disease, disease->symptom and symptom->symptom relationships share common entities, which is not frequently observed in other datasets. The proposed method is able to benefit from such property when solely learning from entity pairs. As shown in Table 3, quality and novelty are consistently improved when multiple correlated medical relationships are trained together, other than trained separately.", "SUMMARY.\n\nThe paper presents a variational autoencoder for generating entity pairs given a relation in a medical setting.\nThe model strictly follows the standard VAE architecture with an encoder that takes as input an entity pair and a relation between the entities.\nThe encoder maps the input to a probabilistic latent space.\nThe latent variables plus a one-hot-encoding representation of the relation is used to reconstruct the input entities.\nFinally, a generator is used to generate entity pairs give a relation.\n\n----------\n\nOVERALL JUDGMENT\nThe paper presents a clever use of VAEs for generating entity pairs conditioning on relations.\nMy main concern about the paper is that it seems that the authors have tuned the hyperparameters and tested on the same validation set.\nIf this is the case, all the analysis and results obtained are almost meaningless.\nI suggest the authors make clear if they used the split training, validation, test.\nUntil then it is not possible to draw any conclusion from this work.\n\nAssuming the experimental setting is correct, it is not clear to me the reason of having the representation of r (one-hot-vector of the relation) also in the decoding/generation part.\nThe hidden representation obtained by the encoder should already capture information about the relation.\nIs there a specific reason for doing so?\n\n", "The authors suggest using a variational autoencoder to infer binary relationships between medical entities. The model is quite simple and intuitive and the authors demonstrate that it can generate meaningful relationships between pairs of entities that were not observed before. \nWhile the paper is very well-written I have certain concerns regarding the motivation, model, and evaluation methodology followed:\n\n1) A stronger motivation for this model is required. Having a generative model for causal relationships between symptoms and diseases is \"intriguing\" yet I am really struggling with the motivation of getting such a model from word co-occurences in a medical corpus. 
I can totally buy the use of the proposed model as means to generate additional training data for a discriminative model used for information extraction but the authors need to do a better job at explaining the downstream applications of their model. \n\n2) The word embeddings used seem to be sufficient to capture the \"knowledge\" included in the corpus. An ablation study of the impact of word embeddings on this model is required. \n\n3) The authors do not describe how the data from xywy.com were annotated. Were they annotated by experts in the medical domain or random users?\n\n4) The metric of quality is particularly ad-hoc. Meaningful relationships in a medical domain and evaluation using random amazon mechanical turk workers do not seem to go well together. \n\n5) How does the proposed methods compare against a simple trained extractor? For instance one can automatically extract several linguistic features of the sentences two known related entities appeared with and learn how to extract data. The authors need to compare against such baselines or justify why they cannot be used.\n", "In the medical context, this paper describes the classic problem of \"knowledge base completion\" from structured data only (no text). The authors argue for the advantages of a generative VAE approach (but without being convincing). They do not cite the extensive literature on KB completion. They present experimental results on their own data set, evaluating only against simpler baselines of their own VAE approach, not the pre-existing KB methods.\n\nThe authors seem unaware of a large literature on \"knowledge base completion.\" E.g. [Bordes, Weston, Collobert, Bengio, AAAI, 2011], [Socher et al 2013 NIPS], [Wang, Wang, Guo 2015 IJCAI], [Gardner, Mitchell 2015 EMNLP], [Lin, Liu, Sun, Liu, Zhu AAAI 2015], [Neelakantan, Roth, McCallum 2015], \n\nThe paper claims that operating on pre-structured data only (without using text) is an advantage. I don't find the argument convincing. There are many methods that can operate on pre-structured data only, but also have the ability to incorporate text data when available, e.g. \"universal schema\" [Riedel et al, 2014].\n\nThe paper claims that \"discriminative approaches\" need to iterate over all possible entity pairs to make predictions. In their generative approach they say they find outputs by \"nearest neighbor search.\" But the same efficient search is possible in many of the classic \"discriminatively-trained\" KB completion models also.\n\nIt is admirable that the authors use an interesting (and to my knowledge novel) data set. But the method should also be evaluated on multiple now-standard data sets, such as FB15K-237 or NELL-995. The method is evaluated only against their own VAE-based alternatives. It should be evaluated against multiple other standard KB completion methods from the literature, such as Jason Weston's Trans-E, Richard Socher's Tensor Neural Nets, and Neelakantan's RNNs.\n", "Thanks a lot for your review. \n\n1.\tIn the medical domain, it is difficult to obtain a full spectrum of free-text in which all the relational medical entity pairs are co-occurred so that they can be further extracted in a discriminative fashion. The proposed generative method significantly lowers the data requirement for rational, novel medical entity pair discovery. It learns the intrinsic medical relations directly from the existing entity pairs without incorporating additional medical corpus in which two entities are co-occurred. 
As indicated in the review, the newly discovered entity pairs are definitely helpful in many ways: an intuitive downstream application is to provide more training samples for supervised learning models. Clustering could also benefit from the newly discovered entity pairs as a form of oversampling technique. \n\n2.\tWe agree that the word embedding captures medical knowledge and embodies rich semantic information from the diversely expressed entity pairs. However, the word embedding cannot be removed for ablation study. It does not only build the backbone for entity pair representations and accelerate the model convergence, more importantly, the pre-trained word embeddings are necessary when decoding the generated word embeddings of the entity pairs into natural language entities. Without the word embedding, evaluation cannot be performed as we only obtain the generated embeddings, not entity pairs that are interpretable in the natural language for human annotation. Furthermore, the vocabulary of pre-trained word embedding is way larger than the number of unique entities in the labeled entity pairs. Using the word embedding may allow the model to decode unseen entities that exist in the vocabulary, but not in the training data.\n\n3.\tThe relational medical entity pairs obtained from xywy.com are annotated manually by domain-experts. \n\n4.\tThe generated relational medical entity pairs are evaluated both qualitatively and quantitatively. As far as we know, there is no existing quantitative metric for quality evaluation of the generated medical entity pairs. Therefore, human quality evaluation is conducted by Amazon Mechanical Turk workers. Instructions and requirements for workers are shown in Appendix C.\n\n5.\tThe discriminative relation extraction from free-text and the generative entity pair discovery are two different tasks. The extractor is not explicitly evaluated in this work for the following reasons:\n a.\tDifferent training schema: the traditional extractor is trained discriminatively. It relies on the difference between entity pairs of different relationships and learns a decision boundary to distinguish one relation from another. The extractor fails to work in the case where all the training entity pairs belong to the same medical relation. Our generative setting solely learns from the existing entity pairs, no matter they belong to the same relationship or not. As shown in Table 3, our generative model works well when trained with entity pairs that all belong to the same relation, and works even better when entity pairs with different relations are trained together.\n b.\tDifferent testing schema: a large number of candidate entity pairs need to be provided and evaluated by the extractor in order to get the final, rational entity pairs. The choice of candidates sometimes involves additional expert knowledge; otherwise, any pairwise entities need to be fed to and tested by the extractor model. Our generative model learns to only generate rational medical entity pairs just given the type of relationship. When testing, we genuinely believe that it is way more challenging to understand what an apple is in order to create a new apple with a different look, than simply trained to discriminate an apple from a banana. Thus it is unfair to simply compare their results. \n c.\tUse of data: Our model does not need external documents in both training/testing phase. It only requires labeled data and pre-trained word embeddings. 
The extractor suffers from the data sparsity problem during training: it is hard to obtain a full spectrum of documents where two medical entity pairs are not only mentioned simultaneously in a single sentence but also pertain a specific medical relationship in that sentence. Also, the extractor relies on keywords or indicators in a single sentence to determine the existence of a certain relation, which is not required by our model. \n", "Thanks a lot for your review.\n\n1.\tThe testing is not conducted on the validation set. The validation set is only used for hyperparameter tuning. We split the labeled entity pairs into training (70%) and validation (30%) set (described in the first paragraph of Section 3.1). As described in Appendix B, a hyperparameter analysis is conducted to show the validation losses when the model is trained with a wide range of hyperparameter settings, where the hyperparameter setting with the lowest validation loss is adopted.\nDuring testing, the proposed CRVAE model is able to generate unseen, meaningful entity pairs for a given medical relationship. The generator of the proposed model samples from the latent space according to the relationship of new entity pairs we want to obtain and then decodes the sampled vector, along with the relationship indicator, into entity pairs that are evaluated separately without the use of the validation set. The quantitative evaluation results are shown in Table 2 and Table 3, where three measurements are used: quality, support, and novelty. For qualitative evaluation, additional case studies and visualizations are provided in Section 3.4.2-3.4.4.\n\n2.\tWe want to have a more controllable generation process in terms of which relationship of entity pairs we want to generate. The representation of r in the generation part enables the conditional inference: it guides the model to generate entity pairs having a certain relationship (instead of using a random noise to generate entity pairs having arbitrary relationships), which is one of our key contributions. As shown in Figure 2 in Section 2.4, the representation of r is fed to the generator in two stages: 1) when generating the latent vector $\\hat z$ from the latent space 2) when decoding the sampled vector $\\hat z$.\n\n Another reason for introducing the representation of r into the generation process is that the latent space itself does not capture clear enough information without the use of the representation of r. We’ve introduced a baseline model RVAE (without incorporating r) and illustrated our observations in Figure 4. We color the labeled validation samples in the latent space, from which we can find that the baseline model RVAE (without incorporating r) is able to map entity pairs with different relationships vaguely into different regions in the latent space. However, since the label r is not used in RVAE, it is still hard to draw a clear enough boundary for each relationship, so as to sample accordingly and generate entity pairs having that relationship. This motivates us to incorporate the representation of r into the generation process. As shown in the right part of Figure 4, when r is given to the generator, the categorical information it provides naturally allows the generator to sample differently when the relationship varies. 
For example, if we want to generate entity pairs with symptom->disease relation, we will feed both the one-hot vector r indicating the symptom->disease relationship, as well as a latent value $\\hat z$ sampled from the latent space that is conditioned on the symptom->disease relation, to the generator in order to get entity pairs having the symptom->disease relationship.\n" ]
[ -1, 4, 4, 2, -1, -1 ]
[ -1, 3, 4, 5, -1, -1 ]
[ "SJyOXNclf", "iclr_2018_BJhxcGZCW", "iclr_2018_BJhxcGZCW", "iclr_2018_BJhxcGZCW", "H1_4279lf", "r1hITk7ez" ]
iclr_2018_ryA-jdlA-
A closer look at the word analogy problem
Although word analogy problems have become a standard tool for evaluating word vectors, little is known about why word vectors are so good at solving these problems. In this paper, I attempt to further our understanding of the subject, by developing a simple, but highly accurate generative approach to solve the word analogy problem for the case when all terms involved in the problem are nouns. My results demonstrate the ambiguities associated with learning the relationship between a word pair, and the role of the training dataset in determining the relationship which gets most highlighted. Furthermore, my results show that the ability of a model to accurately solve the word analogy problem may not be indicative of a model’s ability to learn the relationship between a word pair the way a human does.
rejected-papers
This paper does not meet the acceptance bar this year, and thus I must recommend it for rejection.
train
[ "Hkkq0dDlM", "B1oFM1FeG", "ByWUtfoef" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a new method for solving the analogy task, which can potentially provide some insight as to why word2vec recovers word analogies.\n\nIn my view, there are three main issues with this paper: (1) the assumptions it makes about our understanding of the analogy phenomenon; (2) the authors' understanding of the proposed method, what it models, and its relation to prior art; (3) the very selective subset of analogies that the author used for evaluation.\n\n\nASSUMPTIONS\nThe author assumes that there the community does not understand why word embedding methods such as word2vec recover analogies. I believe that, in fact, we do have a good understanding of this phenomena. Levy & Goldberg [1] showed that optimizing x for\nargmax(cos(x, A - B + C))\nis equivalent to optimizing x for\nargmax(cos(x, A) - cos(x, B) + cos(x, C))\nwhich means that one can interpret this objective as searching for a word x that is similar to A, similar to C, and dis-similar to B. Linzen [2] cemented this explanation by removing the negative term (-cos(x, B)) and showing that for a wide variety of analogies, the method still works. Drozd et al [3] and Rogers et al [4] also argue that the original datasets used by Mikolov et al were too easy because they focused on encyclopedic facts, and expand these datasets to other non-encyclopedic relations, which are significantly more difficult to solve using simple vector arithmetic.\n\nIn other words, we know why word2vec is able to solve analogies via vector arithmetic: because many analogies (like those in Mikolov et al's original dataset) are \"gameable\", and can be solved by finding a term that is similar to A and similar to C at the same time. For example, if A=\"woman\" and C=\"king\", then x=\"queen\" fits the bill.\n\n\nMETHOD\nFrom what I can understand, the proposed method models the 3-way co-occurrence between A, B, and a context noun (let's call it R). Leveraging the distribution of (X, R, Y) for solving problems in lexical semantics has been studied quite a bit in the past, e.g. Latent Relational Analysis [5] and even Hearst patterns. I think the current description overlooks this major deviation from word2vec and other distributional methods, which only model the 2-way co-occurrence (X, R). This is a much more profound difference than just filtering non-nouns. I think the proposed method should be redescribed in these terms, and compared to other work that modeled 3-way co-occurrences.\n\n\nEVALUATION DATA\nThe author evaluates their method on a subset of the original analogy task, which is very limited. I would like to see an evaluation on (A) the original two datasets of Mikolov et al (without non-nouns), and (B) the larger datasets provided by Drozd et al [3] and Rogers et al [4].\n\nIn addition, since I think the analogy phenomenon is well understood, I would like to see some demonstration that this method has added value beyond the analogy benchmark.\n\n\nMISCELLANEOUS COMMENTS\n* The author does not state the important fact that when searching for the closest x to A - B + C, the search omits A, B, and C. It is often the case that the result is A or C without this omission.\n* The paper is partially de-anonymized (e.g. links and acknowledgements).\n* One of the problems with modeling 3-way co-occurrences (as opposed to 2-way co-occurrences) is that they are much sparser. 
I think this is a more precise explanation for why the currency relation is particularly hard to capture with this method.\n\n[1] http://www.aclweb.org/anthology/W14-1618 \n[2] http://anthology.aclweb.org/W16-2503 \n[3] http://aclweb.org/anthology/C/C16/C16-1332.pdf \n[4] http://www.aclweb.org/anthology/S17-1017 \n[5] https://arxiv.org/pdf/cs/0508053.pdf \n", "This paper presents, and analyzes, a method for learning word relationships based on co-occurrence. In the method, relationships between pairs of words (A, B) are represented by the terms that tend to occur around co-mentions of A and B in text. The paper shows the start of some interesting ideas, but needs revisions and much more extensive experiments.\n\nOn the plus side, the method proposed here does perform relatively well (Table 1) and probably merits further investigation. The experiments in Table 1 can only be considered preliminary, however. They only evaluate over a small number of relationships (three) -- looking at 20 or so different relationships would greatly improve confidence in the conclusions.\n\nBeyond Table 1 the paper makes a number of claims that are not supported or weakly supported (the paper uses only a handful of examples as evidence). An attempt to explain what Word2Vec is doing should be made with careful experiments over many relations and hundreds of examples, whereas this paper presents only a handful of examples for most of its claims. Further, whether the behavior of the proposed algorithm actually reflects what word2vec is doing is left as a significant open question.\n\nI appreciate the clarity of Assumption 1 and Proposition 1, but ultimately this formalism is not used, and because Assumption 1 about which nouns are \"semantically related\" to which other nouns attempts to trivialize a complex notion (semantics) and is clearly way too strong, the paper would be better off without it. Also Assumption 1 does not actually claim what the text says it claims (the text says words outside the window are *not* semantically related, but the assumption does not actually say this) and furthermore is soon discarded and only the frequency of noun occurrences around co-mentions is used. I think the description of the algorithm could be retained without including Assumption 1.\n\nminor:\n\nReferences to numbered algorithms or assumptions should be capitalized in the text.\n\nwhat the introduction means about the \"dynamics\" of the vector equation is a little unclear\n\nA submission shouldn't have acknowledgments, and in particular with names that undermine anonymity\n\nMLE has a particular technical meaning that is not utilized here; I would just refer to the most frequent words as \"most related nouns\" or similar\n\nIn Table 1, are the \"same dataset\" results with w2v for the nouns-only corpus, or with all the other words?\n\nThe argument made assuming a perfect Zipf distribution (with exponent equal to one) should be made with data.\n\nwill likely by observed -> will likely be observed\n\nlions:dolphins probably ends up that way because of \"sea lions\"\n\nTable 4 caption: frequencies -> currencies\n\nTable 2 -- claim is that improvements from k=10 to k=20 are 'nominal' but they look non-negligible to me\n\nI did not understand how POS lying in the same subspace means that Vec(D) has to be in the span of Vecs A-C.", "This paper proposes a method to solve the 'word analogy problem', which was proposed as a way of understanding and evaluating word embeddings by Mikolov et al. 
There are some nice analyses in the paper which, if better organised, could lead to an improved understanding of semantic word spaces in neural nets. \n\ncomments: \n\nThe word analogy task was developed as an interesting way to analyse and understand word embedding spaces, but the motivation for learning word embeddings was as general-purpose representations for language processing tasks (as in collobert et al, 2011), not as a way of resolving analogy questions. The authors develop a specialist method for resolving analogies, and it works (mostly) better than using the additive geometry of word-embedding spaces. But I don't think that comparison is 'fair' - the analogy thing is just a side-effect of word embedding spaces. \n\nGiven that the authors focus on the word-analogy problem as an end in itself, I think there should be much more justification of why this is a useful problem to solve. Analogy seems to be fundamental to human cognition and reasoning, so maybe that is part of the reason, but it's not clear to the reader. \n\nThe algorithm seems to be simple and intuitive, but the presentation is overly formal and unclear. It would be much easier for the reader to simply put into plain terms what the algorithm does.\n\nUsing a POS-tagger to strip out nouns is a form of supervision (the pos-tagger was trained on labelled data) that word-embedding methods do not use, which should at least be acknowledged when making a comparison. Similarly, it is nice that the present method works on less data, but the beauty of word embeddings is that they can be trained on any text - i.e. data is not a problem, and 'work' for any word type. Stripping away everything but nouns clearly allows co-occurrence semantic patterns to emerge from less data, but at the cost of the supervision mentioned above. Moreover, I suspect that the use of wikipedia is important for the proposed algorithm, as the pertinent relations are often explicit in the first sentence of articles \"Paris the largest city and capital of France...\". Would the same method work on any text? I would expect this question to be explored, even if the answer is negative. \n\nThe goal of understanding word2vec and embedding spaces in general (section 5) is a really important one (as it can tell us a lot about how language and meaning is encoded in deep learning models in general), and I think that's one of the strongest aspects of this work. However, the conclusions from this section (and other related conclusions in other sections) are a little unclear to me. Perhaps that is because I don't quite get algorithm 3, which would be mitigated by an intuitive explanation to complement the pseudocode. I'm also confused by the assertion that Vec(A) - Vec(B) conveys the 'common information' in A and B. How can a non-symmetric operation convey 'common information'? Surely it conveys something about the relationship between A and B?\n\nMinor point:\n\"may not the be indicative of the model's ability to learn the relationship between a word pair the way a human does\" (Abstract)\n- I'm not sure we know how humans learn the relationships between word pairs. Are you referring to formal semantic relations i.e. in taxonomies in WordNet? This sentence seems dangerous, and the claim about humans is not really treated in the article itself. \n\nThe acknowledgements compromise the anonymity of the authors. " ]
[ 2, 3, 3 ]
[ 5, 4, 4 ]
[ "iclr_2018_ryA-jdlA-", "iclr_2018_ryA-jdlA-", "iclr_2018_ryA-jdlA-" ]