Dataset columns: id (string, 11-20 chars), paper_text (string, 29-163k chars), review (string, 666-24.3k chars).
iclr_2018_BJubPWZRW
We present Cross-View Training (CVT), a simple but effective method for deep semi-supervised learning. On labeled examples, the model is trained with standard cross-entropy loss. On an unlabeled example, the model first performs inference (acting as a "teacher") to produce soft targets. The model then learns from these soft targets (acting as a "student"). We deviate from prior work by adding multiple auxiliary student prediction layers to the model. The input to each auxiliary student layer is a sub-network of the full model that has a restricted view of the input (e.g., only seeing one region of an image). The students can learn from the teacher (the full model) because the teacher sees more of each example. Concurrently, the students improve the quality of the representations used by the teacher as they learn to make predictions with limited data. When combined with Virtual Adversarial Training, CVT improves upon the current state-of-the-art on semi-supervised CIFAR-10 and semi-supervised SVHN. We also apply CVT to train models on five natural language processing tasks using hundreds of millions of sentences of unlabeled data. On all tasks CVT substantially outperforms supervised learning alone, resulting in models that improve upon or are competitive with the current state-of-the-art.
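The teacher/student scheme described in this abstract can be summarized in a short sketch. This is an editor's illustration, not the authors' code: the assumption that `model(x)` returns a (features, logits) pair, the `view_fn` input-restriction functions, and the auxiliary `head` modules are all placeholders.

```python
import torch
import torch.nn.functional as F

def cvt_unlabeled_loss(model, aux_heads, x_unlabeled):
    """Cross-View-Training-style loss on an unlabeled batch (illustrative sketch).

    `model(x)` is assumed to return (features, logits) for the full view;
    `aux_heads` is a list of (view_fn, head) pairs, where `view_fn` restricts
    the input (e.g. crops one image region) and `head` predicts from the shared
    encoder's features on that restricted view."""
    with torch.no_grad():                       # teacher: full model, full view
        _, teacher_logits = model(x_unlabeled)
        soft_targets = F.softmax(teacher_logits, dim=-1)

    loss = 0.0
    for view_fn, head in aux_heads:             # students: restricted views
        feats, _ = model(view_fn(x_unlabeled))
        student_log_probs = F.log_softmax(head(feats), dim=-1)
        # KL(teacher || student); gradients flow into the shared encoder
        loss = loss + F.kl_div(student_log_probs, soft_targets,
                               reduction="batchmean")
    return loss / max(len(aux_heads), 1)
```

The total training objective would then be the standard cross-entropy on labeled batches plus this consistency term on unlabeled batches.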
This paper presents a so-called cross-view training method for semi-supervised deep models. Experiments were conducted on various datasets and experimental results were reported.
Pros:
* Studying semi-supervised learning techniques for deep models is of practical significance.
Cons:
* The novelty of this paper is marginal. The use of unlabeled data is in fact a self-training process. Leveraging sub-regions of the image to improve performance is not new and has been widely studied in image classification and retrieval.
* The proposed approach suffers from a technical weakness or flaw. For the self-labeled data, the prediction of each view is enforced to be the same as the assigned self-labeling. However, since each view corresponds to a sub-region of the image (especially when the model is not very deep), this region is less likely to contain a representation of the concept (e.g., a local region of an image with a horse may show only grass); enforcing the prediction of this view to match the self-assigned label (e.g., "horse") may drive the prediction away from what it should be (e.g., it will make the network predict grass as horse). Such a flaw may affect the final performance of the proposed approach.
* The word "view" in this paper is misleading. A "view" in this paper actually corresponds to a sub-region of the image.
* The experimental results indicate that the proposed approach fails to perform better than the compared baselines in Table 2, which reduces the practical significance of the proposed approach.
iclr_2018_HyiAuyb0b
Published as a conference paper at ICLR 2018 TD OR NOT TD: ANALYZING THE ROLE OF TEMPORAL DIFFERENCING IN DEEP REINFORCEMENT LEARNING Our understanding of reinforcement learning (RL) has been shaped by theoretical and empirical results that were obtained decades ago using tabular representations and linear function approximators. These results suggest that RL methods that use temporal differencing (TD) are superior to direct Monte Carlo estimation (MC). How do these results hold up in deep RL, which deals with perceptually complex environments and deep nonlinear models? In this paper, we re-examine the role of TD in modern deep RL, using specially designed environments that control for specific factors that affect performance, such as reward sparsity, reward delay, and the perceptual complexity of the task. When comparing TD with infinite-horizon MC, we are able to reproduce classic results in modern settings. Yet we also find that finite-horizon MC is not inferior to TD, even when rewards are sparse or delayed. This makes MC a viable alternative to TD in deep RL.
This paper includes several controlled empirical studies comparing MC and TD methods for value-function prediction with complex DNN function approximators. Such comparisons have been carried out, both in theory and in practice, for simple low-dimensional environments with linear (and RKHS) value-function approximation, showing how TD methods can have much better sample complexity and overall performance than pure MC methods. This paper shows some results to the contrary when applying RL to complex perceptual observation spaces. The main results include:
(1) In a rollout update, a mix of MC and TD updates (i.e., a rollout length > 1 and < horizon) outperforms either extreme. This is in line with TD-lambda analysis in previous work.
(2) Pure MC methods can outperform TD methods when the rewards become noisy.
(3) TD methods can outperform pure MC methods when the return is mostly dominated by the reward at the terminal state.
(4) MC methods tend to degrade less when the reward signal is delayed.
(5) Somewhat surprising: MC methods seem to be on par with TD methods when the reward is sparse, even when the distance to reward exceeds the rollout horizon.
(6) MC methods can outperform TD methods with more complex and high-dimensional perceptual inputs.
The authors conjecture that several of the above observations can be explained by the fact that the training target in MC methods is "ground truth" and does not rely on bootstrapping from the current estimates, as is done in a TD rollout. They suggest that training on such a signal can be beneficial when training deep models on complex perceptual input spaces. The contributions of the paper are in parts surprising and overall interesting. I believe there are far more caveats in this analysis than what is suggested in the paper, and the authors should avoid over-generalizing the results based on a few domains and the analysis of a small set of algorithms. Nonetheless I find the results interesting to the RL community and a starting point for further analysis of MC methods (or adaptations of TD methods) that work better with image observation spaces. Publishing the code, as the authors mentioned, would certainly help with that.
Notes:
- I find the description of the Q_MC method presented in the paper very confusing and had to consult the reference to understand the details. Adding a couple of equations on this would improve the readability of the paper.
- The first mention of partial observability can be moved to the introduction.
- Adding results for m=3 to Table 2 would bring further insight to the comparison.
- The results for the perceptual-complexity experiment seem contradictory and inconclusive. One would expect Q_MC to work well in the Grid Map domain if the conjecture put forth by the authors were to hold universally.
- In the study on reward sparsity, although a prediction horizon of 32 is less than the average number of steps needed to reach a rewarding state, a blind random walk might be enough to take the RL agent to a close-enough neighbourhood from which a greedy MC-based policy has a direct path to the goal. What is missing from this picture is the case when a blind walk cannot reach such a state, e.g., when a narrow corridor is present in the environment. Such a case cannot be resolved by a short-horizon MC method. In other words, a sparse-reward setting is only "difficult" if getting into a good neighbourhood requires long-term planning and cannot be resolved by a (pseudo) blind random walk.
- The extrapolation of the value-function approximator can also contribute to why the limited-horizon MC method can see beyond its horizon in a sparse-reward setting. That is, even if there is no way to reach a rewarding state in 32 steps, an MC value-function approximation with horizon 32 can extrapolate from similar-looking observed states that have a short path to a rewarding state, enough to be better than a blind random walk. It would have been nice to experiment with increasing model complexity to study such an effect.
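The MC-vs-TD distinction the review keeps returning to comes down to how the training target is built from a rollout. A minimal numpy sketch of the standard n-step target, which interpolates between the two extremes (editor's illustration; the rewards and value estimates below are toy placeholders):

```python
import numpy as np

def n_step_target(rewards, values, t, n, gamma=0.99):
    """Target for V(s_t) built from an n-step rollout.

    n = 1              -> one-step TD target: r_t + gamma * V(s_{t+1})
    n >= remaining len -> Monte Carlo target: discounted sum of rewards, no bootstrap
    Intermediate n mixes the two, which is the regime the paper finds works best.
    `values` holds the current (bootstrapped) estimates V(s_0), V(s_1), ..."""
    T = len(rewards)
    horizon = min(n, T - t)
    target = sum(gamma ** k * rewards[t + k] for k in range(horizon))
    if t + horizon < T:          # bootstrap only if the rollout was cut short
        target += gamma ** horizon * values[t + horizon]
    return target

# toy example: 5-step episode with a single terminal reward
rewards = np.array([0.0, 0.0, 0.0, 0.0, 1.0])
values = np.zeros(6)             # current value estimates, all zero here
print(n_step_target(rewards, values, t=0, n=1))   # TD(0): 0.0 (relies on bootstrap)
print(n_step_target(rewards, values, t=0, n=5))   # MC: 0.99**4, about 0.961
```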
iclr_2018_SJyVzQ-C-
Published as a conference paper at ICLR 2018 FRATERNAL DROPOUT Recurrent neural networks (RNNs) form an important class of architectures among neural networks useful for language modeling and sequential prediction. However, optimizing RNNs is known to be harder than optimizing feed-forward neural networks. A number of techniques have been proposed in the literature to address this problem. In this paper we propose a simple technique called fraternal dropout that takes advantage of dropout to achieve this goal. Specifically, we propose to train two identical copies of an RNN (that share parameters) with different dropout masks while minimizing the difference between their (pre-softmax) predictions. In this way our regularization encourages the representations of RNNs to be invariant to the dropout mask, and thus robust. We show that our regularization term is upper bounded by the expectation-linear dropout objective, which has been shown to address the gap due to the difference between the train and inference phases of dropout. We evaluate our model and achieve state-of-the-art results in sequence modeling tasks on two benchmark datasets - Penn Treebank and WikiText-2. We also show that our approach leads to performance improvement by a significant margin in image captioning (Microsoft COCO) and semi-supervised classification (CIFAR-10) tasks.
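A minimal sketch of the regularizer described above (editor's illustration, not the paper's code): `rnn_lm` is assumed to apply dropout internally and return pre-softmax logits of shape (batch, time, vocab), `targets` has shape (batch, time), and the weighting `kappa` is an assumed hyperparameter.

```python
import torch
import torch.nn.functional as F

def fraternal_dropout_loss(rnn_lm, x, targets, kappa=0.1):
    """Two forward passes through the *same* model with independent dropout
    masks; the extra term penalizes the squared difference of the pre-softmax
    predictions, encouraging invariance to the dropout mask."""
    logits1 = rnn_lm(x)          # dropout mask 1 (sampled inside the model)
    logits2 = rnn_lm(x)          # dropout mask 2 (a fresh, independent mask)

    nll = 0.5 * (F.cross_entropy(logits1.flatten(0, 1), targets.flatten())
                 + F.cross_entropy(logits2.flatten(0, 1), targets.flatten()))
    consistency = (logits1 - logits2).pow(2).mean()
    return nll + kappa * consistency
```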
The authors present Fraternal dropout as an improvement over Expectation-linear dropout (ELD) in terms of convergence and demonstrate the utility of Fraternal dropout on a number of tasks and datasets. At test time, more often than not, people apply dropout in deterministic mode, while at training time masks are sampled randomly. The paper addresses this issue by trying to reduce the gap. I have 1.5 high-level comments:
- Dropout can be applied by averaging results corresponding to randomly sampled masks ('MC eval'). This should not be ignored, and preferably included in the evaluation.
- It could be made clearer why the proposed regularization would make the aforementioned gap smaller. Intuitively, the bias of the deterministic approximation (compared to the MC eval) should also play a role. It may be worth asking whether the bias changes. One possibility is that MC and deterministic evaluations meet halfway and that with fraternal dropout the MC eval is worse than without.
Details:
- The notation is confusing: p() looks like a probability distribution, z looks like a latent variable, p^t and l^t have superscripts instead of Y having a subscript, and z^t is a function of X. Wouldn't f(X_t) be preferable to p^t(z_t)?
- The experiments are set up and executed with care, but Section 4 could be improved by providing details (as much as in Section 5). The results on PTB and Wikitext-2 are really good. However, why not compare to ELD here? Section 5 leads the reader to believe that ELD would be equally good.
- Section 5 could be the most interesting part of the paper. This is where different regularization methods are compared (by the way, this is not "ablation"). It is somewhat unfortunate that, due to lack of computational resources, the comparisons are made at a single hyperparameter setting.
All in all, the results of Section 4 are clearly good, but are they better than those of ELD? Evaluation and interpretation of the results in Section 5 is made difficult by the omission of the most informative quantity that Fraternal dropout is supposed to be approximating.
iclr_2018_HkwrqtlR-
GANs have shown how deep neural networks can be used for generative modeling, aiming to achieve the same impact that they brought to discriminative modeling. The first results were impressive: GANs were shown to be able to generate samples in high-dimensional structured spaces, like images and text, that were not copies of the training data. But generative and discriminative learning are quite different. Discriminative learning has a clear end goal, while generative modeling is an intermediate step for understanding the data or generating hypotheses. The quality of implicit density estimation is hard to evaluate, because we cannot tell how well the data are represented by the model. How can we say with certainty that a generative process is generating natural images with the same distribution as we do? In this paper, we observe that even though GANs might not be able to generate samples from the underlying distribution (or at least we cannot tell), they do capture some structure of the data in that high-dimensional space. We therefore need to address how we can leverage the estimates produced by GANs in the same way we are able to use other generative modeling algorithms.
The main take-away messages of this paper seem to be: 1. GANs don't really match the target distribution. Some previous theory supports this, and some experiments are provided here demonstrating that the failure seems to be largely in under-sampling the tails, and sometimes perhaps in introducing spurious modes. 2. Even if GANs don't exactly match the target distribution, their outputs might still be useful for some tasks. (I wouldn't be surprised if you disagree with what the main takeaways are; I found the flow of the paper somewhat disjointed, and had something of a hard time identifying what the "point" was.) Mode-dropping being a primary failure mode of GANs is already a fairly accepted hypothesis in the community (see, e.g. Mode Regularized GANs, Che et al ICLR 2017, among others), though some extra empirical evidence is provided here. The second point is, in my opinion, simultaneously (i) an important point that more GAN research should take to heart, (ii) relatively obvious, and (iii) barely explored in this paper. The only example in the paper of using a GAN for something other than directly matching the target distribution is PassGAN, and even that is barely explored beyond saying that some of the spurious modes seem like reasonable-ish passwords. Thus though this paper has some interesting aspects to it, I do not think its contributions rise to the level required for an ICLR paper. Some more specifics: Section 2.1 discusses four previous theoretical results about the convergence of GANs to the true density. This overview is mostly reasonable, and the discussion of Arora et al. (2017) and Liu et al. (2017) do at least vaguely support the conclusion in the last section of this paragraph. But this section is glaringly missing an important paper in this area: Arjovsky and Bottou (2017), cited here only in passing in the introduction, who proved that typical GAN architectures *cannot* exactly match the data distribution. Thus the question of metrics for convergence is of central importance, which it seems should be important to the topic of the present paper. (Figure 3 of Danihelka et al. https://arxiv.org/abs/1705.05263 gives a particularly vivid example of how optimizing different metrics can lead to very different results.) Presumably different metrics lead to models that are useful for different final tasks. Also, although they do not quite fit into the framing of this section, Nowozin et al.'s local convergence proof and especially the convergence to a Nash equilibrium argument of Heusel et al. (NIPS 2017, https://arxiv.org/abs/1706.08500) should probably be mentioned here. The two sample testing section of this paper, discussed in Section 2.2 and then implemented in Section 3.1.1, seems to be essentially a special case of what was previously done by Sutherland et al. (2017), except that it was run on CIFAR-10 as well. However, the bottom half of Table 1 demonstrates that something is seriously wrong with the implementation of your tests: using 1000 bootstrap samples, you should reject H_0 at approximately the nominal rate of 5%, not about 50%! To double-check, I ran a median-heuristic RBF kernel MMD myself on the MNIST test set with N_test = 100, repeating 1000 times, and rejected the null 4.8% of the time. My code is available at https://gist.github.com/anonymous/2993a16fbc28a424a0e79b1c8ff31d24 if you want to use it to help find the difference from what you did. 
Although Table 1 does indicate that the GAN distribution is more different from the test set than the test set is from itself, the apparent serious flaw in your procedure makes those results questionable. (Also, it seems that your entry labeled "MMD" in the table is probably n * MMD_b^2, which is what is computed by the code linked to in footnote 2.) The appendix gives a further study of what went wrong with the MNIST GAN model, arguing based on nearest-neighbors that the GAN model is over-representing modes and under-representing the tails. This is fairly interesting; certainly more interesting than the rehash of running MMD tests on GAN outputs, in my opinion. Minor: In 3.1.1, you say "ideally the null hypothesis H0 should never be rejected" – it should be rejected at most an alpha portion of the time. In the description of section 3.2, you should clarify whether the train-test split was done such that unique passwords were assigned to a single fold or not: did 123456 appear in both folds? (It is not entirely clear whether it should or not; both schemes have possible advantages for evaluation.)
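For reference, the kind of sanity check the reviewer describes (under H0 the test should reject at roughly the nominal 5% rate) can be written in a few lines of numpy as a median-heuristic RBF-MMD permutation test. This is an editor's sketch, not the authors' or the reviewer's exact code.

```python
import numpy as np

def rbf_mmd2(X, Y, bandwidth):
    """Biased estimate of squared MMD with an RBF kernel."""
    Z = np.concatenate([X, Y], axis=0)
    sq = np.sum(Z ** 2, axis=1)
    D2 = sq[:, None] + sq[None, :] - 2.0 * Z @ Z.T      # pairwise squared distances
    K = np.exp(-D2 / (2.0 * bandwidth ** 2))
    n = len(X)
    Kxx, Kyy, Kxy = K[:n, :n], K[n:, n:], K[:n, n:]
    return Kxx.mean() + Kyy.mean() - 2.0 * Kxy.mean()

def mmd_permutation_test(X, Y, n_perm=1000, seed=0):
    """Returns (statistic, p-value). Under H0 (same distribution) the test
    should reject at approximately the nominal level, e.g. ~5% at alpha=0.05."""
    rng = np.random.default_rng(seed)
    Z = np.concatenate([X, Y], axis=0)
    sq = np.sum(Z ** 2, axis=1)
    D2 = sq[:, None] + sq[None, :] - 2.0 * Z @ Z.T
    bandwidth = np.sqrt(np.median(D2[D2 > 0]))          # median pairwise distance
    stat = rbf_mmd2(X, Y, bandwidth)
    n = len(X)
    null = []
    for _ in range(n_perm):
        perm = rng.permutation(len(Z))
        null.append(rbf_mmd2(Z[perm[:n]], Z[perm[n:]], bandwidth))
    return stat, float(np.mean(np.array(null) >= stat))
```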
iclr_2018_r1RF3ExCb
The fundamental task of general density estimation has been of keen interest to machine learning. Recent advances in density estimation have either: a) proposed a flexible model to estimate the conditional factors of the chain rule, p(x_i | x_{i-1}, ...); or b) used flexible, non-linear transformations of variables of a simple base distribution. Instead, this work jointly leverages transformations of variables and autoregressive conditional models, and proposes novel methods for both. We provide a deeper understanding of our methods, showing a considerable improvement through a comprehensive study over both real-world and synthetic data. Moreover, we illustrate the use of our models in outlier detection and image modeling tasks.
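As a reminder of the second ingredient the abstract combines, the change-of-variables formula with a tractable Jacobian can be written in a few lines. This is an editor's sketch using a simple elementwise affine transform with a standard normal base; all names and values are illustrative.

```python
import numpy as np
from scipy.stats import norm

def log_density(x, shift, log_scale):
    """log p(x) under an elementwise affine transform to a standard normal base:
    z_i = (x_i - shift_i) * exp(-log_scale_i), with z ~ N(0, I).
    The Jacobian is diagonal, so log|det dz/dx| = -sum(log_scale)."""
    z = (x - shift) * np.exp(-log_scale)
    return norm.logpdf(z).sum() - log_scale.sum()

# In an autoregressive flow, shift_i and log_scale_i would themselves be functions
# of x_{<i} (e.g. produced by LAM/RAM-style conditioners), which keeps the Jacobian
# triangular and its log-determinant cheap to evaluate.
x = np.array([0.3, -1.2, 2.0])
print(log_density(x, shift=np.zeros(3), log_scale=np.zeros(3)))  # equals N(0, I) log-density
```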
This paper is well constructed and written. It consists of a number of broad ideas regarding density estimation using transformations of autoregressive networks. Specifically, the authors examine models involving linear maps from past states (LAM) and recurrence relationships (RAM). The critical insight is that the hidden states in the LAM are not coupled, allowing considerable flexibility between consecutive conditional distributions. This is at the expense of an increased number of parameters and a lack of information sharing. In contrast, the RAM transfers information between conditional densities via the coupled hidden states, allowing for more constrained, smooth transitions. The authors then explore a variety of transformations designed to increase the expressiveness of LAM and RAM. The authors note that one important restriction on the class of transformations is the ability to evaluate the Jacobian of the transformation efficiently. A composite of transformations coupled with the LAM/RAM networks provides a highly expressive model for modelling arbitrary joint densities while retaining interpretable conditional structure. There is a rich variety of synthetic and real data studies which demonstrate that LAM and RAM consistently rank amongst the top models, indicating potential utility for this class of models. Whilst the paper provides no definitive solutions, this is not the point of the work, which seeks to provide a description of a general class of potentially useful models.
iclr_2018_ByOExmWAb
MASKGAN: BETTER TEXT GENERATION VIA FILLING IN THE ______ Neural text generation models are often autoregressive language models or seq2seq models. These models generate text by sampling words sequentially, with each word conditioned on the previous word, and are state-of-the-art for several machine translation and summarization benchmarks. These benchmarks are often defined by validation perplexity even though this is not a direct measure of the quality of the generated text. Additionally, these models are typically trained via maximum likelihood and teacher forcing. These methods are well-suited to optimizing perplexity but can result in poor sample quality, since generating text requires conditioning on sequences of words that may have never been observed at training time. We propose to improve sample quality using Generative Adversarial Networks (GANs), which explicitly train the generator to produce high-quality samples and have shown a lot of success in image generation. GANs were originally designed to output differentiable values, so discrete language generation is challenging for them. We claim that validation perplexity alone is not indicative of the quality of text generated by a model. We introduce an actor-critic conditional GAN that fills in missing text conditioned on the surrounding context. We show, qualitatively and quantitatively, evidence that this produces more realistic conditional and unconditional text samples compared to a maximum likelihood trained model.
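A small sketch of the fill-in-the-blank setup described above (editor's illustration only: the `<m>` symbol, the mask rate, and random per-token masking are assumptions, and the paper's actual masking policy may differ, as the review below also asks).

```python
import random

MASK = "<m>"

def mask_sequence(tokens, mask_rate=0.5, rng=random.Random(0)):
    """Replace a random subset of tokens with a mask symbol.
    At training time the generator conditions on the masked sequence and must
    reproduce the missing words; at test time every position is masked, which
    turns infilling into unconditional generation."""
    return [MASK if rng.random() < mask_rate else t for t in tokens]

tokens = "the movie was surprisingly good and well acted".split()
print(mask_sequence(tokens))        # e.g. ['the', '<m>', 'was', '<m>', ...]
print(mask_sequence(tokens, 1.0))   # all blanks -> free-running text generation
```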
Generating high-quality sentences/paragraphs is an open research problem that is receiving a lot of attention. This text generation task is traditionally done using recurrent neural networks. This paper proposes to generate text using GANs. GANs are notorious for drawing images of high quality, but they have a hard time dealing with text due to its discrete nature. This paper's approach is to use an actor-critic method to train the generator of the GAN and the usual maximum likelihood with SGD to train the discriminator. The whole network is trained on the "fill-in-the-blank" task using the sequence-to-sequence architecture for both the generator and the discriminator. At training time, the generator's encoder computes a context representation from the masked sequence. This context is conditioned upon to generate the missing words. The discriminator is similar and conditions on the generator's output and the masked sequence to output the probability of a word in the generator's output being fake or real. With this approach, one can generate text at test time by setting all inputs to blanks.
Pros and positive remarks:
-- I liked the idea behind this paper. I find it nice how they benefited from context (left context and right context) by solving a "fill-in-the-blank" task at training time and translating this into text generation at test time.
-- The experiments were well carried through and very thorough.
-- I second the decision of passing the masked sequence to the generator's encoder instead of the unmasked sequence. I first thought that performance would be better when the generator's encoder uses the unmasked sequence. Passing the masked sequence is the right thing to do to avoid the mismatch between training time and test time.
Cons and negative remarks:
-- There is a lot of pre-training required for the proposed architecture. There is too much pre-training. I find this less elegant.
-- There were some unanswered questions: (1) Was pre-training done for the baseline as well? (2) How was the masking done? How did you decide on the words to mask? Was this at random? (3) It was not made very clear whether the discriminator also conditions on the unmasked sequence. It needs to, but that was not explicit in the paper.
-- Very minor: although it is similar to the generator, it would have been nice to see the architecture of the discriminator with example input and output as well.
Suggestion: for the IMDB dataset, it would be interesting to see if you generate better sentences by conditioning on the sentiment as well.
iclr_2018_HkwVAXyCW
Published as a conference paper at ICLR 2018 SKIP RNN: LEARNING TO SKIP STATE UPDATES IN RECURRENT NEURAL NETWORKS Recurrent Neural Networks (RNNs) continue to show outstanding performance in sequence modeling tasks. However, training RNNs on long sequences often faces challenges like slow inference, vanishing gradients and difficulty in capturing long-term dependencies. In backpropagation through time settings, these issues are tightly coupled with the large, sequential computational graph resulting from unfolding the RNN in time. We introduce the Skip RNN model, which extends existing RNN models by learning to skip state updates and shortens the effective size of the computational graph. This model can also be encouraged to perform fewer state updates through a budget constraint. We evaluate the proposed model on various tasks and show how it can reduce the number of required RNN updates while preserving, and sometimes even improving, the performance of the baseline RNN models. Source code is publicly available at https://imatge-upc.github.io/skiprnn-2017-telecombcn/.
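A compact sketch of the skip-update mechanism described above (editor's illustration in PyTorch, simplified from the abstract's description; the exact gating parameterization is an assumption, and the real benefit comes from conditionally skipping the cell computation, which is always executed here for clarity).

```python
import torch
import torch.nn as nn

class SkipGRUCell(nn.Module):
    """GRU cell that learns to skip state updates.

    A scalar update probability u (shape: batch x 1) is binarized with a
    straight-through estimator; when the binary gate is 0 the state is simply
    copied and the update probability keeps accumulating."""
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.cell = nn.GRUCell(input_size, hidden_size)
        self.to_delta = nn.Linear(hidden_size, 1)   # increment of the update probability

    def forward(self, x_t, h, u):
        u_bin = torch.round(u)
        u_bin = u + (u_bin - u).detach()            # straight-through: identity gradient
        h_new = self.cell(x_t, h)                   # in practice this would be skipped when u_bin == 0
        h = u_bin * h_new + (1.0 - u_bin) * h       # update or copy the state
        delta = torch.sigmoid(self.to_delta(h))
        u = u_bin * delta + (1.0 - u_bin) * torch.clamp(u + delta, max=1.0)
        return h, u
```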
UPDATE: Following the author's response I've increased my score from 5 to 6. The revised paper includes many of the additional references that I suggested, and the author response clarified my confusion over the Charades experiments; their results are indeed close to state-of-the-art on Charades activity localization (slightly outperformed by [6]), which I had mistakenly confused with activity classification (from [5]). The paper proposes the Skip RNN model which allows a recurrent network to selectively skip updating its hidden state for some inputs, leading to reduced computation at test-time. At each timestep the model emits an update probability; if this probability is over a threshold then the next input and state update will be skipped. The use of a straight-through estimator allows the model to be trained with standard backpropagation. The number of state updates that the model learns to use can be controlled with an auxiliary loss function. Experiments are performed on a variety of tasks, demonstrating that the Skip-RNN compares as well or better than baselines even when skipping nearly half its state updates. Pros: - Task of reducing computation by skipping inputs is interesting - Model is novel and interesting - Experiments on multiple tasks and datasets confirm the efficacy of the method - Skipping behavior can be controlled via an auxiliary loss term - Paper is clearly written Cons: - Missing comparison to prior work on sequential MNIST - Low performance on Charades dataset, no comparison to prior work - No comparison to prior work on IMDB Sentiment Analysis or UCF-101 activity classification The task of reducing computation by skipping RNN inputs is interesting, and the proposed method is novel, interesting, and clearly explained. Experimental results across a variety of tasks are convincing; in all tasks the Skip-RNNs achieve their goal of performing as well or better than equivalent non-skipping variants. The use of an auxiliary loss to control the number of state updates is interesting; since it sometimes improves performance it appears to have some regularizing effect on the model in addition to controlling the trade-off between speed and accuracy. However, where possible experiments should compare directly with prior published results on these tasks; none of the experiments from the main paper or supplementary material report any numbers from any other published work. On permuted MNIST, Table 2 could include results from [1-4]. Of particular interest is [3], which reports 98.9% accuracy with a 100-unit LSTM initialized with orthogonal and identity weight matrices; this is significantly higher than all reported results for the sequential MNIST task. For Charades, all reported results appear significantly lower than the baseline methods reported in [5] and [6] with no explanation. All methods work on “fc7 features from the RGB stream of a two-stream CNN provided by the organizers of the [Charades] challenge”, and the best-performing method (Skip GRU) achieves 9.02 mAP. This is significantly lower than the two-stream results from [5] (11.9 mAP and 14.3 mAP) and also lower than pretrained AlexNet features averaged over 30 frames and classified with a linear SVM, which [5] reports as achieving 11.3 mAP. I don’t expect to see state-of-the-art performance on Charades; the point of the experiment is to demonstrate that Skip-RNNs perform as well or better than their non-skipping counterparts, which it does. 
However I am surprised at the low absolute performance of all reported results, and would appreciate it if the authors could help to clarify whether this is due to differences in experimental setup or something else. In a similar vein, from the supplementary material, sentiment analysis on IMDB and action classification on UCF-101 are well-studied problems, but the authors do not compare with any previously published results on these tasks. Though the experiments may not show state-of-the-art performance, I think that they still serve to demonstrate the utility of the Skip-RNN architecture when compared side-by-side with a similarly tuned non-skipping baseline. However I feel that the authors should include some discussion of other published results. On the whole I believe that the task and method are interesting, and the experiments convincingly demonstrate the utility of Skip-RNNs compared to the authors' own baselines. I will happily upgrade my rating of the paper if the authors can address my concerns over prior work in the experiments.
References
[1] Le et al., "A Simple Way to Initialize Recurrent Networks of Rectified Linear Units", arXiv 2015
[2] Arjovsky et al., "Unitary Evolution Recurrent Neural Networks", ICML 2016
[3] Cooijmans et al., "Recurrent Batch Normalization", ICLR 2017
[4] Zhang et al., "Architectural Complexity Measures of Recurrent Neural Networks", NIPS 2016
[5] Sigurdsson et al., "Hollywood in Homes: Crowdsourcing Data Collection for Activity Understanding", ECCV 2016
[6] Sigurdsson et al., "Asynchronous Temporal Fields for Action Recognition", CVPR 2017
iclr_2018_SyuWNMZ0W
The maximum mean discrepancy (MMD) between two probability measures P and Q is a metric that is zero if and only if all moments of the two measures are equal, making it an appealing statistic for two-sample tests. Given i.i.d. samples from P and Q, Gretton et al. (2012) show that we can construct an unbiased estimator for the square of the MMD between the two distributions. If P is a distribution of interest and Q is the distribution implied by a generative neural network with stochastic inputs, we can use this estimator to train our neural network. However, in practice we do not always have i.i.d. samples from our target of interest. Data sets often exhibit biases-for example, under-representation of certain demographics-and if we ignore this fact our machine learning algorithms will propagate these biases. Alternatively, it may be useful to assume our data has been gathered via a biased sample selection mechanism in order to manipulate properties of the estimating distribution Q. In this paper, we construct an estimator for the MMD between P and Q when we only have access to P via some biased sample selection mechanism, and suggest methods for estimating this sample selection mechanism when it is not already known. We show that this estimator can be used to train generative neural networks on a biased data sample, to give a simulator that reverses the effect of that bias.
This paper proposes an importance-weighted estimator of the MMD, in order to estimate the MMD between distributions based on samples biased according to a known scheme. It then discusses how to estimate the scheme when it is unknown, and further proposes using it in either the MMD-based generative models of Y. Li et al. (2015) / Dziugaite et al. (2015), or in the MMD GAN of C.-L. Li et al. (2017). The estimator itself is natural (and relatively obvious), though it has some drawbacks that aren't fully discussed (below). The application to GAN-type learning is reasonable, and topical. The first, univariate, experiment shows that the scheme is at least plausible. But the second experiment, involving a simple T ratio based on whether an MNIST digit is a 0 or a 1, doesn't even really work! (The best model only gets the underrepresented class from 20% up to less than 40%, rather than the desired 50%, and the "more realistic" setting only to 33%.) It would be helpful to debug whether this is due to the classifier being incorrect, estimator inaccuracies, or something else. In particular, I would try using a T based on a pretrained convnet independent of the autoencoder representation in the MMD GAN, to help diagnose where the failure mode comes from. Without at least a working should-be-easy example like this, and with the rest of the paper's technical contribution so small, I just don't think this paper is ready for ICLR. It's also worth noting that the equivalent algorithm for either vanilla GANs or Wasserstein GANs would be equally obvious.
Estimator: In the discussion about (2): where does the 1/m bias come from? This doesn't seem to be in Robert and Casella section 3.3.2, which is the part of the book that I assume you're referring to (incidentally, you should specify that rather than just citing a 600-page textbook). Moreover, it is worth noting that Robert and Casella emphasize that if E[1 / \tilde T] is infinite, the importance sampling estimator can be quite bad (for example, the estimator may have infinite variance). This happens when \tilde T puts mass in a neighborhood around 0, i.e. when the thinned distribution doesn't have support at any place that P does. In the biased-observations case, this is in some sense unsurprising: if we don't see *any* data in a particular class of inputs, then our estimates can be quite bad (since we know nothing about a group of inputs that might strongly affect the results). In the modulating case, the equivalent situation is when F(x) lacks a mean, which seems less likely. Thus although this is probably not a huge problem for your case, it's worth at least mentioning. (See also the following relevant blog posts: https://radfordneal.wordpress.com/2008/08/17/the-harmonic-mean-of-the-likelihood-worst-monte-carlo-method-ever/ and https://xianblog.wordpress.com/2012/03/12/is-vs-self-normalised-is/ .) The paper might be improved by stating (and proving) a theorem with expressions for the rate of convergence of the estimator, and how they depend on T.
Minor: Another piece of somewhat-related work is Xiong and Schneider, Learning from Point Sets with Observational Bias, UAI 2014. Sutherland et al. 2016 and 2017, often referenced in the same block of citations, are the same paper. On page 3, above (1): "Since we have projected the distributons into an infinite-dimensional space, the distance between the two distributions is zero if and only if all their moments are the same."
An infinite-dimensional space isn't enough; the kernel must further be characteristic, as you mention. See e.g. Sriperumbudur et al. (AISTATS 2010) for more details. Figure 1(b) seems to be plotting only the first term of \tilde T, without the + 0.5.
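For concreteness, the kind of estimator under discussion can be sketched as follows. This is an editor's illustration, not the paper's code: the self-normalized weighting and the assumption that `thinning(X_obs)` returns the per-sample retention probabilities T(x) are both choices made here.

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return np.exp(-d2 / (2.0 * sigma**2))

def weighted_mmd2(X_obs, Y, thinning, sigma=1.0):
    """Estimate MMD^2(P, Q) when X_obs comes from a thinned P, i.e. samples from
    P were kept with probability T(x). Each observed point is importance-weighted
    by 1/T(x) and the weights are self-normalized. As the review notes, points
    where T(x) is close to zero make this estimator unstable."""
    w = 1.0 / thinning(X_obs)          # importance weights, shape (n,)
    w = w / w.sum()
    Kxx = rbf_kernel(X_obs, X_obs, sigma)
    Kxy = rbf_kernel(X_obs, Y, sigma)
    Kyy = rbf_kernel(Y, Y, sigma)
    return float(w @ Kxx @ w - 2.0 * (w @ Kxy).mean() + Kyy.mean())
```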
iclr_2018_S1EwLkW0W
The ADAM optimizer is exceedingly popular in the deep learning community. Often it works very well, sometimes it doesn't. Why? We interpret ADAM as a combination of two aspects: for each weight, the update direction is determined by the sign of the stochastic gradient, whereas the update magnitude is solely determined by an estimate of its relative variance. We disentangle these two aspects and analyze them in isolation, shedding light on ADAM's inner workings. Transferring the "variance adaptation" to momentum-SGD gives rise to a novel method, completing the practitioner's toolbox for problems where ADAM fails.
Summary: The paper tries to improve Adam based on variance adaptation with momentum. Two algorithms are proposed, M-SSD (Stochastic Sign Descent with Momentum) and M-SVAG (Stochastic Variance-Adapted Gradient with Momentum), to solve the finite-sum minimization problem. A convergence analysis is provided for SVAG in the strongly convex case. Numerical experiments are provided for some standard neural network architectures on three common datasets (MNIST, CIFAR-10 and CIFAR-100), comparing the performance of M-SSD and M-SVAG to two existing algorithms: SGD with momentum and Adam.
Comments:
Page 4, line 5: You should define \nu clearly.
Theorem 1: In the strongly convex case, the assumption E||g_t||^2 \leq G^2 (with G a constant) is too strong. In this case, G could be infinite. If G is not infinite, you already assume that your algorithm converges, which is why this assumption is not appropriate for the strongly convex case. If G is infinite (which is really possible for strongly convex functions), your proof runs into trouble, as eq. (40) is no longer valid. Also, to compute \gamma_{t,i}, it is required to compute \nabla f_{t,i}, which is a full gradient. By doing this, the computational cost should add a dependence on M, which is very large as you mentioned in the introduction. Given your O(1/t) rate, the complexity is then worse than that of gradient descent and SGD as well.
As I understand, there are no theoretical results for M-SSD and M-SVAG, only a result for SVAG with exact \eta_i^2 in the strongly convex case. Also, the theoretical results are not strong enough. Hence, the experiments need to be more convincing, at least on some different, complicated deep neural network architectures. As I see it, on some datasets Adam performs better than M-SSD, and on other datasets Adam performs better than M-SVAG. The same situation holds for M-SGD. My question is: when should we use M-SSD or M-SVAG? For a given dataset, why should we not use Adam or M-SGD (or other existing algorithms such as Adagrad or RMSprop), but your algorithms? You should run more experiments on various datasets and architectures to be more convincing, since the theoretical results are not strong enough. Would you consider trying VGG or ResNet on ImageNet?
I like the idea of the paper, but I would love it if the author(s) could strengthen the theoretical results to convince people. Otherwise, the results in this paper cannot be considered good enough. At this moment, I think the paper is still not ready for publication.
Minor comments:
Page 2, eq. (6): You should mention that "1" is a vector.
Page 4, line 4: Q in R^{d} => Q in R^{d x d}
Page 6, Theorem 1: You should define the finite-sum optimization problem with f, since you have not used it before.
Page 6, Theorem 1: You should use another notation for the "\mu"-strong-convexity parameter, since you have another "\mu" (the momentum parameter) in Section 3.4.
Pages 4 and 7: Be careful with the cases c = 0 (page 4) and mu = 1 (pages 7-8), which involve division by 0.
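To make the variance-adaptation idea concrete, here is an editor's sketch of a single update in the spirit of M-SVAG. The moving-average formulation and the particular signal-to-noise ratio below are assumptions for illustration; the paper's exact variance estimator differs.

```python
import numpy as np

def msvag_like_step(w, grad, m, v, lr=0.1, beta=0.9, eps=1e-8):
    """One variance-adapted momentum-SGD step (illustrative only).

    m and v are exponential moving averages of the stochastic gradient and its
    element-wise square. gamma estimates, per coordinate, how much of the second
    moment is signal rather than noise, and scales the step accordingly:
    coordinates with noisy gradients take smaller steps."""
    m = beta * m + (1 - beta) * grad
    v = beta * v + (1 - beta) * grad**2
    gamma = (m**2) / (v + eps)        # lies in [0, 1] by Cauchy-Schwarz
    w = w - lr * gamma * m
    return w, m, v
```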
iclr_2018_SkYXvCR6W
This paper puts forward a new text-to-tensor representation that relies on information compression techniques to assign shorter codes to the most frequently used characters. This representation is language-independent, requires no pretraining, and produces an encoding with no information loss. It provides an adequate description of the morphology of text, as it is able to represent prefixes, declensions, and inflections with similar vectors, and can represent even words unseen in the training dataset. Similarly, as it is compact yet sparse, it is ideal for speeding up training times using tensor processing libraries. As part of this paper, we show that this technique is especially effective when coupled with convolutional neural networks (CNNs) for text classification at the character level. We apply two variants of CNN coupled with it. Experimental results show that it drastically reduces the number of parameters to be optimized, resulting in competitive classification accuracy values in only a fraction of the time spent by one-hot encoding representations, thus enabling training on commodity hardware.
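A minimal sketch of the compression step described above, using Huffman coding over character frequencies (editor's illustration; the paper's exact coding scheme and tensor layout may differ).

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build prefix codes: frequent characters get shorter bit strings, and no
    code is a prefix of another, so the encoding is lossless and decodable."""
    freqs = Counter(text)
    # heap entries: (frequency, tie-breaker, list of characters in the subtree)
    heap = [(f, i, [c]) for i, (c, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    codes = {c: "" for c in freqs}
    tie = len(heap)
    while len(heap) > 1:             # repeatedly merge the two least frequent subtrees
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        for c in left:
            codes[c] = "0" + codes[c]
        for c in right:
            codes[c] = "1" + codes[c]
        heapq.heappush(heap, (f1 + f2, tie, left + right))
        tie += 1
    return codes

codes = huffman_codes("the quick brown fox jumps over the lazy dog")
encode = lambda s: "".join(codes[c] for c in s)
print(codes[" "], codes["t"], codes["z"])   # frequent characters get shorter codes
print(encode("the"))
```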
The manuscript proposes to use prefix codes to compress the input to a neural network for text classification. It builds upon the work by Zhang & LeCun (2015), where the same tasks are used. There are several issues with the paper and I cannot recommend acceptance in its current state.
- It looks like the paper is not finished.
- The datasets are not described properly.
- It is not clear to me where the baseline results come from. They do not match those in the Zhang paper (I have tried to find the matching accuracies there).
- It is not clear to me what the baselines actually are or where I can find more information on them.
- The results are not remarkable.
Because of this, the paper needs to be updated and cleaned up before it can be properly reviewed. On top of this, I do not enjoy the style the paper is written in; the language is convoluted. For example: "The effort to use Neural Convolution Networks for text classification tasks is justified by the possibility of appropriating tools from the recent developments of techniques, libraries and hardware used especially in the image classification" I do not know which message the paper tries to get across here. As a reviewer, my impression (which is subjective) is that the authors used difficult language to make the manuscript look more impressive. The acknowledgements should not be included here either.
iclr_2018_Hkc-TeZ0W
A HIERARCHICAL MODEL FOR DEVICE PLACEMENT We introduce a hierarchical model for efficient placement of computational graphs onto hardware devices, especially in heterogeneous environments with a mixture of CPUs, GPUs, and other computational devices. Our method learns to assign graph operations to groups and to allocate those groups to available devices. The grouping and device allocations are learned jointly. The proposed method is trained with policy gradient and requires no human intervention. Experiments with widely-used computer vision and natural language models show that our algorithm can find optimized, non-trivial placements for TensorFlow computational graphs with over 80,000 operations. In addition, our approach outperforms placements by human experts as well as a previous state-of-the-art placement method based on deep reinforcement learning. Our method achieves runtime reductions of up to 60.6% per training step when applied to models such as Neural Machine Translation.
The paper seems clear enough and original enough. The idea of jointly forming groups of operations to colocate and figuring out their placement on devices seems to hold merit. Where the paper falls short is in motivating the problem setting. Traditionally, for determining optimal execution plans, one may resort to cost-based optimization (e.g., database management systems). This paper's introduction provides precisely one statement to suggest that may not work for deep learning. Here's the relevant phrase: "the cost function is typically non-stationary due to the interactions between multiple devices". Unfortunately, this statement raises more questions than it answers. Why are the cost functions non-stationary? What exactly makes them dynamic? Are we talking about a multi-tenancy setting where multiple processes execute on the same device? Unlikely, because GPUs are involved. Without a proper motivation, it's difficult to appreciate the methods devised.
Pros:
- Jointly optimizing the forming of groups and placing them seems to have merit
- Experiments show improvements over placement by human "experts"
- Targets an important problem
Cons:
- Related work seems inadequately referenced. There exist other linear/tensor algebra engines/systems that perform such optimization, including placing operations on devices in a distributed setting. This paper should at least cite those papers and qualitatively compare against those approaches. Here's one reference (others should be easy to find): "SystemML's Optimizer: Plan Generation for Large-Scale Machine Learning Programs" by Boehm et al, IEEE Data Engineering Bulletin, 2014.
- The methods are not well motivated. There are many approaches to devising optimal execution plans, e.g., rule-based, cost-based, learning-based. In particular, what makes cost-based optimization inapplicable? Also, please provide some reasoning behind your hypothesis, which seems to be that while costs may be dynamic, optimally forming groups and placing them is learnable.
- The template seems off. I don't see the usual two lines under the title ("Anonymous authors", "Paper under double-blind review").
- The title seems misleading. "... Device Placement" seems to suggest that one is placing devices when in fact the operators are being placed.
iclr_2018_BJjquybCW
We analyze the expressiveness and loss surface of practical deep convolutional neural networks (CNNs) with shared weights and max pooling layers. We show that such CNNs produce linearly independent features at a "wide" layer which has more neurons than the number of training samples. This condition holds e.g. for the VGG network. Furthermore, we provide for such wide CNNs necessary and sufficient conditions for global minima with zero training error. For the case where the wide layer is followed by a fully connected layer we show that almost every critical point of the empirical loss is a global minimum with zero training error. Our analysis suggests that both depth and width are very important in deep learning. While depth brings more representational power and allows the network to learn high level features, width smoothes the optimization landscape of the loss function in the sense that a sufficiently wide network has a well-behaved loss surface with almost no bad local minima.
This paper presents an analysis of convolutional neural networks from the perspective of how the rank of the features is affected by the kinds of layers found in the most popular networks. Their analysis leads to the formulation of a certain theorem about the global minima with respect to parameters in the latter portion of the network. The authors ask important questions, but I am not sure that they obtain important answers. On the plus side, I'm glad that people are trying to further our understanding our neural networks, and I think that their investigation is worthy of being published. They present a collection of assumptions, lemmas, and theorems. They have no choice but to have assumptions, because they want to abstract away the "data" part of the analysis while still being able to use certain properties about the rank of the features at certain layers. Most of my doubts about this paper come from the feeling that equivalent results could be obtained with a more elegant argument about perturbation theory, instead of something like the proof of Lemma A1. That being said, it's easy to voice such concerns, and I'm willing to believe that there might not exist a simple way to derive the same results with an approach more along the line of "whatever your data, pick whatever small epsilon, and you can always have the desired properties by perturbing your data by that small epsilon in a random direction". Have the authors tried this ? I'm not sure if the authors were the first to present this approach of analyzing the effects of convolutions from a "patch perspective", but I think this is a clever approach. It simplifies the statement of some of their results. I also like the idea of factoring the argument along the concept of some critical "wide layer". Good review of the literature. I wished the paper was easier to read. Some of the concepts could have been illustrated to give the reader some way to visualize the intuitive notions. For example, maybe it would have been interesting to plot the rank of features a every layer for LeNet+MNIST ? At the end of the day, if a friend asked me to summarize the paper, I would tell them : "Features are basically full rank. Then they use a square loss and end up with an over-parametrized system, so they can achieve loss zero (i.e. global minimum) with a multitude of parameters values." Nitpicking : "This paper is one of the first ones, which studies CNNs." This sentence is strange to read, but I can understand what the authors mean. "This is true even if the bottom layers (from input to the wide layer) and chosen randomly with probability one." There's a certain meaning to "with probability one" when it comes to measure theory. The authors are using it correctly in the rest of the paper, but in this sentence I think they simply mean that something holds if "all" the bottom layers have random features.
iclr_2018_HkjL6MiTb
Survival Analysis (time-to-event analysis) in the presence of multiple possible adverse events, i.e., competing risks, is a challenging, yet very important problem in medicine, finance, manufacturing, etc. Extending classical survival analysis to competing risks is not trivial since only one event (e.g. one cause of death) is observed and hence the incidence of an event of interest is often obscured by other related competing events. This leads to the nonidentifiability of the event time distribution parameters, which makes the problem significantly more challenging. In this work we introduce the Siamese Survival Prognosis Network, a novel Siamese deep neural network architecture that is able to effectively learn from data in the presence of multiple adverse events. The Siamese Survival Network is especially crafted to issue pairwise concordant time-dependent risks, in which longer event times are assigned lower risks. Furthermore, our architecture is able to directly optimize an approximation to the C-discrimination index, rather than relying on well-known losses such as cross-entropy, which are not able to capture the unique requirements of survival analysis with competing risks. Our results show consistent performance improvements on a number of publicly available medical datasets over both statistical and deep learning state-of-the-art methods.
The authors tackle the problem of estimating risk in a survival analysis setting with competing risks. They propose directly optimizing the time-dependent discrimination index using a Siamese survival network. Experiments on several real-world datasets reveal modest gains in comparison with the state of the art.
- The authors should clearly highlight what their main technical contribution is. For example, Eqs. 1-6 appear to be background material, since the time-dependent discrimination index is taken from the literature, as the authors point out earlier. However, this is unclear from the writing.
- One of the main motivations of the authors is to propose a model that is specially designed to avoid the nonidentifiability issue in a scenario with competing risks. It is unclear why the authors' solution is able to solve such an issue, especially given the modest reported gains in comparison with several competitive baselines. In other words, the authors oversell their own work, especially in comparison with the state of the art.
- The authors use off-the-shelf Siamese networks for their setting and thus it is questionable whether there is any novelty there. The application/setting may be novel, but not the architecture of choice.
- From Eq. 4 to Eq. 5, the authors argue that the denominator does not depend on the model parameters and can be ignored. However, afterwards the objective does combine time-dependent discrimination indices of several competing risks, with different denominator values. This could be problematic if the risks are unbalanced.
- The competitive gain of the authors' method in comparison with other competing methods is minor.
- The authors introduce F(t, D | x) as the cumulative incidence function (CIF) at the beginning of Section 2; however, afterwards they use R^m(t, x), which they define as the risk of the subject experiencing event m before t. Is the latter a proxy for the former? How are they related?
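For readers unfamiliar with the metric being optimized, a simplified sketch of a time-dependent concordance computation follows. This is an editor's illustration only: it ignores censoring weights and the cause-specific details of the competing-risks setting that the paper handles, and `risk_fn` is an assumed interface.

```python
def concordance(times, events, risk_fn):
    """Fraction of comparable pairs that the model orders correctly.

    A pair (i, j) is comparable if subject i experienced the event at time t_i
    and subject j was still event-free at t_i. It is concordant if the model
    assigns i a higher risk at that time, i.e. R(t_i, x_i) > R(t_i, x_j).
    `risk_fn(t, idx)` returns the model's risk for subject `idx` at time t."""
    concordant, comparable = 0, 0
    for i in range(len(times)):
        if not events[i]:
            continue                      # censored subjects cannot anchor a pair
        for j in range(len(times)):
            if times[j] > times[i]:
                comparable += 1
                concordant += risk_fn(times[i], i) > risk_fn(times[i], j)
    return concordant / max(comparable, 1)
```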
iclr_2018_B1DmUzWAW
A SIMPLE NEURAL ATTENTIVE META-LEARNER Deep neural networks excel in regimes with large amounts of data, but tend to struggle when data is scarce or when they need to adapt quickly to changes in the task. In response, recent work in meta-learning proposes training a meta-learner on a distribution of similar tasks, in the hopes of generalization to novel but related tasks by learning a high-level strategy that captures the essence of the problem it is asked to solve. However, many recent meta-learning approaches are extensively hand-designed, either using architectures specialized to a particular application, or hard-coding algorithmic components that constrain how the meta-learner solves the task. We propose a class of simple and generic meta-learner architectures that use a novel combination of temporal convolutions and soft attention; the former to aggregate information from past experience and the latter to pinpoint specific pieces of information. In the most extensive set of meta-learning experiments to date, we evaluate the resulting Simple Neural AttentIve Learner (or SNAIL) on several heavily-benchmarked tasks. On all tasks, in both supervised and reinforcement learning, SNAIL attains state-of-the-art performance by significant margins.
The paper proposes a general neural network structure that includes TC (temporal convolution) blocks and Attention blocks for meta-learning, specifically, for episodic task learning. Through intensive experiments on various settings including few-shot image classification on Omniglot and Mini-ImageNet, and four reinforcement learning applications, the authors show that the proposed structure can achieve highly comparable performance wrt the corresponding specially designed state-of-the-art methods. The experiment results seem solid and the proposed structure is with simple design and highly generalizable. The concern is that the contribution is quite incremental from the theoretical side though it involves large amount of experimental efforts, which could be impactful. Please see the major comment below. One major comment: - Despite that the work is more application oriented, the paper would have been stronger and more impactful if it includes more work on the theoretical side. Specifically, for two folds: (1) in general, some more work in investigating the task space would be nice. The paper assumes the tasks are “related” or “similar” and thus transferrable; also particularly in Section 2, the authors define that the tasks follow the same distribution. But what exactly should the distribution be like to be learnable and how to quantify such “related” or “similar” relationship across tasks? (2) in particular, for each of the experiments that the authors conduct, it would be nice to investigate some more on when the proposed TC + Attention network would work better and thus should be used by the community; some questions to answer include: when should we prefer the proposed combination of TC + attention blocks over the other methods? The result from the paper seems to answer with “in all cases” but then that always brings the issue of “overfitting” or parameter tuning issue. I believe the paper would have been much stronger if either of the two above are further investigated. More detailed comments: - On Page 1, “the optimal strategy for an arbitrary range of tasks” lacks definition of “range”; also, in the setting in this paper, these tasks should share “similarity” or follow the same “distribution” and thus such “arbitrariness” is actually constrained. - On Page 2, the notation and formulation for the meta-learning could be more mathematically rigid; the distribution over tasks is not defined. It is understandable that the authors try to make the paradigm very generalizable; but the ambiguity or the abstraction over the “task distribution” is too large to be meaningful. One suggestion would be to split into two sections, one for supervised learning and one for reinforcement learning; but both share the same design paradigm, which is generalizable. - For results in Table 1 and Table 2, how are the confidence intervals computed? Is it over multiple runs or within the same run? It would be nice to make clear; in addition, I personally prefer either reporting raw standard deviations or conduct hypothesis testing with specified tests. The confidence intervals may not be clear without elaboration; such is also concerning in the caption for Table 3 about claiming “not statistically-significantly different” because no significance test is reported. - At last, some more details in implementation would be nice (package availability, run time analysis); I suppose the package or the source code would be publicly available afterwards?
iclr_2018_H13WofbAb
Distributed training of deep learning is widely conducted with large neural networks and large datasets. Besides asynchronous stochastic gradient descent (SGD), synchronous SGD is a reasonable alternative with better convergence guarantees. However, synchronous SGD suffers from stragglers. To make things worse, although there are some strategies dealing with slow workers, the issue of slow servers is commonly ignored. In this paper, we propose a new parameter server (PS) framework dealing with not only slow workers, but also slow servers by weakening the synchronization criterion. The empirical results show good performance when there are stragglers.
This paper introduces a parameter server architecture to improve distributed training of CNNs in the presence of stragglers. Specifically, the paper proposes partial pulling, where a worker only waits for the first b blocks rather than for all of the parameter blocks. This technique is combined with existing methods such as partial pushing (Pan et al., 2017) to form a partially synchronous SGD method. The method is evaluated with ResNet-50 using synthetic delays.
Comments for the author: The paper is well-written and easy to follow. The problem of synchronization costs being addressed is important, but it is unclear how much of it arises from large blocks.
1) The partial pushing method (Pan et al., 2017, Section 3.1) shows clear evidence for the problem using a real workload with a large number of workers. Unfortunately, in your Figure 2 this is not as obvious, and not real, since it uses simulated delays. More specifically, it is not clear how the workers behave in a real environment and whether you get a clear benefit from using a partial number of blocks as opposed to waiting for all of them.
2) Did you modify your code to support block-wise sending of gradients (some description of how the framework was modified would be helpful)? The idea is to send partial parameter blocks and, once 'b' blocks are received, compute the gradients. I feel that, with such a design, you may actually end up hurting performance by sending a large number of small packets in the no-failure case. For real, large data centers, this may cause a packet storm and subsequent throughput collapse (e.g., the incast problem). You need to show evidence that you do not hurt the failure-free case for a large number of workers.
3) The evaluation is on fairly small workloads (CIFAR-10). Again, evaluating on ImageNet and demonstrating a clear speedup over existing synchronous methods would be helpful. Furthermore, a clear description of your “pull” configuration (such as in Figure 1), i.e., how many actual bytes or blocks are sent and what the threshold is, would be helpful (beyond a vague 90%).
4) Another concern I have with partial synchronization methods is how you pick these configurations (pull 0.75, etc.). These appear to be dataset specific, and finding the optimal configuration requires significant experimentation that takes considerably more time than just running the baseline.
Overall, I feel there is not enough evidence that the problem arises specifically from large gradient blocks, and this needs to be clearly shown. To propose a solution for stragglers, the evaluation should be done in a datacenter environment with real stragglers (and not on small workloads with synthetic delays). Furthermore, the proposed technique, despite its simplicity, appears to be a rather incremental contribution.
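To make the partial-pull idea concrete, here is a minimal simulation sketch of how waiting for only a fraction of the parameter blocks hides straggler delay (the block count, wait fraction, and delay model are illustrative assumptions, not the paper's implementation):

    import random

    def iteration_wait(num_blocks=16, wait_fraction=1.0, straggler_prob=0.1):
        # each block arrives after a small base latency; occasionally a server straggles
        delays = sorted(0.01 + (1.0 if random.random() < straggler_prob else 0.0)
                        for _ in range(num_blocks))
        needed = max(1, int(round(wait_fraction * num_blocks)))
        return delays[needed - 1]   # the worker proceeds after the first `needed` blocks arrive

    trials = 1000
    full_sync = sum(iteration_wait(wait_fraction=1.0) for _ in range(trials)) / trials
    partial = sum(iteration_wait(wait_fraction=0.75) for _ in range(trials)) / trials
    print(full_sync, partial)       # partial pulling avoids waiting on the slowest servers

Note that this only models waiting time; the packet-count concern raised in point 2 (many small messages in the failure-free case) is not captured by such a simulation.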
iclr_2018_Hkp3uhxCW
In this work we explore a straightforward variational Bayes scheme for Recurrent Neural Networks. Firstly, we show that a simple adaptation of truncated backpropagation through time can yield good quality uncertainty estimates and superior regularisation at only a small extra computational cost during training, also reducing the amount of parameters by 80%. Secondly, we demonstrate how a novel kind of posterior approximation yields further improvements to the performance of Bayesian RNNs. We incorporate local gradient information into the approximate posterior to sharpen it around the current batch statistics. We show how this technique is not exclusive to recurrent neural networks and can be applied more widely to train Bayesian neural networks. We also empirically demonstrate how Bayesian RNNs are superior to traditional RNNs on a language modelling benchmark and an image captioning task, as well as showing how each of these methods improve our model over a variety of other schemes for training them. We also introduce a new benchmark for studying uncertainty for language models so future methods can be easily compared.
*Summary*
The paper applies variational inference (VI) with the 'reparameterisation' trick to Bayesian recurrent neural networks (BRNNs). The paper first considers the "Bayes by Backprop" approach of Blundell et al. (2015) and then modifies the BRNN model with a hierarchical prior over the network parameters, which in turn requires a hierarchical variational approximation with a simple linear recognition model. Several experiments demonstrate the quality of the predictions and of the uncertainty estimates compared to dropout.
*Originality + significance*
To my knowledge, there is no previous work on VI with the reparameterisation trick for BRNNs. However, one could say that this paper is, on careful examination, an application of reparameterisation-gradient VI to a specific model. Nevertheless, the parameterisation of the conditional variational distribution q(\theta | \phi, (x, y)) using a recognition model is interesting and could be useful in other models. However, this has not been tested or concretely shown in this paper. The idea of modifying the model by introducing variables to obtain a looser bound that can accommodate a richer variational family is also not new; see, for example, hierarchical variational models (Ranganath et al., 2016).
*Clarity*
The paper is, in general, well-written. However, the presentation in Section 4 is hard to follow. I would prefer if Appendix A3 were moved up front -- in that case, it would be clear that the model is modified to contain \phi, that a variational approximation over both \theta and \phi is needed, and that a q which couples \theta, \phi, and the gradient of the log-likelihood term w.r.t. \phi is chosen.
Additional comments:
Why is the variational approximation called "sharpened"? At test time, standard VI just uses the fixed q(\theta) after training.
It's not clear to me how prediction is done when using 'posterior sharpening' -- how is q(\theta | \phi, x) in eqs. 19-20 parameterised? The first paragraph of page 5 uses q(\theta | \phi, (x, y)), but y is not known at test time.
What is C in eq. 9?
The comment "variational typically underestimate the uncertainty in the posterior...whereas expectation propagation methods are mode averaging and so tend to overestimate uncertainty..." is not precise. EP can do mode averaging as well as mode seeking, depending on the underlying and approximate factor graphs. In the Bayesian neural network setting, where the likelihood is factorised point-wise and there is one factor per likelihood term, EP is just as mode-seeking as variational inference. On the other hand, variational methods can avoid modes too; see the mixture-of-Gaussians example in the "Two problems with variational EM..." paper by Turner and Sahani (2010).
There are also many hyperparameters that need to be chosen -- what would happen if these were optimised using the free energy?
Was there any KL reweighting schedule, as done in the original BBB paper?
What is the significance of the difference between BBB and BBB with sharpening in the language modelling task?
Was sharpening used in the image caption generation task?
What is the computational complexity of BBB with posterior sharpening? Twice that of BBB? If so, would BBB reach the same performance if optimised for longer? It would be interesting to see the time/accuracy frontier.
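For context on the "sharpening" question raised above: as I read the paper, the conditional approximation is roughly of the form

    q(\theta | \phi, (x, y)) = N( \theta ; \phi - \eta \odot \nabla_\phi log p(y | \phi, x), \sigma^2 I ),

i.e., \theta is sampled around a copy of \phi that has been nudged along the gradient of the minibatch log-likelihood, and the bound has the nested structure

    E_{q(\phi)} [ E_{q(\theta|\phi,(x,y))} [ log p(y | \theta, x) ] - KL( q(\theta|\phi,(x,y)) || p(\theta|\phi) ) ] - KL( q(\phi) || p(\phi) ).

The exact scaling of \eta and the form of p(\theta | \phi) should be checked against the paper; this is only a reading aid, and it also makes the test-time question above concrete, since the inner q depends on y.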
iclr_2018_SJQO7UJCW
We propose a method for semi-supervised semantic segmentation using the adversarial network. While most existing discriminators are trained to classify input images as real or fake on the image level, we design a discriminator in a fully convolutional manner to differentiate the predicted probability maps from the ground truth segmentation distribution with the consideration of the spatial resolution. We show that the proposed discriminator can be used to improve the performance on semantic segmentation by coupling the adversarial loss with the standard cross entropy loss on the segmentation network. In addition, the fully convolutional discriminator enables the semi-supervised learning through discovering the trustworthy regions in prediction results of unlabeled images, providing additional supervisory signals. In contrast to existing methods that utilize weakly-labeled images, our method leverages unlabeled images without any annotation to enhance the segmentation model. Experimental results on both the PASCAL VOC 2012 dataset and the Cityscapes dataset demonstrate the effectiveness of our algorithm.
This paper describes techniques for training semantic segmentation networks. There are two key ideas:
- Attach a pixel-level GAN loss to the output semantic segmentation map. That is, add a discriminator network that decides whether each pixel in the label map belongs to a real label map or not. Of course, this loss alone is unaware of the input image and would drive the network to produce plausible label maps that have no relation to the input image. An additional cross-entropy loss (the standard semantic segmentation loss) is used to tie the network to the input and the ground-truth label map, when available.
- Additional unlabeled data is utilized by using a trained semantic segmentation network to produce a label map with associated confidences; high-confidence pixels are used as ground-truth labels and are fed back to the network as training data.
The paper is fine and the work is competently done, but the experimental results never quite come together. The technical development isn't surprising and doesn't have much to teach researchers working in the area. Given that the technical novelty is rather light and the experimental benefits are not quite there, I cannot recommend the paper for publication in a first-tier conference.
Some more detailed comments:
1. The GAN and the semi-supervised training scheme appear to be largely independent. The GAN can be applied without any unlabeled data, for example. The paper generally appears to present two largely independent ideas. This is fine, except they don't convincingly pan out in experiments.
2. The biggest issue is that the experimental results do not convincingly indicate that the presented ideas are useful.
2a. In the “Full” condition, the presented approach does not come close to the performance of the DeepLab baseline, even though the DeepLab network is used in the presented approach. Perhaps the authors have taken out some components of the DeepLab scheme for these experiments, such as multi-scale processing, but the question then is “Why?”. These components are not illegal, they are not cheating, they are not overly complex and are widely used. If the authors cannot demonstrate an improvement with these components, their ideas are unlikely to be adopted in state-of-the-art semantic systems, which do use these components and are doing fine.
2b. In the 1/8, 1/4, and 1/2 conditions, the performance of the baselines is not quoted. This is wrong. Since the authors are evaluating on the validation sets, there is no reason not to train the baselines on the same amount of labeled data (1/8, 1/4, 1/2) and report the results. The training scripts are widely available and such training of baselines for controlled experiments is commonly done in the literature. The reviewer is left to suspect, with no evidence given to the contrary, that the presented approach does not outperform the DeepLab baseline even in the reduced-data conditions.
A somewhat unflattering view of the work would be that this is another example of throwing a GAN at everything to see if it sticks. In this case, the experiments do not indicate that it did.
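A minimal sketch of how the two ideas above combine into a per-image loss (thresholds, weights, and the exact masking rule are illustrative assumptions, not the paper's values):

    import numpy as np

    def semi_sup_seg_loss(probs, disc_conf, labels=None, tau=0.2,
                          lam_adv=0.01, lam_semi=0.1, eps=1e-8):
        # probs:     (H, W, C) softmax output of the segmentation network
        # disc_conf: (H, W)    discriminator's per-pixel "looks like ground truth" score in (0, 1)
        # labels:    (H, W)    integer ground-truth labels, or None for an unlabeled image
        H, W, _ = probs.shape
        rows, cols = np.arange(H)[:, None], np.arange(W)[None, :]
        loss = lam_adv * (-np.log(disc_conf + eps)).mean()          # adversarial term
        if labels is not None:                                      # standard cross-entropy
            loss += (-np.log(probs[rows, cols, labels] + eps)).mean()
        else:                                                       # self-training on confident pixels
            pseudo = probs.argmax(axis=-1)
            mask = disc_conf > tau
            if mask.any():
                ce = -np.log(probs[rows, cols, pseudo] + eps)
                loss += lam_semi * ce[mask].mean()
        return loss

In the reviewer's terms, the adversarial term and the masked self-training term are indeed separable here: either can be dropped without affecting the other.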
iclr_2018_ryF-cQ6T-
The resemblance between the methods used in studying quantum many-body physics and in machine learning has drawn considerable attention. In particular, tensor networks (TNs) and deep learning architectures bear striking similarities to the extent that TNs can be used for machine learning. Previous results used one-dimensional TNs in image recognition, showing limited scalability and requiring high bond dimension. In this work, we train two-dimensional hierarchical TNs to solve image recognition problems, using a training algorithm derived from the multipartite entanglement renormalization ansatz (MERA). This approach overcomes scalability issues and implies novel mathematical connections among quantum many-body physics, quantum information theory, and machine learning. While keeping the TN unitary in the training phase, TN states can be defined, which optimally encode each class of the images into a quantum many-body state. We study the quantum features of the TN states, including quantum entanglement and fidelity. We suggest these quantities could be novel properties that characterize the image classes, as well as the machine learning tasks. Our work could be further applied to identifying possible quantum properties of certain artificial intelligence methods.
Full disclosure: the authors' submission is not anonymous. They included a github link at the bottom of page 6 and I am aware of the name of the author and coauthors (and have previously read their work and am a fan of it). Thus, this review is not double blind. I notified the area chair last week and we agreed that I submit this review.
---
This is an interesting application of tensor networks to machine learning. The work proposes using a tree tensor network for image classification. Each image is first mapped into a higher-dimensional space. Then the input features are contracted with the tensors of the tensor network. The maximum value of the final layer of the network gives the predicted class. The training algorithm is inspired by the multipartite entanglement renormalization ansatz: it corresponds to updating each tensor in the network by performing a singular value decomposition of the environment tensor (everything in the cost function after removing the current tensor to be updated).
Overall, I think this is an interesting, novel contribution, but it is not accessible to non-physicists right now. The paper could be rewritten to be accessible to non-physicists and would be a highly-valuable interdisciplinary contribution.
* Consider redoing the experiments with a different cost function: least squares is an unnatural cost function to use for classification. Cross entropy would be better.
* Discuss the scalability: why did you downsample MNIST from 28x28 pixels to 16x16 pixels? Why is training accuracy not reported on the 10-class model in Table 1? If it is because of a slow implementation, that's fine. But if it is because of the scalability of the method, it would be good to report that. In either case it wouldn't hurt the paper, it is just important to know.
* In section 5, you say "input vectors are still initially arranged ... according to their spatial locations in the image". But don't you change the spatial locations of the image to follow equation (10)? It would be good to add a sentence clarifying this.
---
In its current form, reading the paper requires a physics background. There are a few things that would make it easier to read for a general machine learning audience:
* connect your method to matrix factorization and tensor decomposition approaches
* include an algorithm box for Strategy-I and Strategy-II
* include an appendix, with a brief review of upward and downward indices which is crucial for understanding your method (few people in machine learning are familiar with Einstein notation)
* relate your interesting ideas about quantum states to existing work in information theory. I am skeptical of the label 'quantum': how do quantum mechanical tools apply to images? What is a 'quantum' many-body state here? There is no intrinsic uncertainty principle at play in image classification. I would guess that the ideas you propose are equivalent to existing work in information theory. That would make it less confusing.
* in general, maybe mention the inspiration of your work from MERA, but avoid using physics language when there are no clear physical systems. This will make your work more understandable and easier to follow. A high-level motivation for MERA from a physics perspective suffices; the rest can be phrased in terms of tensor decompositions.
---
Minor nits:
* replace \citet with \citep everywhere - all citations are malformed
* figure 1 could be clarified - say that see-through gray dots are dimensions, blue squares are tensors, edges are contractions
* all figure x and y labels and legends are too small
* some typos: "which classify an input image by choosing"; "we apply different feature map to each"; small grammar issues in many places
* Figure 4: "up-down" and "left-right" not defined anywhere
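In the spirit of the accessibility suggestions above, here is a toy one-dimensional binary-tree contraction in plain NumPy (the cos/sin feature map is the usual choice in this line of work; dimensions and random tensors are purely illustrative, and the paper's actual network is two-dimensional and trained with MERA-style SVD updates rather than being a fixed forward pass):

    import numpy as np

    def feature_map(pixels):
        # map each pixel value in [0, 1] to a 2-dimensional vector
        return np.stack([np.cos(np.pi * pixels / 2), np.sin(np.pi * pixels / 2)], axis=-1)

    def ttn_forward(vectors, tensors):
        # vectors: list of leaf vectors (length a power of two); tensors: one order-3 tensor per layer
        for T in tensors:                      # T has shape (d_out, d_left, d_right)
            vectors = [np.einsum('oij,i,j->o', T, vectors[2 * k], vectors[2 * k + 1])
                       for k in range(len(vectors) // 2)]
        return vectors[0]                      # final vector, e.g. one score per class

    pixels = np.random.rand(8)                                       # a toy 8-pixel "image"
    leaves = list(feature_map(pixels))                               # eight 2-dimensional vectors
    dims = [2, 4, 8, 10]                                             # bond dimensions up the tree
    tensors = [np.random.randn(dims[l + 1], dims[l], dims[l]) for l in range(3)]
    scores = ttn_forward(leaves, tensors)                            # predicted class = scores.argmax()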
iclr_2018_BydLzGb0Z
Published as a conference paper at ICLR 2018 TWIN NETWORKS: MATCHING THE FUTURE FOR SEQUENCE GENERATION We propose a simple technique for encouraging generative RNNs to plan ahead. We train a "backward" recurrent network to generate a given sequence in reverse order, and we encourage states of the forward model to predict cotemporal states of the backward model. The backward network is used only during training, and plays no role during sampling or inference. We hypothesize that our approach eases modeling of long-term dependencies by implicitly forcing the forward states to hold information about the longer-term future (as contained in the backward states). We show empirically that our approach achieves 9% relative improvement for a speech recognition task, and achieves significant improvement on a COCO caption generation task.
1) Summary
This paper proposes a training formulation for recurrent neural networks (RNNs) that encourages the hidden representations to contain information useful for predicting future timesteps reliably. The authors propose to train a forward and a backward RNN in parallel. The forward RNN predicts forward in time and the backward RNN predicts backwards in time. While the forward RNN is trained to predict the next timestep, its hidden representation is forced to be similar to the representation of the backward RNN in the same optimization step. In experiments, it is shown that the proposed method improves training speed in terms of the number of training iterations, achieves a 0.8-point CIDEr improvement over baselines using the proposed training, and also achieves improved performance on a speech recognition task.
2) Pros:
+ Novel idea that makes sense for learning a more robust representation for predicting the future and preventing the model from learning only local temporal correlations.
+ Informative analysis that clearly identifies the strengths of the proposed method and where it fails to perform as expected.
+ Improved performance on the speech recognition task.
+ The idea is clearly explained and well motivated.
3) Cons:
Image captioning experiment: In the experimental section, there is an image captioning result in which the proposed method is used on top of two baselines. This experiment shows improvement over those baselines; however, the performance is still worse compared with baselines such as Lu et al. (2017) and Yao et al. (2016). It would be optimal if the authors could use their training method on such baselines and show improved performance, or explain why this cannot be done.
Unconditioned generation experiments: In these experiments, sequential pixel-by-pixel MNIST generation is performed, in which the proposed method did not help. Because of this, two conditioned setups are performed: 1) 25% of pixels are given before generation, and 2) 75% of pixels are given before generation. The proposed method performs similarly to the baseline in the 25% case, and better than the baseline in the 75% case. For completeness, and to come to a stronger conclusion on how much uncertainty really affects the proposed method, this experiment needs a case in which 50% of the pixels are given. Observing 25% of the pixels gives almost no information about the identity of the digit, so it makes sense that it is hard to encode the future; however, 50% of the pixels give a good idea of what the digit identity is. If the authors believe that the 50% case is not necessary, please feel free to explain why.
Additional comments:
The method is shown to converge faster than the baselines; however, it is possible that the baseline may finish training faster (the authors do acknowledge the additional computation needed for the backward RNN). It would be informative for the research community to see the relationship between training time (how long it takes in hours) and how fast it learns (iterations taken to learn).
Experiments on RL planning tasks would be interesting to see (maybe on a simple/predictable environment).
4) Conclusion
The paper proposes a method for training RNN architectures to better model the future in their internal state, supervised by another RNN modeling the future in reverse. Correctly modeling the future is very important for tasks that require making decisions about what to do in the future based on what we predict from the past.
The proposed method presents a possible way of better modeling the future; however, some of the results do not clearly back up the claim yet. The given score will improve if the authors are able to address the stated issues.
POST REBUTTAL RESPONSE:
The authors have addressed the comments on the MNIST experiments and show better results; however, as far as I can see, they did not address my concern about the comparisons in the image captioning experiment. In the image captioning experiment the authors choose two networks (Show & Tell and Soft Attention) that they improve using the proposed method, which end up performing similarly to the second-best baseline (Yao et al., 2016) based on Table 3 and their response. I requested that the authors use their method on the best-performing baselines (i.e., Yao et al., 2016 or Liu et al., 2017) or explain why this cannot be done (maybe my request was not clearly stated). Applying the proposed method on the strong baselines would highlight the authors' claims more strongly than just applying it on the average-performing chosen baselines. This request was not addressed; instead, the authors just improved the average-performing baselines in Table 3 to meet the best baselines. Given that the authors were able to improve the results on sequential MNIST and improve the average baselines, my rating improves by one point. However, I still have concerns about this method not being shown to improve the best methods presented in Table 3, which would give a more solid result. My rating changes to marginally above threshold for acceptance.
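For reference, the training objective being reviewed is roughly of the form (the weight \alpha, the exact parameterisation of g, and how gradients flow into the backward network are details to be checked against the paper):

    L = \sum_t [ -log p_f(y_t | h_t^f) - log p_b(y_t | h_t^b) + \alpha || g(h_t^f) - h_t^b ||^2 ],

where h_t^f and h_t^b are the cotemporal forward and backward hidden states, g is a small learned mapping, and the backward RNN and g are discarded at test time. The 50%-pixels case requested above is interesting precisely because the usefulness of the matching term should degrade as the future becomes more uncertain given the past.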
iclr_2018_HJrJpzZRZ
Can we build models that automatically learn about object motion from raw, unlabeled videos? In this paper, we study the problem of multi-step video prediction, where the goal is to predict a sequence of future frames conditioned on a short context. We focus specifically on two aspects of video prediction: accurately modeling object motion, and producing naturalistic image predictions. Our model is based on a flow-based generator network with a discriminator used to improve prediction quality. The implicit flow in the generator can be examined to determine its accuracy, and the predicted images can be evaluated for image quality. We argue that these two metrics are critical for understanding whether the model has effectively learned object motion, and propose a novel evaluation benchmark based on ground truth object flow. Our network achieves state-of-the-art results in terms of both the realism of the predicted images, as determined by human judges, and the accuracy of the predicted flow. Videos and full results can be viewed on the supplementary website: https://sites.google.com/site/omvideoprediction.
This is a fine paper that generally reads as a new episode in a series on motion-based video prediction with an eye towards robotic manipulation [Finn et al. 2016, Finn and Levine 2017, Ebert et al. 2017]. The work is rather incremental but is competently executed. It is in line with current trends in the research community and is a good fit for ICLR. The paper is well-written, reasonably scholarly, and contains stimulating insights. I recommend acceptance, despite some reservations. My chief criticism is a matter of research style: instead of this deluge of barely distinguishable least-publishable-unit papers on the same topic in every single conference, I wish the authors wouldn't slice so thinly, would devote more time to each paper, and would serve up a more substantial dish.
Some more detailed comments:
- The argument for evaluating visual realism never quite gels and is not convincing. The paper advocates two primary metrics: accuracy of the predicted motion and perceptual realism of the synthesized images. The argument for motion accuracy is clear and is clearly stated: it's the measure that is actually tied to the intended application, which is using action-conditional motion prediction for control. A corresponding argument for perceptual realism is missing. Indeed, a skeptical reviewer may suspect that the authors needed to add perceptual realism to the evaluation because that's the only thing that justifies the adversarial loss. The adversarial loss is presented as the central conceptual contribution of the paper, but doesn't actually make a difference in terms of task-relevant metrics. A skeptical perspective on the paper is that the adversarial loss just makes the images look prettier but makes no difference in terms of task performance (control). This is an informative negative result. It's not how the paper is written, though.
- The “no adversary”/“no adv” condition in Table 1 and Figure 4 is misleading. It's not properly controlled. It is not the case that the adversarial loss was simply removed. The regression loss was also changed from l_1 to l_2. This is not right. The motivation for this control is to evaluate the impact of the adversarial loss, which is presented as the key conceptual contribution of the paper. It should be a proper control. The other loss should remain what it is in the full “Ours” condition (i.e., l_1).
- The last sentence in the caption of Table 1 -- “Slight improvement in motion is observed by training with an adversary as well” -- should be removed. The improvement is in the noise.
- Generally, the quantitative impact of the adversarial loss never comes together. The only statistically significant improvement is on perceptual image realism. The relevance of perceptual image realism to the intended task (control) is not substantiated, as discussed earlier.
- In the perceptual evaluation procedure, the “1 second” restriction is artificial and makes the evaluated methods appear better than they are. If we are serious about evaluating image realism and working towards passing the visual Turing test, we should report results without an artificial time limit. They won't look as flattering, but will properly report our progress on this journey. If desired, the results of timed comparisons can also be reported, but reporting just a timed comparison with an artificial limit of 1 second may mislead some readers into thinking that we are farther along than we actually are.
There are some broken sentences that mar an otherwise well-written paper:
- End of Section 1, “producing use a learned discriminator and show improvements in visual quality”
- Beginning of Section 3, “We first present the our overall network architecture”
- page 4, “to choose to copy pixels from the previous frame, used transformed versions of the previous frame”
- page 4, “convolving in the input image with”
- page 5, “is know to produce”
- page 5, “an additional indicating”
- page 5, “Adam Kingma & Ba (2015)” (use the other cite command)
- page 5, “we observes”
- page 5, “smaller batch sizes degrades”
- page 5, “larger batch sizes provides”
iclr_2018_ryjw_eAaZ
We introduce an unsupervised structure learning algorithm for deep, feed-forward neural networks. We propose a new interpretation for depth and inter-layer connectivity where a hierarchy of independencies in the input distribution is encoded in the network structure. This results in structures allowing neurons to connect to neurons in any deeper layer, skipping intermediate layers. Moreover, neurons in deeper layers encode low-order (small condition sets) independencies and have a wide scope of the input, whereas neurons in the first layers encode higher-order (larger condition sets) independencies and have a narrower scope. Thus, the depth of the network is automatically determined: it equals the maximal order of independence in the input distribution, which is the recursion-depth of the algorithm. The proposed algorithm constructs two main graphical models: 1) a generative latent graph (a deep belief network) learned from data and 2) a deep discriminative graph constructed from the generative latent graph. We prove that conditional dependencies between the nodes in the learned generative latent graph are preserved in the class-conditional discriminative graph. Finally, a deep neural network structure is constructed based on the discriminative graph. We demonstrate on image classification benchmarks that the algorithm replaces the deepest layers (convolutional and dense layers) of common convolutional networks, achieving high classification accuracy, while constructing significantly smaller structures. The proposed structure learning algorithm requires a small computational cost and runs efficiently on a standard desktop CPU.
The paper proposes an unsupervised structure learning method for deep neural networks. It first constructs a fully visible DAG learned from data and decomposes the variables into autonomous sets. Then latent variables are introduced and a stochastic inverse is generated. Later, a deep neural network structure is constructed based on the discriminative graph.
Both the problem considered in the paper and the proposed method look interesting, and the resulting structure seems nice. However, the reviewer finds a major technical flaw in the paper. The foundation of the proposed method is preserving the conditional dependencies in graph G, and each step mentioned in the paper, as it claims, preserves all the conditional dependencies. However, in Section 2.2, it seems that the stochastic inverse does not. In Fig. 3(b), A and B are no longer dependent conditioned on {C,D,E} due to the v-structure induced at nodes H_A and H_B. Also in Fig. 3(c), if the reviewer understands correctly, the bidirectional edge between H_A and H_B is equivalent to H_A <- h -> H_B, which also induces a v-structure, blocking the dependency between A and B. Therefore, the very foundation of the proposed method is shattered, and the reviewer requests an explicit explanation of this issue.
Besides that, the reviewer also finds unfair comparisons in the experiments.
1. In Section 5.1, although the authors show that the learned structure achieves 99.04%-99.07% compared with 98.4%-98.75% for fully connected layers, the comparisons are made by keeping the number of parameters similar in both cases. The comparisons are reasonable but not very convincing. Since the learned structures are much sparser than the fully connected ones, the number of neurons in the fully connected network must be significantly smaller. Did the authors compare with a fully connected network with a similar number of neurons? In such a case, which one is better? (Having fewer parameters is a plus, but in terms of accuracy the number of neurons really matters for a fair comparison. In practice, we definitely would not use such a small number of neurons in fully connected layers.)
2. In Section 5.2, it is interesting to observe that using features from conv10 is better than using those from the last dense layer, but it is not a fair comparison with the vanilla network. In vanilla VGG-16-D, there are 3 more conv layers and 3 more fully connected layers. If taking features from conv10 is good for the learned structure, then it may also be good to take features from conv10 and then apply 2-3 fully connected layers directly (the proposed structure learning is not comparable to convolutional layers; what it should really be compared to is fully connected layers). In such a case, which one is better? Secondly, VGG-16 is a large network designed for ImageNet data. For small datasets such as CIFAR-10 and CIFAR-100 it is really overkill. That may be the reason why taking the output of shallow layers achieves pretty good results.
3. In Fig. 6, again, comparing the learned structure with a fully connected network by keeping the number of parameters similar, resulting in a large difference in the number of neurons, is unfair from my point of view.
Furthermore, all the comparisons are made with respect to fully connected networks or vanilla CNNs. No other structure learning methods are compared against. Reasonable baseline methods should be included.
In conclusion, due to the above issues in both the method and the experiments, the reviewer thinks that this paper is not ready for publication.
iclr_2018_rJUBryZ0W
In representational lifelong learning an agent aims to learn to solve novel tasks while updating its representation in light of previous tasks. Under the assumption that future tasks are 'related' to previous tasks, representations should be learned in such a way that they capture the common structure across learned tasks, while allowing the learner sufficient flexibility to adapt to novel aspects of a new task. We develop a framework for lifelong learning in deep neural networks that is based on generalization bounds, developed within the PAC-Bayes framework. Learning takes place through the construction of a distribution over networks based on the tasks seen so far, and its utilization for learning a new task. Thus, prior knowledge is incorporated through setting a history-dependent prior for novel tasks. We develop a gradient-based algorithm implementing these ideas, based on minimizing an objective function motivated by generalization bounds, and demonstrate its effectiveness through numerical examples.
The paper considers the multi-task setting of machine learning. The first contribution of the paper is a novel PAC-Bayesian risk bound. This risk bound serves as an objective function for multi-task machine learning. A second contribution is an algorithm, called LAP, for minimizing a simplified version of this objective function. The LAP algorithm uses several training tasks to learn a prior distribution P over the hypothesis space. This prior distribution P is then used to find a posterior distribution Q that minimizes the same objective function over the test task. The third contribution is an empirical evaluation of LAP over a toy dataset of two clusters and over MNIST.
While the paper has the title of "life-long learning", the authors admit that all experiments are in the multi-task setting, where training is done over all tasks simultaneously. The novel risk bound and the LAP algorithm can definitely be applied to the life-long setting, where training tasks are available sequentially. But since there is no empirical evaluation in this setting, I suggest adjusting the title of the paper.
The novel risk bound of the paper is an extension of the bound from [Pentina & Lampert, ICML 2014]. The extension seems to be quite significant. Unlike the bound of [Pentina & Lampert, ICML 2014], the new bound allows re-using many different PAC-Bayesian complexity terms that were published previously.
I liked the risk bound and optimization sections of the paper, but I was less convinced by the empirical experiments. Since the paper improves the risk bound of [Pentina & Lampert, ICML 2014], I expected to see an empirical comparison of LAP and the optimization algorithm from the latter paper. To make such a comparison fair, both optimization algorithms should use the same base algorithm, e.g., ridge regression, as in [Pentina & Lampert, ICML 2014]. I also suggest using the datasets from the latter paper. The experiment with multi-task learning over the MNIST dataset looks interesting, but it is still a toy experiment. This experiment would be more convincing with more sophisticated datasets (CIFAR-10, ImageNet) and architectures (e.g., Inception-V4, ResNet).
Minor remarks:
Section 6, line 4: "Combing" -> "Combining"
Page 14, first equation: there should be "=" before the second expectation.
iclr_2018_ByRWCqvT-
Published as a conference paper at ICLR 2018 LEARNING TO CLUSTER IN ORDER TO TRANSFER ACROSS DOMAINS AND TASKS This paper introduces a novel method to perform transfer learning across domains and tasks, formulating it as a problem of learning to cluster. The key insight is that, in addition to features, we can transfer similarity information and this is sufficient to learn a similarity function and clustering network to perform both domain adaptation and cross-task transfer learning. We begin by reducing categorical information to pairwise constraints, which only considers whether two instances belong to the same class or not (pairwise semantic similarity). This similarity is category-agnostic and can be learned from data in the source domain using a similarity network. We then present two novel approaches for performing transfer learning using this similarity function. First, for unsupervised domain adaptation, we design a new loss function to regularize classification with a constrained clustering loss, hence learning a clustering network with the transferred similarity metric generating the training inputs. Second, for cross-task learning (i.e., unsupervised clustering with unseen categories), we propose a framework to reconstruct and estimate the number of semantic clusters, again using the clustering network. Since the similarity network is noisy, the key is to use a robust clustering algorithm, and we show that our formulation is more robust than the alternative constrained and unconstrained clustering approaches. Using this method, we first show state of the art results for the challenging cross-task problem, applied on Omniglot and ImageNet. Our results show that we can reconstruct semantic clusters with high accuracy. We then evaluate the performance of cross-domain transfer using images from the Office-31 and SVHN-MNIST tasks and present top accuracy on both datasets. Our approach doesn't explicitly deal with domain discrepancy. If we combine with a domain adaptation loss, it shows further improvement.
The authors propose a method for performing transfer learning and domain adaptation via a clustering approach. The primary contribution is the introduction of a Learnable Clustering Objective (LCO) that is trained on an auxiliary set of labeled data to correctly identify whether pairs of data belong to the same class. Once the LCO is trained, it is applied to the unlabeled target data and effectively serves to provide "soft labels" for whether or not pairs of target data belong to the same class. A separate model can then be trained to assign target data to clusters while satisfying these soft labels, thereby ensuring that clusters are made up of similar data points.
The proposed LCO is novel and seems sound, serving as a way to transfer the general knowledge of what a cluster is without requiring advance knowledge of the specific clusters of interest. The authors also demonstrate a variety of extensions, such as how to handle the case when the number of target categories is unknown, as well as how the model can make use of labeled source data in the setting where the source and target share the same task.
The way the method is presented is quite confusing, and required many more reads than normal to understand exactly what is going on. To point out one such problem, Section 4 introduces f, a network that classifies each data instance into one of k clusters. However, f seems to be mentioned by name only a few times, despite seeming like a crucial part of the method. Explaining how f is used to construct the CCN could help in clarifying exactly what role f plays in the final model. Likewise, the introduction of G during the explanation of the LCO is rather abrupt, and the intuition of what purpose G serves and why it must be learned from data is unclear. Additionally, because G is introduced alongside the LCO, I was initially misled into thinking that G was optimized to minimize the LCO. Further text explaining intuitively what G accomplishes (soft labels transferred from the auxiliary dataset to the target dataset), and perhaps a general diagram of which portions of the model are trained on which datasets (G is trained on A, the CCN is trained on T and optionally S'), would serve the method section greatly and provide a better overview of how the model works.
The experimental evaluation is very thorough, spanning a variety of tasks and settings. Strong results in multiple settings indicate that the proposed method is effective and generalizable. Further details are provided in a very comprehensive appendix, which provides a mix of discussion and analysis of the provided results. It would be nice to see some examples of the types of predictions and mistakes the model makes to further develop an intuition for how the model works. I'm also curious how well the model works if you do not make use of the labeled source data in the cross-domain setting, thereby mimicking the cross-task setup.
At times, the experimental details are a little unclear. Consistent use of the A, T, and S' dataset abbreviations would help. Also, the results section seems to switch between calling the method CCN and LCO interchangeably. Finally, a few of the experimental settings differ from their baselines in nontrivial ways. For the Office experiment, the LCO appears to be trained on ImageNet data.
While this seems similar in nature to initializing from a network pre-trained on ImageNet, it's worth noting that this requires one to have the entire ImageNet dataset on hand when training such a model, as opposed to other baselines which merely initialize weights and then fine-tune exclusively on the Office data. Similarly, the evaluation on SVHN-MNIST makes use of auxiliary Omniglot data, which makes the results hard to compare to the existing literature, since they generally do not use additional training data in this setting. In addition to the existing comparison, perhaps the authors can also validate a variant in which the auxiliary data is also drawn from the source so as to serve as a more direct comparison to the existing literature.
Overall, the paper seems to have both a novel contribution and strong technical merit. However, the presentation of the method is lacking, and makes it unnecessarily difficult to understand how the model is composed of its parts and how it is trained. I think a more careful presentation of the intuition behind the method and more consistent use of notation would greatly improve the quality of this submission.
=========================
Update after author rebuttal:
=========================
I have read the author's response and have looked at the changes to the manuscript. I am satisfied with the improvements to the paper and have changed my review to 'accept'.
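To make the "soft pairwise labels" idea concrete, here is one simple instantiation of a pairwise clustering objective of this kind (an illustrative variant rather than the paper's exact formulation, which should be taken from the paper itself):

    import numpy as np

    def pairwise_cluster_loss(P, S, eps=1e-8):
        # P: (N, K) soft cluster assignments from the clustering network f (rows sum to 1)
        # S: (N, N) pairwise "same class" probabilities predicted by the similarity network G
        same = P @ P.T                        # model's probability that i and j share a cluster
        same = np.clip(same, eps, 1 - eps)
        # in practice one would exclude the i == j pairs from the average
        return -(S * np.log(same) + (1 - S) * np.log(1 - same)).mean()

Pseudo-code at roughly this level, showing how f, G, and the CCN fit together, would address much of the presentational confusion described in the review.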
iclr_2018_B1n8LexRZ
Published as a conference paper at ICLR 2018 GENERALIZING HAMILTONIAN MONTE CARLO WITH NEURAL NETWORKS We present a general-purpose method to train Markov chain Monte Carlo kernels, parameterized by deep neural networks, that converge and mix quickly to their target distribution. Our method generalizes Hamiltonian Monte Carlo and is trained to maximize expected squared jumped distance, a proxy for mixing speed. We demonstrate large empirical gains on a collection of simple but challenging distributions, for instance achieving a 106× improvement in effective sample size in one case, and mixing when standard HMC makes no measurable progress in a second. Finally, we show quantitative and qualitative gains on a real-world task: latent-variable generative modeling. We release an open source TensorFlow implementation of the algorithm.
The paper introduces a non-volume-preserving generalization of HMC whose transitions are determined by a set of neural network functions. These functions are trained to maximize expected squared jump distance. This works because each variable (of the state space) is modified in turn, so that the resulting update is invertible, with a tractable transformation inspired by Dinh et al. (2016).
Overall, I believe this paper is of good quality, clearly and carefully written, and potentially accelerates mixing in a state-of-the-art MCMC method, HMC, in many practical cases. A few downsides are commented on below. The experimental section proves the usefulness of the method on a range of relevant test cases; in addition, an application to a latent variable model is provided in Sec. 5.2.
Fig. 1a presents results in terms of numbers of gradient evaluations, but I couldn't find much in the way of the computational cost of L2HMC in the paper. I can't see where the number "124x" in Sec. 5.1 stems from. As a user, I would be interested in the typical computational cost of both "MCMC sampler training" and MCMC sampler usage (inference?), compared to competing methods. This is admittedly hard to quantify objectively, but just an order of magnitude would be helpful for orientation.
Would it be relevant, in Sec. 5.1, to compare to other methods than just HMC, e.g., LAHMC?
I am missing an intuition for several things: eq. 7, and the time encoding defined in Appendix C.
Appendix Fig. 5: I cannot quite see how the caption claim is supported by the figure (only barely for VAE, but not for HMC).
The number "124x ESS" in Sec. 5.1 seems at odds with the number in the abstract, "50x".
# Minor errors
- sec1: "The sampler is trained to minimize a variation": should be maximize; also "as well as on a the real-world"
- sec3.2: "and 1/2 v^T v the kinetic": "energy" missing
- sec4: the acronym L2HMC is not expanded anywhere in the paper. The sentence "We will denote the complete augmented...p(d)" might be moved to after "from a uniform distribution" in the same paragraph. In the paragraph starting "We now update x":
  - specify for clarity: "the first update, which yields x'" / "the second update, which yields x''"
  - "only affects $x_{\bar{m}^t}$": should be $x'_{\bar{m}^t}$ (prime missing)
  - the syntax using the subscript m^t is confusing to read; wouldn't it be clearer to write this as a function, e.g., "mask(x', m^t)"?
  - inside zeta_2 and zeta_3, do you not mean $m^t$ and $\bar{m}^t$?
- sec5: add a reference for the first mention of "A NICE MC"
- Appendix A: "Let's" -> "Let"; eq. 12 should be x'' = ...
- Appendix C: space missing after "Section 5.1"
- Appendix D1: "In this section is presented": sounds odd
- Appendix D3: presumably this should consist of Figure 5? Maybe specify.
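For readers unfamiliar with the objective mentioned above: the standard expected squared jumped distance for a current state x, proposal x', and Metropolis-Hastings acceptance probability A(x' | x) is

    ESJD = E_{x ~ p, x' ~ q(.|x)} [ A(x' | x) || x' - x ||^2 ],

a tractable proxy for mixing speed, since larger accepted moves correlate with lower autocorrelation. The paper maximizes a variant of this quantity with respect to the neural network parameters defining the transition; the exact form of the training loss should be taken from the paper itself.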
iclr_2018_HJ3d2Ax0-
Workshop track - ICLR 2018 BENEFITS OF DEPTH FOR LONG-TERM MEMORY OF RECURRENT NETWORKS The key attribute that drives the unprecedented success of modern Recurrent Neural Networks (RNNs) on learning tasks which involve sequential data, is their ever-improving ability to model intricate long-term temporal dependencies. However, a well established measure of RNNs' long-term memory capacity is lacking, and thus formal understanding of their ability to correlate data throughout time is limited. Though depth efficiency in convolutional networks is well established by now, it does not suffice in order to account for the success of deep RNNs on inputs of varying lengths, and the need to address their 'time-series expressive power' arises. In this paper, we analyze the effect of depth on the ability of recurrent networks to express correlations ranging over long time-scales. To meet the above need, we introduce a measure of the information flow across time that can be supported by the network, referred to as the Start-End separation rank. Essentially, this measure reflects the distance of the function realized by the recurrent network from a function that models no interaction whatsoever between the beginning and end of the input sequence. We prove that deep recurrent networks support Start-End separation ranks which are exponentially higher than those supported by their shallow counterparts. Moreover, we show that the ability of deep recurrent networks to correlate different parts of the input sequence increases exponentially as the input sequence extends, while that of vanilla shallow recurrent networks does not adapt to the sequence length at all. Thus, we establish that depth brings forth an overwhelming advantage in the ability of recurrent networks to model long-term dependencies, and provide an exemplar of quantifying this key attribute which may be readily extended to other RNN architectures of interest, e.g. variants of LSTM networks. We obtain our results by considering a class of recurrent networks referred to as Recurrent Arithmetic Circuits (RACs), which merge the hidden state with the input via the Multiplicative Integration operation.
After reading the authors' rebuttal I increased my score from a 7 to a 6. I do think the paper would benefit from experimental results, but agree with the authors that the theoretical results are non-trivial and interesting on their own merit.
------------------------
The paper presents a theoretical analysis of depth in RNNs (technically a variant called RACs), i.e., stacking RNNs on top of one another, so that h_t^l (the hidden state at time t and layer l) is a function of h_t^{l-1} and h_{t-1}^{l}. The work is inspired by previous results for feed-forward nets and CNNs. However, what is unique to RNNs is their ability to model long-term dependencies across time. To analyze this specific property, the authors propose a concept called "start-end rank" that essentially models the richness of the dependency between two disjoint subsets of inputs. Specifically, let S = {1, . . . , T/2} and E = {T/2 + 1, . . . , T}. sep_{S,E}(y) models the dependence between these two sets of time points. Specifically, sep_{S,E}(y) = K means there exist g_s^k and g_e^k for k = 1...K such that y(x) = \sum_{k} g_s^k(x_S) g_e^k(x_E). Therefore sep_{S,E}(y) is the rank of a particular matricization of y (with respect to the partition S, E). If sep_{S,E} = 1 then it is rank 1 (and would correspond to independence if y(x) were a probability distribution). A higher rank would correspond to more dependence across time.
(Comment: if I understood the above correctly, I believe it would be easier to explain tensors/matricization first and then introduce separation rank, since that would make the explanation much clearer. Right now the authors explain separation rank first and then discuss tensors/matricization.)
Using this concept, the authors prove that deep recurrent networks can express functions that have exponentially higher start/end ranks than shallow RNNs.
I overall like the paper's theoretical results, but I have the following complaints:
(1) I have the same question as the other reviewer. Why is Theorem 1 not a function of L? Are the papers that prove similar theorems about ConvNets able to handle general L? What makes this more challenging? I feel that if comparing L=2 vs. L=3 is hard, the authors should be more up front about that in the introduction/abstract.
(2) I think the paper would have been stronger if the authors had provided some empirical results validating their claims.
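Stated a little more carefully (a standard definition, consistent with the reviewer's description above), the separation rank of y with respect to the partition (S, E) is

    sep_{S,E}(y) = min { K : y(x_1, ..., x_T) = \sum_{k=1}^{K} g_k(x_S) g'_k(x_E) },

i.e., the smallest number of product terms needed to write y as a sum of functions that each treat the first and second halves of the sequence separately. sep_{S,E}(y) = 1 means y factorizes across the two halves (no modeled start-end interaction), and the quantity equals the rank of the matricization of y with respect to (S, E), which is what makes the exponential lower bounds in the paper possible.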
iclr_2018_SkmiegW0b
We study the problem of building models that disentangle independent factors of variation. Such models encode features that can efficiently be used for classification and to transfer attributes between different images in image synthesis. As data we use a weakly labeled training set, where labels indicate what single factor has changed between two data samples, although the relative value of the change is unknown. This labeling is of particular interest as it may be readily available without annotation costs. We introduce an autoencoder model and train it through constraints on image pairs and triplets. We show the role of feature dimensionality and adversarial training theoretically and experimentally. We formally prove the existence of the reference ambiguity, which is inherently present in the disentangling task when weakly labeled data is used. The numerical value of a factor has different meaning in different reference frames. When the reference depends on other factors, transferring that factor becomes ambiguous. We demonstrate experimentally that the proposed model can successfully transfer attributes on several datasets, but show also cases when the reference ambiguity occurs.
The paper considers the challenges of disentangling factors of variation in images: for example, disentangling viewpoint from vehicle type in an image of a car. They identify a well-known problem, which they call "reference ambiguity", and show that in general, without further assumptions, one cannot tell apart two different factors of variation. They then go on to suggest an interesting AE+GAN architecture where the main novelty is the idea of taking triplets such that the first two instances vary in only one factor of variation, while the third instance varies in both factors relative to the pair. This is clever and allows them to try to disentangle the variation factors using a joint encoder-decoder architecture working on the triplet.
Pros:
1. Interesting use of constructed triplets.
2. Interesting use of a GAN on the artificial instance named x_{3 \oplus 1}.
Cons:
1. Lack of clarity: the paper is hard to follow at times. It's not entirely obvious how the theoretical part informs the practical part. See detailed comments below.
2. The theory addresses two widely recognized problems as if they're novel: "reference ambiguity" and the "shortcut problem". The second merely refers to the fact that unconstrained autoencoders will simply memorize the instance.
3. Some of the architectural choices (the one derived from the "shortcut problem") are barely explained or looked into.
Specific comments:
1. An important point regarding the reference ambiguity problem and eq. (2): a general bijective function mixing v and c would not have the two components as independent. The authors could have used this extremely important aspect of the generative process they posit in order to circumvent the problem of ambiguity. In fact, I suspect that this is what allows their method to succeed.
2. I think the intro could be made better if more concrete examples were given earlier on -- specifically the car-type/viewpoint example, along with noting what weak labels mean in that context.
3. In presenting autoencoders it is crucial to note that they are all built around the idea of compression. Otherwise, the perfect latent representation is z = x.
4. I would consider switching the order of Sections 2 and 3, so the reader is better grounded in what this paper is about before reading the related work.
5. In discussing attributes and "valid" features, I found the paper rather vague. An image has many attributes: the glint in the corner of a window, the hue of a leaf. The authors should be much more specific in this discussion and define explicitly and clearly what they mean when they use these terms.
6. In equation (5), should it be p(v_1, v_2)? Or are v_1 and v_2 assumed to be independent?
7. Under equation (5), the paper mentions an "autoencoder constraint". Such a constraint is not mentioned up to this point in the paper, if I'm not mistaken.
8. Also under equation (5): is this where the encoder requirements are defined? If so, please be more explicit about it. Also note that you should require c_1 \neq c_2.
9. In the proof of Proposition 1, there is discussion of N_c. N_c was mentioned before but never properly defined; the same goes for R_c and C^-1. These should be part of the proposition statement or defined formally. Currently they are only discussed ad hoc after equation (5).
10. In the proof of Proposition 1, what is f_c^-1? It is only defined later in the paper.
11. In general, what guarantees that f_c^-1 and f_v^-1 are well defined? Are f_c and f_v injective? Why?
12. Before explaining the training of the model, the task should be defined properly. What is the goal of the training?
13. In eq. (15) I am missing a term which addresses "the shortcut problem" as defined on the previous page.
14. The weak labels are never properly defined and are discussed in a vague manner. Please define what that term means in your context and what the weak labels were in each experiment.
15. In the conclusion, I would edit to say "our trained model works well on *several* datasets".
Minor comments:
Please use \citep when appropriate. Instead of "Generative Adversarial Nets Goodfellow et al. (2014)", you should have "Generative Adversarial Nets (Goodfellow et al., 2014)".
iclr_2018_HkMCybx0-
We introduce the "inverse square root linear unit" (ISRLU) to speed up learning in deep neural networks. ISRLU has better performance than ELU but has many of the same benefits. ISRLU and ELU have similar curves and characteristics. Both have negative values, allowing them to push mean unit activation closer to zero, and bring the normal gradient closer to the unit natural gradient, ensuring a noise-robust deactivation state, lessening the overfitting risk. The significant performance advantage of ISRLU on traditional CPUs also carries over to more efficient HW implementations on HW/SW codesign for CNNs/RNNs. In experiments with TensorFlow, ISRLU leads to faster learning and better generalization than ReLU on CNNs. This work also suggests a computationally efficient variant called the "inverse square root unit" (ISRU) which can be used for RNNs. Many RNNs use either long short-term memory (LSTM) or gated recurrent units (GRU), which are implemented with tanh and sigmoid activation functions. ISRU has less computational complexity but still has a similar curve to tanh and sigmoid.
Summary: - The paper proposes a new activation function that looks similar to ELU but is much cheaper by using the inverse square root function. Contributions: - The paper proposes a cheaper activation and validates it with an MNIST experiment. The paper also shows major speedup compared to ELU and TANH (unit-wise speedup). Pros: - The proposed function has similar behavior to ELU but is 4x cheaper. - The authors also refer us to faster ways to compute square root functions numerically, which can be of general interest to the community for efficient network designs in the future. - The paper is clearly written and key contributions are well presented. Cons: - Clearly, the proposed function is not faster than ReLU. In the introduction, the authors explain the motivation that ReLU needs centered activation (such as BN). But the authors also need to justify that ISRLU (or ELU) doesn’t need BN. In fact, a recent study of ELU-ResNet (Shah et al., 2016) finds that ELU without BN leads to gradient explosion. To my knowledge, BN (at least at training time) is much more expensive than the activation function itself, so the speedup gained from ISRLU may be killed by using BN in deeper networks on larger benchmarks. At inference time, all of ReLU, ELU, and ISRLU can fuse BN weights into convolution weights, so again ISRLU will not be faster than ReLU. The core question here is whether the smoothness and zero-centered property of ELU can buy us any win compared to ReLU. I couldn’t find an answer based on the results presented here. - The authors need to validate on larger datasets (e.g. CIFAR, if not ImageNet) so that their proposed method can be widely adopted. - The speedup is only measured on CPU. For practical usage, especially in computer vision, GPU speedup is needed to show an impact. Conclusion: - Based on the comments above, I recommend weak reject. References: - Shah, A., Shinde, S., Kadam, E., Shah, H., Shingade, S. Deep Residual Networks with Exponential Linear Unit. In Proceedings of the Third International Symposium on Computer Vision and the Internet (VisionNet'16).
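Neither the abstract nor the review writes out the functional forms under discussion, so here is a minimal Python sketch of the activations as I understand them from the ISRLU proposal; the closed forms and the default alpha below are my own reading rather than something stated in this text. The point of interest is that the negative branch needs only a multiply-add and a reciprocal square root, whereas ELU needs an exponential.

import numpy as np

def isru(x, alpha=1.0):
    # Inverse square root unit: a smooth, bounded curve similar to tanh,
    # computed with one multiply-add and one reciprocal square root.
    return x / np.sqrt(1.0 + alpha * x * x)

def isrlu(x, alpha=1.0):
    # Inverse square root linear unit: identity for x >= 0 and an ISRU-shaped
    # branch for x < 0 that saturates toward -1/sqrt(alpha), much like ELU.
    return np.where(x >= 0, x, x / np.sqrt(1.0 + alpha * x * x))

def elu(x, alpha=1.0):
    # Reference ELU for comparison; the exponential is the expensive part.
    return np.where(x >= 0, x, alpha * (np.exp(x) - 1.0))

x = np.linspace(-4.0, 4.0, 9)
print(np.round(isrlu(x), 3))
print(np.round(elu(x), 3))
print(np.round(isru(x), 3))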
iclr_2018_HyXNCZbCZ
We propose a novel hierarchical generative model with a simple Markovian structure and a corresponding inference model. Both the generative and inference model are trained using the adversarial learning paradigm. We demonstrate that the hierarchical structure supports the learning of progressively more abstract representations as well as providing semantically meaningful reconstructions with different levels of fidelity. Furthermore, we show that minimizing the Jensen-Shannon divergence between the generative and inference network is enough to minimize the reconstruction error. The resulting semantically meaningful hierarchical latent structure discovery is exemplified on the CelebA dataset. There, we show that the features learned by our model in an unsupervised way outperform the best handcrafted features. Furthermore, the extracted features remain competitive when compared to several recent deep supervised approaches on an attribute prediction task on CelebA. Finally, we leverage the model's inference network to achieve state-of-the-art performance on a semi-supervised variant of the MNIST digit classification task.
_________________________________________________________________________________________________________ I raise my rating on the condition that the authors will also address the minor concerns in the final version, please see details below. _________________________________________________________________________________________________________ This paper proposes to perform Adversarially Learned Inference (ALI) in a layer-wise manner. The idea is interesting, and the authors did a good job to describe high-level idea, and demonstrate one advantage of hierarchy: providing different levels reconstructions. However, the advantage of better reconstruction could be better demonstrated. Some major concerns should be clarified before publishing: (1) How did the authors implement p(x|z) and q(z|x), or p(z_l | z_{l+1}) and q(z_{l+1} | z_l )? Please provide the details, as this is key to the reconstruction issues of ALI. (2) Could the authors provide the pseudocode procedure of the proposed algorithm? In the current form of the writing, it is not clear what the HALI procedure is, whether (1) one discriminator is used to distinguish the concatenation of (x, z_1, ..., z_L), or (2) L discriminators are used to distinguish the concatenation of (z_l, z_{l+1}) at each layer, respectively? The above two points are important. If not correctly constructed, it might reveal potential flaws of the proposed technique. Since one of the major claims for HALI is to provide better reconstruction with higher fidelity than ALI. Could the authors provide quantitative results on MNIST and CIFAR to demonstrate this? The reconstruction issues have first been highlighted and theoretically analyzed in ALICE [*], and some remedy has been proposed to alleviate the issue. Quantitative comparison on MNIST and CIFAR are also conducted. Could the authors report numbers to compare with them (ALI and ALICE)? The 3rd paragraph in Introduction should be adjusted to correctly clarify details of algorithms, and reflect up-to-date literature. "One interesting feature highlighted in the original ALI work (Dumoulin et al., 2016) is that ... never explicitly trained to perform reconstruction, this can nevertheless be easily done...". Note that ALI can only perform reconstruction when the deterministic mapping is used, while ALI itself adopted the stochastic mapping. Further, the deterministic mapping is the major difference of BiGAN from ALI. Therefore, more rigorous way to phrase is that "the original ALI work with deterministic mappings", or "BiGAN" never explicitly trained to perform reconstruction, this can nevertheless be easily done... This tiny difference between deterministic/stochastic mappings makes major difference for the quality of reconstruction, as theoretically analyzed and experimentally compared in ALICE. In ALICE, the authors confirmed further source of poor reconstructions of ALI in practice. It would be better to reflect the non-identifiability issues raised by ALICE in Introduction, rather than hiding it in Future Work as "Although recent work designed to improve the stability of training in ALI does show some promise (Chunyuan Li, 2017), more work is needed on this front." Also, please fix the typo in reference as: [*] Chunyuan Li, Hao Liu, Changyou Chen, Yunchen Pu, Liqun Chen, Ricardo Henao and Lawrence Carin. ALICE: Towards understanding adversarial learning for joint distribution matching. In Advances in Neural Information Processing Systems (NIPS), 2017.
iclr_2018_BJJ9bz-0-
Robust real-world learning should benefit from both demonstrations and interaction with the environment. Current approaches to learning from demonstration and reward perform supervised learning on expert demonstration data and use reinforcement learning to further improve performance based on reward from the environment. These tasks have divergent losses which are difficult to jointly optimize; further, such methods can be very sensitive to noisy demonstrations. We propose a unified reinforcement learning algorithm, Normalized Actor-Critic (NAC), that effectively normalizes the Q-function, reducing the Q-values of actions unseen in the demonstration data. NAC learns an initial policy network from demonstration and refines the policy in a real environment. Crucially, both learning from demonstration and interactive refinement use exactly the same objective, unlike prior approaches that combine distinct supervised and reinforcement losses. This makes NAC robust to suboptimal demonstration data, since the method is not forced to mimic all of the examples in the dataset. We show that our unified reinforcement learning algorithm can learn robustly and outperform existing baselines when evaluated on several realistic driving games.
SUMMARY: The motivation for this work is to have an RL algorithm that can use imperfect demonstrations to accelerate learning. The paper proposes an actor-critic algorithm, called Normalized Actor-Critic (NAC), based on the entropy-regularized formulation of RL, which is defined by adding the entropy of the policy as an additional term in the reward function. Entropy-regularized formulation leads to nice relationships between the value function and the policy, and has been explored recently by many, including [Ziebart, 2010], [Schulman, 2017], [Nachum, 2017], and [Haarnoja, 2017]. The paper benefits from such a relationship and derives an actor-critic algorithm. Specifically, the paper only parametrizes the Q function, and computes the policy gradient using the relation between the policy and Q function (Appendix A.1). Through a set of experiments, the paper shows the effectiveness of the method. EVALUATION: I think exploring and understanding entropy-regularized RL algorithm is important. It is also important to be able to benefit from off-policy data. I also find the empirical results encouraging. But I have some concerns about this paper: - The derivations of the paper are unclear. - The relation with other recent work in entropy-regularized RL should be expanded. - The work is less about benefiting from demonstration data and more about using off-policy data. - The algorithm that performs well is not the one that was actually derived. * Unclear derivations: The derivations of Appendix A.1 is unclear. It makes it difficult to verify the derivations. To begin with, what is the loss function of which (9) and (10) are its gradients? To be more specific, the choices of \hat{Q} in (15) and \hat{V} in (19) are not clear. For example, just after (18) it is said that “\hat{Q} could be obtained through bootstrapping by R + gamma V_Q”. But if it is the case, shouldn’t we have a gradient of Q in (15) too? (or show that it can be ignored?) It appears that \hat{Q} and \hat{V} are parameterized independently from Q (which is a function of theta). Later in the paper they are estimated using a target network, but this is not specified in the derivations. The main problem boils down to the fact that the paper does not start from a loss function and compute all the gradients in a systematic way. Instead it starts from gradient terms, each of which seems to be from different papers, and then simplifies them. For example, the policy gradient in (8), which is further decomposed in Appendix A.1 as (15) and (16) and simplified, appears to be Eq. (50) of [Schulman et al., 2017] (https://arxiv.org/abs/1704.06440). In that paper we have Q_pi instead of \hat{Q} though. I suggest that the authors start from a loss function and clearly derive all necessary steps. * Unclear relation with other papers: What part of the derivations of this work are novel? Currently the novelty is not obvious. For example, having the gradient of both Q and V, as in (9), has been stated by [Haarnoja et al., 2017] (very similar formulation is developed in Appendix B of https://arxiv.org/abs/1702.08165). An algorithm that can work with off-policy data has also been developed by [Nachum, 2017] (in the form of a Bellman residual minimization algorithm, as opposed to this work which essentially uses a Fitted Q-Iteration algorithm as the critic). I think the paper could do a better job differentiating from those other papers. * The claim that this paper is about learning from demonstration is a bit questionable. 
The paper essentially introduces a method to use off-policy data, which is of course important, but does not cover the important scenario where we only have access to (state, action) pairs given by an expert. Here it appears from the description of Algorithm 1 that the transitions in the demonstration data have the same semantics as the interaction data, i.e., (s,a,r,s’). This makes it different from the work by [Kim et al., 2013], [Piot et al., 2014], and [Chemali et al., 2015], which do not require such a restriction on the demonstration data. * The paper mentions that to formalize the method as a policy gradient one, importance sampling should be used (the paragraph after (12)), but the performance of such a formulation is bad, as depicted in Figure 2. As a result, Algorithm 1 does not use importance sampling. This basically suggests that by ignoring the fact that the data is collected off-policy, and treating it as on-policy data, the agent might perform better. This is an interesting phenomenon and deserves further study, as currently doing the “wrong” thing is better than doing the “right” thing. I think a good paper should investigate this fact more.
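For readers unfamiliar with the entropy-regularized quantities the review keeps referring to, the sketch below shows the standard soft value/policy relations (V as a temperature-scaled log-sum-exp of Q, and the policy as the exponentiated advantage). These are the generic textbook identities, not the paper's particular NAC estimator or its gradient derivation.

import numpy as np

def soft_value(q, alpha=1.0):
    # Entropy-regularized state value: V(s) = alpha * log sum_a exp(Q(s,a)/alpha),
    # computed with the usual max-subtraction trick for numerical stability.
    m = q.max()
    return m + alpha * np.log(np.sum(np.exp((q - m) / alpha)))

def soft_policy(q, alpha=1.0):
    # Boltzmann policy implied by Q: pi(a|s) = exp((Q(s,a) - V(s)) / alpha).
    return np.exp((q - soft_value(q, alpha)) / alpha)

q = np.array([1.0, 2.0, 0.5])          # toy Q-values for one state, three actions
pi = soft_policy(q)
print(pi, pi.sum())                     # a proper distribution (sums to 1)
# The soft value approaches the greedy value max_a Q(s,a) as alpha -> 0.
print(soft_value(q, 1.0), soft_value(q, 0.01), q.max())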
iclr_2018_H18uzzWAZ
Profiling cellular phenotypes from microscopic imaging can provide meaningful biological information resulting from various factors affecting the cells. One motivating application is drug development: morphological cell features can be captured from images, from which similarities between different drugs applied at different dosages can be quantified. The general approach is to find a function mapping the images to an embedding space of manageable dimensionality whose geometry captures relevant features of the input images. An important known issue for such methods is separating relevant biological signal from nuisance variation. For example, the embedding vectors tend to be more correlated for cells that were cultured and imaged during the same week than for cells from a different week, despite having identical drug compounds applied in both cases. In this case, the particular batch a set of experiments were conducted in constitutes the domain of the data; an ideal set of image embeddings should contain only the relevant biological information (e.g. drug effects). We develop a general framework for adjusting the image embeddings in order to 'forget' domain-specific information while preserving relevant biological information. To do this, we minimize a loss function based on distances between marginal distributions (such as the Wasserstein distance) of embeddings across domains for each replicated treatment. For the dataset presented, the replicated treatment is the negative control. We find that for our transformed embeddings (1) the underlying geometric structure is not only preserved but the embeddings also carry improved biological signal (2) less domain-specific information is present.
The authors present a method that aims to remove domain-specific information while preserving the relevant biological information in biological data measured in different experiments or "batches". A network is trained to learn the transformations that minimize the Wasserstein distance between distributions. The Wasserstein distance is also called the "earth mover distance" and is traditionally formulated as the cost it takes for an optimal transport plan to move one distribution to another. In this paper they have a neural network compute the Wasserstein distance using a different formulation, the one used in Arjovsky et al. (2017), which finds a Lipschitz function f that shows the maximal difference when evaluated on samples from the two distributions. Here these functions are formulated as affine transforms of the data with parameters theta that are computed by a neural network. Results are examined mainly by looking at the first two PCA components of the data. The paper presents an interesting idea and is fairly well written. However I have a few concerns: 1. Most of the ideas presented in the paper rely on the works by Arjovsky et al. (2017) and Gulrajani et al. (2017). Some choices, which are presented in those papers, are not explained; for example, the gradient penalty, the choice of \lambda, and the choice of points for gradient computation. 2. The experimental results are not fully convincing; they simply compare the first two PC components on this Broad Bioimage benchmark collection. This section could be improved by demonstrating the approach on more datasets. 3. There is a lack of comparison to other methods such as Shaham et al. (2017). Why is using the earth mover distance better than an MMD-based distance? They only compare it to a method named CORAL and to Typical Variation Normalization (TVN). What about comparison to other batch normalization methods in biology such as SEURAT? 4. Why is the affine transform assumption valid in biology? There can definitely be non-linear effects that are different between batches, such as ion detection efficiency differences. 5. Only early stopping seems to constrain their model to be near identity. Doesn't this also prevent optimal results? How does this compare to the near-identity constraints in resnets in Shaham et al. (2017)?
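Since the review questions the gradient penalty, the choice of lambda, and the points at which gradients are computed, here is a hedged sketch of the standard Wasserstein critic objective with the gradient penalty of Gulrajani et al. (2017), in which the penalty is evaluated at random interpolates between real and generated samples. The linear critic, the toy data, and lam = 10 are illustrative placeholders, not the setup used in the paper under review.

import torch

def critic_loss_with_gp(critic, x_real, x_fake, lam=10.0):
    # Wasserstein critic objective; the penalty is evaluated at random
    # interpolates between real and generated samples, which answers
    # "which points are the gradients taken at".
    wass_estimate = critic(x_real).mean() - critic(x_fake).mean()

    eps = torch.rand(x_real.size(0), 1)                     # one mixing weight per sample
    x_hat = (eps * x_real + (1.0 - eps) * x_fake).requires_grad_(True)
    grads = torch.autograd.grad(critic(x_hat).sum(), x_hat, create_graph=True)[0]
    penalty = ((grads.norm(2, dim=1) - 1.0) ** 2).mean()

    # The critic maximizes the Wasserstein estimate, i.e. minimizes its negation,
    # plus lam times the penalty that softly enforces the 1-Lipschitz constraint.
    return -wass_estimate + lam * penalty

critic = torch.nn.Linear(2, 1)                              # toy critic on 2-D data
x_real = torch.randn(8, 2)
x_fake = torch.randn(8, 2) + 1.0
loss = critic_loss_with_gp(critic, x_real, x_fake)
loss.backward()
print(float(loss))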
iclr_2018_HyMTkQZAb
KRONECKER-FACTORED CURVATURE APPROXIMATIONS FOR RECURRENT NEURAL NETWORKS Kronecker-factor Approximate Curvature (Martens & Grosse, 2015) (K-FAC) is a 2nd-order optimization method which has been shown to give state-of-the-art performance on large-scale neural network optimization tasks (Ba et al., 2017). It is based on an approximation to the Fisher information matrix (FIM) that makes assumptions about the particular structure of the network and the way it is parameterized. The original K-FAC method was applicable only to fully-connected networks, although it has been recently extended by Grosse & Martens (2016) to handle convolutional networks as well. In this work we extend the method to handle RNNs by introducing a novel approximation to the FIM for RNNs. This approximation works by modelling the statistical structure between the gradient contributions at different time-steps using a chain-structured linear Gaussian graphical model, summing the various cross-moments, and computing the inverse in closed form. We demonstrate in experiments that our method significantly outperforms general purpose state-of-the-art optimizers like SGD with momentum and Adam on several challenging RNN training tasks.
This paper extends the Kronecker-factor Approximate Curvature (K-FAC) optimization method to the setting of recurrent neural networks. The K-FAC method is an approximate 2nd-order optimization method that builds a block diagonal approximation of the Fisher information matrix, where the block diagonal elements are Kronecker products of smaller matrices. In order to approximate the Fisher information matrix for RNNs, the authors assume that the derivative of the loss function with respect to each weight matrix at each time step is independent of the length of the sequence, that these derivatives are temporally homogeneous, that the input and derivatives of the output are independent across every point in time, and that either the one-step cross-covariance of these derivatives is symmetric or that the training sequences are effectively infinite in length. Based on these assumptions, the authors show that the Fisher information can be reduced into a form in which the derivatives of the weight matrices can be approximated by a linear Gaussian graphical model and in which the approximate 2nd order method can be efficiently carried out. The authors compare their method to SGD on two language modeling tasks and against Adam for learning differentiable neural computers. The paper is relatively clear, and the authors do a reasonable job of introducing related work of the original K-FAC algorithm as well as its extension to CNNs before systematically deriving their method for RNNs. The problem of extending the K-FAC algorithm is natural, and the steps taken in this paper seem natural yet also original and non-trivial. The main issue that I have with this paper is the lack of theoretical justification or even intuition for the many approximations carried out in the course of approximating the Fisher information matrix. In many instances, it seemed like these approximations were made purely for convenience and tractability without much regard for (even approximate) correctness. This quality of this paper would be greatly strengthened if it had some bounds on approximation error or even empirical results testing the validity of the assumptions in the paper. Moreover, the experiments do not demonstrate levels of statistical significance in the results, so it is difficult to assert the practical significance of this work. Specific comments and questions Page 2, "r is is". Typo. Page 2, "DV". I found the introduction of V without any explanation to be confusing. Page 2, "P_{y|x}(\theta)". The relation between P_{y|x}(\theta) and f(x,\theta) is never explained. Page 3, "common practice of computing the natural gradient as (F + \lambda I) \nabla h instead of F^{-1} \nabla h". I don't see how the former can serve as a replacement for the latter. Page 3, "approximate g and a as statistically independent". Even though K-FAC already exists, it would be good to explain why this assumption is reasonable, since similar assumptions are made for the work presented in this paper. Page 4, "This new approximation, called "KFC", is derived by assuming....". Same as previous comment. It would be good to briefly discuss why these assumptions are reasonable. Page 5, Independence of T and w_t's, temporal homogeneity of w_t's,, and independence between a_t's and g_t's. I can see why these are convenient assumptions, but why are they reasonable? Moreover, why is it further natural to assume that A and G are temporally homogeneous as well? Page 7, "But insofar as the w_t's ... 
encode the relevant information contained in these external variables, they should be approximately Markovian". I am not sure what this means. Page 7, "The linear-Gaussian assumption meanwhile is a more severe one to make, but it seems necessary for there to be any hope that the required expectations remain tractable". I am not sure that this is a good enough justification for such an idea, unless there are compelling approximation error bounds. Page 8, Option 1. In what situations is it reasonable to assume that V_1 is symmetric? Pages 8-9, Option 2. What is a good finite sample size in which the assumption that the training sequences are infinitely long is reasonable in practice? Can the error |\kappa(x) - \zeta_T(x)| be translated into a statement on the approximation error? Page 9, "V_1 = V_{1,0} = ...". Typos (that appear to have been caught by the authors already). Page 9, "The 2nd-order statistics ... are accumulated through an exponential moving average during training". How sensitive is the performance of this method to the decay rate of the exponential moving average? Page 10, "The additional computations required to get the approximate Fisher inverse from these statistics ... are performed asynchronously on the CPU's". I find it a bit unfair to compare SGD to K-FAC in terms of wall clock time without also using the extra CPU's for SGD as well (e.g. via Hogwild or synchronous parallel SGD). Page 10, "The hyperparameters of our approach...". What is the sensitivity of the experimental results to these hyperparameters? Moreover, how sensitive are the results to initialization? Page 10, "we found that each parameter update of our method required about 80% more wall-clock time than an SGD update". How much of this is attributed to the fact that the statistics are computed asynchronously? Pages 10-12, Experiments. There are no error bars in any of the plots, so it is impossible to ascertain the statistical significance of any of these results. Page 11: Figure 2. Where is the Adam batchsize 50 line in the left plot? Why did the Adam batchsize 200 line disappear halfway through the right plot?
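As background for why a Kronecker-factored approximation makes the inverse Fisher-vector product cheap at all, the small check below verifies the identity the K-FAC family relies on, namely (A kron G)^{-1} vec(V) = vec(G^{-1} V A^{-1}) for symmetric factors. This is a generic linear-algebra illustration, not the RNN-specific approximation proposed in the paper.

import numpy as np

rng = np.random.default_rng(0)

def random_spd(n):
    # Random symmetric positive-definite matrix, standing in for a covariance factor.
    m = rng.standard_normal((n, n))
    return m @ m.T + n * np.eye(n)

def vec(x):
    # Column-stacking vectorization, the convention under which
    # (A kron B) vec(X) = vec(B X A^T).
    return x.flatten(order="F")

A, G = random_spd(3), random_spd(4)          # "input" and "gradient" factors
V = rng.standard_normal((4, 3))              # a gradient reshaped as a matrix

# Naive route: build the full Kronecker product and solve a 12x12 system.
naive = np.linalg.solve(np.kron(A, G), vec(V))

# Factored route: only the small factors are ever inverted.
factored = vec(np.linalg.solve(G, V) @ np.linalg.inv(A))

print(np.allclose(naive, factored))          # True: the two routes agree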
iclr_2018_HJg1NTGZRZ
We present a novel optimization strategy for training neural networks which we call "BitNet". The parameters of neural networks are usually unconstrained and have a dynamic range dispersed over all real values. Our key idea is to limit the expressive power of the network by dynamically controlling the range and set of values that the parameters can take. We formulate this idea using a novel end-to-end approach that regularizes a typical classification loss function. Our regularizer is inspired by the Minimum Description Length (MDL) principle. For each layer of the network, our approach optimizes real-valued translation and scaling factors and integer-valued parameters (weights). We empirically compare BitNet to an equivalent unregularized model on the MNIST and CIFAR-10 datasets. We show that BitNet converges faster to a superior quality solution. Additionally, the resulting model has significant savings in memory due to the use of integer-valued parameters.
The paper proposes a technique for training quantized neural networks, where the precision (number of bits) varies per layer and is learned in an end-to-end fashion. The idea is to add two terms to the loss, one representing quantization error, and the other representing the number of discrete values the quantization can support (or alternatively the number of bits used). Updates are made to the parameter representing the # of bits via the sign of its gradient. Experiments are conducted using a LeNet-inspired architecture on MNIST and CIFAR10. Overall, the idea is interesting, as providing an end-to-end trainable technique for distributing the precision across layers of a network would indeed be quite useful. I have a few concerns: First, I find the discussion around the training methodology insufficient. Inherently, the objective is discontinuous since # of bits is a discrete parameter. This is worked around by updating the parameter using the sign of its gradient. This is assuming the local linear approximation given by the derivative is accurate enough one integer away; this may or may not be true, but it's not clear and there is little discussion of whether this is reasonable to assume. It's also difficult for me to understand how this interacts with the other terms in the objective (quantization error and loss). We'd like the number of bits parameter to trade off between accuracy (at least in terms of quantization error, and ideally overall loss as well) and precision. But it's not at all clear that the gradient of either the loss or the quantization error w.r.t. the number of bits will in general suggest increasing the number of bits (thus requiring the bit regularization term). This will clearly not be the case when the continuous weights coincide with the quantized values for the current bit setting. More generally, the direction of the gradient will be highly dependent on the specific setting of the current weights. It's unclear to me how effectively accuracy and precision are balanced by this training strategy, and there isn't any discussion of this point either. I would be less concerned about the above points if I found the experiments compelling. Unfortunately, although I am quite sympathetic to the argument that state of the art results or architectures aren't necessary for a paper of this kind, the results on MNIST and CIFAR10 are so poor that they give me some concern about how the training was performed and whether the results are meaningful. Performance on MNIST in the 7-11% test error range is comparable to a simple linear logistic regression model; for a CNN that is extremely bad. Similarly, 40% error on CIFAR10 is worse than what some very simple fully connected models can achieve. Overall, while I like the idea and think the goal is good, I think the motivation and discussion for the training methodology are insufficient, and the empirical work is concerning. I can't recommend acceptance.
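To make the accuracy/precision trade-off discussed above concrete, here is a generic sketch of uniform b-bit quantization of a weight tensor and the resulting quantization error. This is only a standard construction for illustration; BitNet additionally learns the per-layer translation/scaling factors and the bit-width itself, which is not reproduced here.

import numpy as np

def quantize_uniform(w, bits):
    # Map weights onto 2**bits evenly spaced levels between min(w) and max(w),
    # returning the dequantized weights and the integer codes.
    lo, hi = w.min(), w.max()
    levels = 2 ** bits - 1
    step = (hi - lo) / levels
    codes = np.round((w - lo) / step).astype(np.int64)
    return lo + codes * step, codes

rng = np.random.default_rng(0)
w = 0.1 * rng.standard_normal(10000)         # toy weight tensor

for bits in (2, 4, 8):
    w_q, _ = quantize_uniform(w, bits)
    # More bits give a smaller quantization error but cost more memory per weight;
    # a per-layer bit-width parameter is meant to trade these off.
    print(bits, float(np.mean((w - w_q) ** 2)))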
iclr_2018_HyjC5yWCW
Published as a conference paper at ICLR 2018 META-LEARNING AND UNIVERSALITY: DEEP REPRESENTATIONS AND GRADIENT DESCENT CAN APPROXIMATE ANY LEARNING ALGORITHM Learning to learn is a powerful paradigm for enabling models to learn from data more effectively and efficiently. A popular approach to meta-learning is to train a recurrent model to read in a training dataset as input and output the parameters of a learned model, or output predictions for new test inputs. Alternatively, a more recent approach to meta-learning aims to acquire deep representations that can be effectively fine-tuned, via standard gradient descent, to new tasks. In this paper, we consider the meta-learning problem from the perspective of universality, formalizing the notion of learning algorithm approximation and comparing the expressive power of the aforementioned recurrent models to the more recent approaches that embed gradient descent into the meta-learner. In particular, we seek to answer the following question: does deep representation combined with standard gradient descent have sufficient capacity to approximate any learning algorithm? We find that this is indeed true, and further find, in our experiments, that gradient-based meta-learning consistently leads to learning strategies that generalize more widely compared to those represented by recurrent models.
This paper studies the capacity of the model-agnostic meta-learning (MAML) framework as a universal learning algorithm approximator. Since a (supervised) learning algorithm can be interpreted as a map from a dataset and an input to an output, the authors define a universal learning algorithm approximator to be a universal function approximator over the set of functions that map a set of data points and an input to an output. The authors show constructively that there exists a neural network architecture for which the model learned through MAML can approximate any learning algorithm. The paper is for the most part clear, and the main result seems original and technically interesting. At the same time, it is not clear to me that this result is also practically significant. This is because the universal approximation result relies on a particular architecture that is not necessarily the design one would always use in MAML. This implies that MAML as typically used (including in the original paper by Finn et al, 2017a) is not necessarily a universal learning algorithm approximator, and this paper does not actually justify its empirical efficacy theoretically. For instance, the authors do not even use the architecture proposed in their proof in their experiments. This is in contrast to the classical universal function approximator results for feedforward neural networks, as a single hidden layer feedforward network is often among the family of architectures considered in the course of hyperparameter tuning. This distinction should be explicitly discussed in the paper. Moreover, the questions posed in the experimental results do not seem related to the theoretical result, which seems odd. Specific comments and questions: Page 4: "\hat{f}(\cdot; \theta') approximates f_{\text{target}}(x, y, x^*) up to arbitrary position". There seems to be an abuse of notation here as the first expression is a function and the second expression is a value. Page 4: "to show universality, we will construct a setting of the weight matrices that enables independent control of the information flow...". How does this differ from the classical UFA proofs? The relative technical merit of this paper would be more clear if this is properly discussed. Page 4: "\prod_{i=1}^N (W_i - \alpha \nabla_{W_i})". There seems to be a typo here: \nabla_{W_i} should be \nabla_{W_i} L. Page 7: "These error functions effectively lose information because simply looking at their gradient is insufficient to determine the label." It would be interesting the compare the efficacy of MAML on these error functions as compared to cross entropy and mean-squared error. Page 7: "(1) can a learner trained with MAML further improve from additional gradient steps when learning new tasks at test time...? (2) does the inductive bias of gradient descent enable better few-shot learning performance on tasks outside of the training distribution...?". These questions seem unrelated to the universal learning algorithm approximator result that constitutes the main part of the paper. If you're going to study these question empirically, why didn't you also try to investigate them theoretically (e.g. sample complexity and convergence of MAML)? A systematic and comprehensive analysis of these questions from both a theoretical and empirical perspective would have constituted a compelling paper on its own. Pages 7-8: Experiments. What are the architectures and hyperparameters used in the experiments, and how sensitive are the meta-learning algorithms to their choice? 
Page 8: "our experiments show that learning strategies acquired with MAML are more successful when faced with out-of-domain tasks compared to recurrent learners....we show that the representations acquired with MAML are highly resilient to overfitting". I'm not sure that such general claims are justified based on the experimental results in this paper. Generalizing to out-of-domain tasks is heavily dependent on the specific level and type of drift between the old and new distributions. These properties aren't studied at all in this work. POST AUTHOR REBUTTAL: After reading the response from the authors and seeing the updated draft, I have decided to upgrade my rating of the manuscript to a 6. The universal learning algorithm approximator result is a nice result, although I do not agree with the other reviewer that it is a "significant contribution to the theoretical understanding of meta-learning," which the authors have reinforced (although it can probably be considered a significant contribution to the theoretical understanding of MAML in particular). Expressivity of the model or algorithm is far from the main or most significant consideration in a machine learning problem, even in the standard supervised learning scenario. Questions pertaining to issues such as optimization and model selection are just as, if not more, important. These sorts of ideas are explored in the empirical part of the paper, but I did not find the actual experiments in this section to be very compelling. Still, I think the universal learning algorithm approximator result is sufficient on its own for the paper to be accepted.
iclr_2018_r1ZdKJ-0W
DEEP GAUSSIAN EMBEDDING OF GRAPHS: UNSUPERVISED INDUCTIVE LEARNING VIA RANKING Methods that learn representations of nodes in a graph play a critical role in network analysis since they enable many downstream learning tasks. We propose Graph2Gauss -an approach that can efficiently learn versatile node embeddings on large scale (attributed) graphs that show strong performance on tasks such as link prediction and node classification. Unlike most approaches that represent nodes as point vectors in a low-dimensional continuous space, we embed each node as a Gaussian distribution, allowing us to capture uncertainty about the representation. Furthermore, we propose an unsupervised method that handles inductive learning scenarios and is applicable to different types of graphs: plain/attributed, directed/undirected. By leveraging both the network structure and the associated node attributes, we are able to generalize to unseen nodes without additional training. To learn the embeddings we adopt a personalized ranking formulation w.r.t. the node distances that exploits the natural ordering of the nodes imposed by the network structure. Experiments on real world networks demonstrate the high performance of our approach, outperforming state-of-the-art network embedding methods on several different tasks. Additionally, we demonstrate the benefits of modeling uncertainty -by analyzing it we can estimate neighborhood diversity and detect the intrinsic latent dimensionality of a graph.
This paper proposes Graph2Gauss (G2G), a node embedding method that embeds nodes in attributed graphs (it can work w/o attributes as well) into Gaussian distributions rather than conventional latent point vectors. By doing so, G2G can reflect the uncertainty of a node's embedding. The authors then use these Gaussian distributions and neighborhood ranking constraints to obtain the final node embeddings. Experiments on link prediction and node classification showed improved performance over several strong embedding methods. Overall, the paper is well-written and the contributions are remarkable. The reason I am giving a less positive rating is that some statements are questionable and can severely affect the conclusions claimed in this paper, which therefore requires the authors' detailed response. I am certainly willing to change my rating if the authors clarify my questions. Major concern 1: Is the latent vector dimension L really the same for G2G and other compared methods? In the first paragraph of Section 4, it is stated that "in all experiments if the competing techniques use an embedding of dimensionality L, G2G’s embedding is actually only half of this dimensionality so that the overall number of ’parameters’ per node (mean vector + variance terms) matches L." This setting can be wrong since the degrees of freedom of an L-dim Gaussian distribution should be L + L(L-1)/2, where the first term corresponds to the mean and the second term corresponds to the covariance. If I understand it correctly, when any compared embedding method used an L-dim vector, the authors used the dimension of L/2. But this setting is wrong if one wants the overall number of ’parameters’ per node (mean vector + variance terms) to match L, as stated by the authors. Fixing L, the equivalent dimension L_G2G for G2G should be set such that L_G2G + L_G2G(L_G2G - 1)/2 = L, not 2*L_G2G = L. Since this setting is universal to the follow-up analysis and may severely degrade the performance of G2G due to fewer embedding dimensions, I hope the authors can clarify this point. Major concern 2: The claim on inductive learning. Inductive learning is one of the major contributions claimed in this paper. The authors claim G2G can learn an embedding of an unseen node solely based on its attributes. However, it is not clear why this can be done. In the learning stage of Sec. 3.3, the attributes do not seem to play a role in the energy function. Also, since no algorithm descriptions are available, it's not clear how using only an unseen node's attributes can yield a good embedding under the G2G framework (the same applies to Sec. 4.5). Moreover, how does it compare to directly using raw user attributes for these tasks? Minor concern/suggestions: The "similarity" measure in section 3.1 using KL divergence would better be rephrased as a "dissimilarity" measure. Otherwise, one has a similarity measure $Delta$ and wants it to increase as the hop distance k decreases (closer nodes are more similar). But the ranking constraints are somewhat counter-intuitive because you want $Delta$ to be small if nodes are closer. There is nothing wrong with the ranking condition, but rather an inconsistency in the use of a "similarity" measure for the KL divergence.
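To ground the discussion of KL-based (dis)similarities between Gaussian embeddings, here is a small sketch of the closed-form KL divergence between two diagonal-covariance Gaussians together with a square-exponential-style ranking penalty over a hop-ordered triplet. The loss form is in the spirit of energy-based ranking losses used in this line of work, but it is not necessarily Graph2Gauss's exact training objective; the toy means and variances are made up.

import numpy as np

def kl_diag_gauss(mu1, var1, mu2, var2):
    # KL( N(mu1, diag(var1)) || N(mu2, diag(var2)) ) in closed form;
    # note it is asymmetric, so it acts as a dissimilarity, not a metric.
    return 0.5 * np.sum(var1 / var2 + (mu2 - mu1) ** 2 / var2
                        - 1.0 + np.log(var2 / var1))

def ranking_energy(d_close, d_far):
    # Square-exponential ranking penalty: push the dissimilarity of the
    # lower-hop pair (d_close) toward 0 and the higher-hop pair (d_far) up,
    # so that d_close < d_far is encouraged.
    return d_close ** 2 + np.exp(-d_far)

mu_i, var_i = np.zeros(4), np.ones(4)
mu_j, var_j = np.full(4, 0.2), np.ones(4)          # a nearby neighbor
mu_k, var_k = np.full(4, 2.0), np.full(4, 0.5)     # a distant node

d_ij = kl_diag_gauss(mu_i, var_i, mu_j, var_j)
d_ik = kl_diag_gauss(mu_i, var_i, mu_k, var_k)
print(d_ij, d_ik, ranking_energy(d_ij, d_ik))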
iclr_2018_BJ0hF1Z0b
Published as a conference paper at ICLR 2018 LEARNING DIFFERENTIALLY PRIVATE RECURRENT LANGUAGE MODELS We demonstrate that it is possible to train large recurrent language models with user-level differential privacy guarantees with only a negligible cost in predictive accuracy. Our work builds on recent advances in the training of deep networks on user-partitioned data and privacy accounting for stochastic gradient descent. In particular, we add user-level privacy protection to the federated averaging algorithm, which makes "large step" updates from user-level data. Our work demonstrates that given a dataset with a sufficiently large number of users (a requirement easily met by even small internet-scale datasets), achieving differential privacy comes at the cost of increased computation, rather than in decreased utility as in most prior work. We find that our private LSTM language models are quantitatively and qualitatively similar to un-noised models when trained on a large dataset.
Summary of the paper ------------------------------- The authors propose to add 4 elements to the 'FederatedAveraging' algorithm to provide a user-level differential privacy guarantee. The impact of those 4 elements on the model's accuracy and privacy is then carefully analysed. Clarity, Significance and Correctness -------------------------------------------------- Clarity: Excellent Significance: I'm not familiar with the differential privacy literature, so I'll let more knowledgeable reviewers evaluate this point. Correctness: The paper is technically correct. Questions -------------- 1. Figure 1: Adding some noise to the updates could be viewed as some form of regularization, so I have trouble understanding why the models with noise are less efficient than the baseline. 2. Clipping is supposed to help with the exploding gradients problem. Do you have an idea why a low threshold hurts performance? Is it because it reduces the amplitude of the updates (and thus simply slows down the training)? 3. Is your method compatible with other optimizers, such as RMSprop or ADAM (which are commonly used to train RNNs)? Pros ------ 1. Nice extensions to FederatedAveraging to provide a privacy guarantee. 2. Strong experimental setup that analyses the proposed extensions in detail. 3. Experiments performed on public datasets. Cons ------- None Typos -------- 1. Section 2, paragraph 3: "is given in Figure 1" -> "is given in Algorithm 1" Note ------- Since I'm not familiar with the differential privacy literature, I'm flexible with my evaluation based on what other reviewers with more expertise have to say.
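To connect questions 1 and 2 to the mechanism at hand, the sketch below shows the two ingredients that turn federated averaging into a user-level private estimator: clipping each user's update to an L2 norm S and adding Gaussian noise scaled to S. It also makes the clipping question visible, since a very small S shrinks the aggregate itself rather than merely stabilizing it. This is a schematic of my own, not the paper's exact estimator or its moments-accountant bookkeeping.

import numpy as np

rng = np.random.default_rng(0)

def clip_update(delta, s):
    # Scale the user's update so its L2 norm is at most s (flat clipping).
    norm = np.linalg.norm(delta)
    return delta * min(1.0, s / (norm + 1e-12))

def private_average(user_updates, s, noise_multiplier):
    # Clip each per-user update, average, and add Gaussian noise whose scale
    # is tied to the per-user sensitivity s / n of the average.
    n = len(user_updates)
    clipped = [clip_update(d, s) for d in user_updates]
    avg = np.mean(clipped, axis=0)
    sigma = noise_multiplier * s / n
    return avg + rng.normal(0.0, sigma, size=avg.shape)

# Toy round: 100 users, 10-dimensional "model updates".
updates = [rng.standard_normal(10) for _ in range(100)]
for s in (0.1, 1.0, 10.0):
    agg = private_average(updates, s=s, noise_multiplier=1.0)
    # A very small clipping norm shrinks the aggregate itself (slower training),
    # while a large one requires more noise for the same privacy level.
    print(s, float(np.linalg.norm(agg)))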
iclr_2018_rkHywl-A-
Published as a conference paper at ICLR 2018 LEARNING ROBUST REWARDS WITH ADVERSARIAL INVERSE REINFORCEMENT LEARNING Reinforcement learning provides a powerful and general framework for decision making and control, but its application in practice is often hindered by the need for extensive feature and reward engineering. Deep reinforcement learning methods can remove the need for explicit engineering of policy or value features, but still require a manually specified reward function. Inverse reinforcement learning holds the promise of automatic reward acquisition, but has proven exceptionally difficult to apply to large, high-dimensional problems with unknown dynamics. In this work, we propose AIRL, a practical and scalable inverse reinforcement learning algorithm based on an adversarial reward learning formulation. We demonstrate that AIRL is able to recover reward functions that are robust to changes in dynamics, enabling us to learn policies even under significant variation in the environment seen during training. Our experiments show that AIRL greatly outperforms prior methods in these transfer settings.
SUMMARY: This paper considers the Inverse Reinforcement Learning (IRL) problem, and particularly suggests a method that obtains a reward function that is robust to the change of dynamics of the MDP. It starts by formulating the problem within the MaxEnt IRL framework of Ziebart et al. (2008). The challenge of MaxEnt IRL is the computation of a partition function. Guided Cost Learning (GCL) of Finn et al. (2016b) is an approximation of MaxEnt IRL that uses an adaptive importance sampler to estimate the partition function. This can be shown to be a form of GAN, obtained by using a specific discriminator [Finn et al. (2016a)]. If the discriminator directly works with trajectories tau, the result would be GAN-GCL. But this leads to high variance estimates, so the paper suggests using a single state-action formulation, in which the discriminator f_theta(s,a) is a function of (s,a) instead of the trajectory. The optimal solution of this discriminator is to have f(s,a) = A(s,a), the advantage function. The paper, however, argues that the advantage function is "entangled" with the dynamics, and this is undesirable. So it modifies the discriminator to learn a function that is a combination of two terms, one depending only on the state-action pair and the other depending only on the state, and has the form of a shaped reward transformation. EVALUATION: This is an interesting paper with good empirical results. As I am not very familiar with the work of Finn et al. (2016a) and Finn et al. (2016b), I have not verified the details of the derivations of this new paper very closely. That being said, I have some comments and questions: * The MaxEnt IRL formulation of this work, which assumes that p_theta(tau) is proportional to exp( r_theta (tau) ), comes from [Ziebart et al., 2008] and assumes deterministic dynamics. Ziebart's PhD dissertation [Ziebart, 2010] or the following paper show that the formulation is different for stochastic dynamics: Ziebart, Bagnell, Dey, “The Principle of Maximum Causal Entropy for Estimating Interacting Processes,” IEEE Trans. on IT, 2013. Is it still a reasonable thing to develop based on this earlier, inaccurate formulation? * I am not convinced about the argument of Appendix C that shows that AIRL recovers the reward up to constants. It is suggested that since the only items on both sides of the equation on top of p. 13 that depend on s' are h* and V, they should be equal. This would be true if s' could be chosen arbitrarily. But s' would be uniquely determined by s for deterministic dynamics. In that case, this conclusion is not obvious anymore. Consider the state space to be the integers 0, 1, 2, 3, … . Suppose the dynamics is that whenever we are at state s (which is an integer), at the next time step the state decreases toward 1, that is s' = phi(s,a) = s - 1; unless s = 0, in which case we just stay at s' = s = 0. This is independent of actions. Also define r(s) = 1/s for s>=1 and r(0) = 0. Suppose the discount factor is gamma = 1 (note that in Appendix B.1, the undiscounted case is studied, so I assume gamma = 1 is acceptable). With these choices, the value function V(s) = 1/s + 1/(s-1) + … + 1/1 = H_s, i.e., the s-th harmonic number. The advantage function is zero. So we can choose g*(s) = 0, and h*(s) = h*(s') = 1. This is in contrast to the conclusion that h*(s') = V(s') + c, which would be H_s + c, and g*(s) = r(s) = 1/s. (In fact, nothing is special about this choice of reward and dynamics.) Am I missing something obvious here? 
Also please discuss how ergodicity leads to the conclusion that spaces of s’ and s are identical. What does “space of s” mean? Do you mean the support of s? Please make the argument more rigorous. * Please make the argument of Section 5.1 more rigorous.
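For readers following the discussion of g, h, and the shaping term, here is a hedged sketch of the discriminator structure described above, with D built from f(s, a, s') = g(s, a) + gamma * h(s') - h(s) and the policy's log-probability. The linear "networks" and the sample inputs are placeholders for illustration, not the architectures used in the paper.

import numpy as np

rng = np.random.default_rng(0)
gamma = 0.99

# Placeholder "networks": a linear reward term g(s, a) and a linear potential h(s).
w_g = rng.standard_normal(6)   # weights over the concatenated (s, a), dims 4 + 2
w_h = rng.standard_normal(4)   # weights over s

def g(s, a):                   # learned reward term (state-action)
    return np.concatenate([s, a]) @ w_g

def h(s):                      # learned shaping potential (state only)
    return s @ w_h

def f(s, a, s_next):
    # Shaped-reward form of the discriminator logit: g(s,a) + gamma*h(s') - h(s).
    return g(s, a) + gamma * h(s_next) - h(s)

def discriminator(s, a, s_next, log_pi):
    # D = exp(f) / (exp(f) + pi(a|s)), which is a sigmoid of (f - log pi).
    return 1.0 / (1.0 + np.exp(-(f(s, a, s_next) - log_pi)))

s, a, s_next = rng.standard_normal(4), rng.standard_normal(2), rng.standard_normal(4)
log_pi = -1.2                  # log-probability of a under the current policy
print(discriminator(s, a, s_next, log_pi))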
iclr_2018_ByhthReRb
Many goal-oriented dialog tasks, especially ones in which the dialog system has to interact with external knowledge sources such as databases, have to handle a large number of Named Entities (NEs). There are at least two challenges in handling NEs using neural methods in such settings: individual NEs may occur only rarely making it hard to learn good representations of them, and many of the Out Of Vocabulary words that occur during test time may be NEs. Thus, the need to interact well with these NEs has emerged as a serious challenge to building neural methods for goal-oriented dialog tasks. In this paper, we propose a new neural method for this problem, and present empirical evaluations on a structured Question Answering task and three related goal-oriented dialog tasks that show that our proposed method can be effective in interacting with NEs in these settings.
The paper addresses the task of dealing with named entities in goal-oriented dialog systems. Named entities, and rare words in general, are indeed troublesome since adding them to the dictionary is expensive, replacing them with coarse labels (ne_loc, unk) loses information, and so on. The proposed solution is to extend neural dialog models by introducing a named entity table, instantiated on the fly, where the keys are distributed representations of the dialog context and the values are the named entities themselves. The approach is applied to settings involving interacting with a database, and a mechanism for handling the interaction is proposed. The resulting model is illustrated on a few goal-oriented dialog tasks. I found the paper difficult to read. The concrete mappings used to create the NE keys and attention keys are missing. Providing more structure to the text, as opposed to long, wordy paragraphs, would also be useful. Here are some specific questions: 1. How are the keys generated? What are the functions used? Does the "knowledge of the current user utterance" include the word itself? The authors should include the exact model specification, including for the HRED model. 2. According to the description, referring to an existing named entity must be done by "generating a key to match the keys in the NE table and then retrieve the corresponding value and use it". Is there a guarantee that the same named entity, appearing later in the dialog, will be given the same key? Or are the keys for already found entities retrieved directly, by value? 3. In the decoding phase, how does the system decide whether to query the DB? 4. How is the model trained? In its current form, it's not clear how the proposed approach tackles the shortcomings mentioned in the introduction. Furthermore, while the highlighted contribution is the named entity table, it is always used in conjunction with the database approach. This raises the question of whether the named entity table can only work in this context. For the structured QA task, there are 400 training examples, and 100 named entities. This means that the number of training examples per named entity is very small. Is that correct? If yes, then it's not very surprising that adding the named entities to the vocabulary leads to overfitting. Have you compared with using random embeddings for the named entities? Typos: page 2, second-to-last paragraph: firs -> first, page 7, second to last paragraph: and and -> and
iclr_2018_B1tExikAW
Robustness and security of machine learning (ML) systems are intertwined, wherein a non-robust ML system (classifiers, regressors, etc.) can be subject to attacks using a wide variety of exploits. With the advent of scalable deep learning methodologies, a lot of emphasis has been put on the robustness of supervised, unsupervised and reinforcement learning algorithms. Here, we study the robustness of the latent space of a deep variational autoencoder (dVAE), an unsupervised generative framework, to show that it is indeed possible to perturb the latent space, flip the class predictions and keep the classification probability approximately equal before and after an attack. This means that an agent that looks at the outputs of a decoder would remain oblivious to an attack.
This paper is concerned with both security and machine learning. Assuming that data is encoded, transmitted, and decoded using a VAE, the paper proposes a man-in-the-middle attack that alters the VAE encoding of the input data so that the decoded output will be misclassified. The objectives are to: 1) fool the autoencoder; the classification output of the autoencoder is different from the actual class of the input; 2) make a minimal change in the middle so that the attack is not detectable. This paper is concerned with both security and machine learning, but there are no clear contributions to either field. From the machine learning perspective, the proposed "attacking" method is standard without any technical novelty. From the security perspective, the scenarios are too simplistic. The encoding-decoding mechanism being attacked is too simple without any security enhancement. This is an unrealistic scenario. For applications with security concerns, there should have been methods to guard against man-in-the-middle attacks, and the paper should have at least considered some of them. Without considering state-of-the-art security defense mechanisms, it is difficult to judge the contribution of the paper to the security community. I am not a security expert, but I doubt that the proposed method is formulated based on well-founded security concepts and ideas. For example, what are the necessary and sufficient conditions for an attacking method to be undetectable? Are the criteria about the magnitude of epsilon given in Section 3.3 necessary and sufficient? Is there any reference for them? Why do we require the correspondence between the classification confidence of transformed and original data? Would it be enough to match the DISTRIBUTION of the confidence?
iclr_2018_HJ_aoCyRZ
SPECTRALNET: SPECTRAL CLUSTERING USING DEEP NEURAL NETWORKS Spectral clustering is a leading and popular technique in unsupervised data analysis. Two of its major limitations are scalability and generalization of the spectral embedding (i.e., out-of-sample-extension). In this paper we introduce a deep learning approach to spectral clustering that overcomes the above shortcomings. Our network, which we call SpectralNet, learns a map that embeds input data points into the eigenspace of their associated graph Laplacian matrix and subsequently clusters them. We train SpectralNet using a procedure that involves constrained stochastic optimization. Stochastic optimization allows it to scale to large datasets, while the constraints, which are implemented using a specialpurpose output layer, allow us to keep the network output orthogonal. Moreover, the map learned by SpectralNet naturally generalizes the spectral embedding to unseen data points. To further improve the quality of the clustering, we replace the standard pairwise Gaussian affinities with affinities learned from the given unlabeled data using a Siamese network. Additional improvement of the resulting clustering can be achieved by applying the network to code representations produced, e.g., by standard autoencoders. Our end-to-end learning procedure is fully unsupervised. In addition, we apply VC dimension theory to derive a lower bound on the size of SpectralNet. State-of-the-art clustering results are reported on the Reuters dataset. Our implementation is publicly available at https://github.com/kstant0725/SpectralNet.
PAPER SUMMARY
This paper aims to address two limitations of spectral clustering: its scalability to large datasets and its generalizability to new samples. The proposed solution is based on designing a neural network called SpectralNet that maps the input data to the eigenspace of the graph Laplacian and finds an orthogonal basis for this eigenspace. The network is trained by alternating between orthogonalization and gradient descent steps, where scalability is achieved by using a stochastic optimization scheme that, instead of computing an eigendecomposition of the entire data (as in vanilla spectral clustering), uses a Cholesky decomposition of the mini-batch to orthogonalize the output. The method can also handle out-of-sample data by applying the learned embedding function to new data. Experiments on the MNIST handwritten digit database and the Reuters document database demonstrate the effectiveness of the proposed SpectralNet.
COMMENTS
1) I find that the output layer (i.e., the orthogonalization layer) is not well justified. In principle, different batches require different weights on the output layer. Although the authors observe empirically that orthogonalization weights are roughly shared across different batches, the paper lacks a convincing argument for why this can happen. Moreover, it is not clear why an output layer designed to orthogonalize batches from the training set would also orthogonalize batches from the test set.
2) One claimed contribution of this work is that it extends spectral clustering to large-scale data. However, the paper could have commented more on what makes spectral clustering not scalable, and how the method in this paper addresses that. The authors did mention that spectral clustering requires computing eigenvectors for large matrices, which is prohibitive. However, this argument is not entirely true, as eigendecomposition for large sparse matrices can be carried out efficiently by tools such as ARPACK. On the other hand, computing the nearest-neighbor affinity or Gaussian affinity has O(N^2) complexity, which could be the bottleneck of computation for spectral clustering on large-scale data. But this issue can be addressed using approximate nearest neighbors obtained, e.g., via hashing. Overall, the paper compares only to vanilla spectral clustering, which is not representative of the state of the art. The paper should do an analysis of the computational complexity of the proposed method and compare it to the computational complexity of both vanilla as well as scalable spectral clustering methods to demonstrate that the proposed approach is more scalable than the state of the art.
3) Continuing with the point above, an experimental comparison with prior work on large-scale spectral clustering (see, e.g., [a] and the references therein) is missing. In particular, the result of spectral clustering on the Reuters database is not reported, but one could use other scalable versions of spectral clustering as a baseline.
4) Another benefit of the proposed method is that it can handle out-of-sample data. However, the evaluation of this benefit in the experiments is rather limited. In reporting the performance on out-of-sample data, there is no other baseline to compare with. One could at least compare with the following baseline: apply k-means to the training data in input space, and assign each test point to the nearest centroid.
5) The reason for using an autoencoder to extract features is unclear. In subspace clustering, it has been observed that features extracted from a scattering transform network [b] can significantly improve clustering performance; see, e.g., [c], where all methods have >85% accuracy on MNIST. The methods in [c] are also tested on larger datasets.
[a] Choromanska et al., Fast Spectral Clustering via the Nystrom Method, International Conference on Algorithmic Learning Theory, 2013
[b] Bruna and Mallat, Invariant Scattering Convolution Networks, arXiv 2012
[c] You et al., Oracle Based Active Set Algorithm for Scalable Elastic Net Subspace Clustering, CVPR 2016
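To make the discussion in comment 1 concrete, here is a minimal numpy sketch of the batch-orthogonalization idea as I understand it (my own illustration, not the authors' code; the variable names and the exact scaling are assumptions): the output-layer weights are recomputed from the current mini-batch so that the transformed outputs are orthonormal on that batch.

    import numpy as np

    def batch_orthogonalization_weights(Y):
        """Y: (m, k) outputs of the last trainable layer for one mini-batch.
        Returns W such that the columns of Y @ W are orthonormal on this batch."""
        m = Y.shape[0]
        L = np.linalg.cholesky(Y.T @ Y / m)   # (1/m) Y^T Y = L L^T
        return np.linalg.inv(L).T             # W = (L^{-1})^T

    # Usage: the orthogonalized embedding for the batch.
    Y = np.random.randn(128, 4)               # stand-in for network outputs
    W = batch_orthogonalization_weights(Y)
    Y_tilde = Y @ W
    assert np.allclose(Y_tilde.T @ Y_tilde / Y.shape[0], np.eye(4), atol=1e-6)

The sketch makes the concern in comment 1 explicit: W is a function of the particular batch Y, so sharing it across batches (and across train/test) needs a separate argument.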
iclr_2018_r1VVsebAZ
Published as a conference paper at ICLR 2018 SYNTHESIZING REALISTIC NEURAL POPULATION ACTIVITY PATTERNS USING GENERATIVE ADVERSARIAL NETWORKS The ability to synthesize realistic patterns of neural activity is crucial for studying neural information processing. Here we used the Generative Adversarial Networks (GANs) framework to simulate the concerted activity of a population of neurons. We adapted the Wasserstein-GAN variant to facilitate the generation of unconstrained neural population activity patterns while still benefiting from parameter sharing in the temporal domain. We demonstrate that our proposed GAN, which we termed Spike-GAN, generates spike trains that accurately match the first- and second-order statistics of datasets of tens of neurons and also approximates well their higher-order statistics. We applied Spike-GAN to a real dataset recorded from salamander retina and showed that it performs as well as state-of-the-art approaches based on the maximum entropy and the dichotomized Gaussian frameworks. Importantly, Spike-GAN does not require specifying a priori the statistics to be matched by the model, and so constitutes a more flexible method than these alternative approaches. Finally, we show how to exploit a trained Spike-GAN to construct 'importance maps' to detect the most relevant statistical structures present in a spike train. Spike-GAN provides a powerful, easy-to-use technique for generating realistic spiking neural activity and for describing the most relevant features of the large-scale neural population recordings studied in modern systems neuroscience.
[Summary of paper] The paper presents a method for simulating spike trains from populations of neurons which match empirically measured multi-neuron recordings. They set up a Wasserstein-GAN and train it on both synthetic and real multi-neuron recordings, using data from the salamander retina. They find that their method (Spike-GAN) can produce spike trains that visually look like the original data, and whose low-order statistics (firing rates, correlations, time-lagged correlations, total sum of population activity) match those of the original data. They emphasize that their network architecture is 'semi-convolutional', i.e., convolutional in time but not across neurons. Finally, they suggest a way to analyse the fitted networks in order to gain insights into what the 'relevant' neural features are, and illustrate it on synthetic data into which they embedded these features.
[Originality] This paper falls into the category of papers that do a next obvious thing ("GANs have not been applied to population spike trains yet"), and which do it pretty well: If one wants to create simulated neural activity data which matches experimentally observed data, then this method indeed seems to do that. As far as I know, this would be the first peer-reviewed application of GANs to multi-neuron recordings of neural data (but see https://arxiv.org/abs/1707.04582 for an arxiv paper, not cited here -- it should at least be discussed). On a technical level, there is very little to no innovation here -- while the authors emphasise their 'semi-convolutional' network architecture, this is obviously the right architecture to use for multivariate time-series data, and not a major technical novelty in itself. Therefore, the paper should really be evaluated as an 'application' paper, and be assessed in terms of i) how important the application is, ii) how clearly it is presented, and iii) how convincing the results are relative to the state of the art.
i) [Importance of problem, potential significance] Finding statistical models for modelling and simulating population spike trains is a topic which is extensively studied in computational neuroscience, predominantly using model-based approaches such as MaxEnt models, GLMs or latent variable models. These models are typically simple and restricted, and certainly fall short of capturing the full complexity of neural data. Thus, better and more flexible solutions for this problem would certainly be very welcome, and would have an immediate impact in this community. However, I think that the approach based on GANs actually has two shortcomings which are not stated by the authors, and which possibly limit the impact of the method: First, statistical models of neural spike trains are often used to compute probabilities, e.g. for decoding analyses -- this is difficult or impossible for GANs. Second, one most often does not want to simulate data which match a specific recording, but rather which have specified statistics (e.g. firing rates and correlations) -- the method here is based on fitting a particular dataset, and it is actually unclear to me when that will be useful.
ii) [Clarity] The methods are presented and explained clearly and cleanly. In my view, too much emphasis is given to highlighting the 'semi-convolutional' network, and, conversely, practical issues (exact architectures, cost of training) should be explained more clearly, possibly in an appendix. Similarly, the method would benefit from the authors releasing their code.
iii) [Quality, advance over previous methods] The authors discuss several methods for simulating spike trains in the introduction. In their empirical comparisons, however, they completely focus on a particular model class (maximum entropy models, ME) which they label the 'state-of-the-art'. This label is misleading -- ME models are but one of several approaches to modelling neural spike trains, with different models having different advantages and limitations (there is no benchmark which can be used to rank them...). In particular, the only 'gain' of the GAN over ME models in the results comes from the ability of the GAN to match temporal statistics. Given that the ME models used by the authors are blind to temporal correlations, this is, of course (and as pointed out by the authors), hardly surprising. How does the GAN approach fare against alternative models which do take temporal statistics into account, e.g. GLMs, or simple moment-based methods, e.g. Krumin et al. 2009, Lyamzin 2010, Gutnisky et al. 2010 -- setting these up would be simple, and it would provide a non-trivial baseline for the ability of Spike-GAN to outperform at least these models. While it is true that GANs are much more expressive than the model-based approaches used in neuroscience, a clear demonstration would have been useful.
Minor comments:
- p.3: The abbreviation "1D-DCGAN" is never spelled out.
- p.3: The architecture of Spike-GAN is never explicitly given.
- p.3: (Sec. 2.2) Statistic 2) "average time course across activity patterns" is unclear to me -- how does one select the activity patterns over which to average? Moreover, later figures do not seem to use this statistic.
- p.4: "introduced correlations between randomly selected pairs" -- How many such pairs were formed?
- p.7 (just above Discussion): At the beginning of this section, and for Figs. 4A,B, the text suggests that packets fire spontaneously with a given probability. For Figs. 4C-E, a particular packet responds to a particular input. Is then the neuron population used in these figures different from the one in Figs. 4A,B? How did the authors ensure that a particular set of neurons respond to their stimulus as a packet? What stimulus did they use?
- p.8 (Fig. 4E): Are the eight neurons with higher importance those corresponding to the packet? This is insinuated but not stated.
- p.12 (Appendix A):
+ The authors do not mention how they produced their "ground truth" data. (What was its firing rate? Did it include correlations? A refractory period?)
+ Generating samples from the trained Spike-GAN is ostensibly cheap. Hence it is unclear why the authors did not produce a large enough number of samples in order to obtain a 'numerical probability', just as they did for the ground truth data.
+ Fig. S1B: The figure shows that every sample has the same empirical frequency. This indicates a lack of statistical power rather than any correspondence between the theoretical and empirical probabilities. This undermines the argument in the second paragraph of p.12. On the other hand, if the authors did approximate numerical probabilities for the Spike-GAN, this argument would no longer be required.
- p.13 Fig. S1A,B: the abscissas mention "frequency", while the ordinates mention "probability".
- p.25 Fig. S4: This figure suggests that the first layer of the Spike-GAN critic sometimes recognizes the packet patterns in the data. However, to know whether this is true, we would need to compare this to a representation of the neurons reordered in the same way and identified by packet. I.e., one expects something like Fig. 4A, with the packets lining up with the recovered filters when the neurons are ordered the same way.
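Since much of the evaluation above revolves around matching low-order statistics, a small numpy sketch of how such statistics can be computed from a binary spike array may be useful (my own illustration; the bin size and the exact set of statistics used in the paper are assumptions):

    import numpy as np

    def spike_statistics(X):
        """X: binary array of shape (n_samples, n_neurons, n_bins)."""
        firing_rates = X.mean(axis=(0, 2))                       # per-neuron mean activity
        counts = X.sum(axis=2)                                   # spike counts per sample
        pairwise_corr = np.corrcoef(counts.T)                    # (n_neurons, n_neurons)
        pop_activity = X.sum(axis=1)                             # summed activity per time bin
        k_stats = np.bincount(pop_activity.ravel().astype(int))  # distribution of synchrony
        return firing_rates, pairwise_corr, k_stats

    rates, corr, k = spike_statistics(np.random.binomial(1, 0.1, size=(500, 16, 64)))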
iclr_2018_BkVf1AeAZ
We propose a method, called Label Embedding Network, which can learn label representations (label embeddings) during the training process of deep networks. With the proposed method, the label embedding is adaptively and automatically learned through backpropagation. The original loss function over one-hot targets is converted into a new loss function with soft distributions, such that the originally unrelated labels have continuous interactions with each other during the training process. As a result, the trained model can achieve substantially higher accuracy and faster convergence. Experimental results on competitive tasks demonstrate the effectiveness of the proposed method, and the learned label embeddings are reasonable and interpretable. The proposed method achieves results comparable to or even better than the state-of-the-art systems.
The paper proposes to add an embedding layer for labels that constrains normal classifiers in order to find label representations that are semantically consistent. The approach is then evaluated on various image and text tasks.
The description of the model is laborious and hard to follow. Figure 1 helps but is only referred to at the end of the description (at the end of section 2.1), which instead explains each step without the big picture and loses the reader with confusing notation. For instance, it only became clear at the end of the section that E was learned.
One of the motivations behind the model is to force label representations to be in a semantic space (where two labels with similar meanings would be nearby). The assumption given in the introduction is that softmax would not yield such a representation, but nowhere in the paper is this assumption verified. I believe that using cross-entropy with softmax should also push semantically similar labels to be nearby in the weight space entering the softmax. This should at least be verified and compared appropriately.
Another motivation of the paper is that targets are given as 1s or 0s while soft targets should work better. I believe this is true, but there is a lot of prior work on this, such as adding a temperature to the softmax, or using distillation, etc. None of these are discussed appropriately in the paper.
Section 2.2 describes a way to compress the label embedding representation, but it is not clear if this is actually used in the experiments. h is never discussed after section 2.2.
Experiments on known datasets are interesting, but none of the results are competitive with current state-of-the-art results (SOTA), despite what is said in Appendix D. For instance, one can find SOTA results for CIFAR100 around 16% and for CIFAR10 around 3%. Similarly, one can find SOTA results for IWSLT2015 around 28 BLEU. It can be fine to not be SOTA as long as it is acknowledged and discussed appropriately.
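To illustrate the kind of soft-target baseline mentioned above (temperature-scaled softmax, as in distillation), a minimal sketch -- mine, not the authors' method -- is:

    import numpy as np

    def soft_targets(teacher_logits, temperature=2.0):
        """Temperature-scaled softmax: higher temperature spreads mass over similar labels."""
        z = teacher_logits / temperature
        z = z - z.max()                 # numerical stability
        p = np.exp(z)
        return p / p.sum()

    print(soft_targets(np.array([5.0, 4.5, 1.0]), temperature=1.0))
    print(soft_targets(np.array([5.0, 4.5, 1.0]), temperature=4.0))  # noticeably softer

A comparison against this one-line baseline would directly test whether the learned label embedding adds anything beyond generic target softening.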
iclr_2018_ByYPLJA6W
We introduce our Distribution Regression Network (DRN) which performs regression from input probability distributions to output probability distributions. Compared to existing methods, DRN learns with fewer model parameters and easily extends to multiple input and multiple output distributions. On synthetic and real-world datasets, DRN performs similarly or better than the state-of-the-art. Furthermore, DRN generalizes the conventional multilayer perceptron (MLP). In the framework of MLP, each node encodes a real number, whereas in DRN, each node encodes a probability distribution.
Summary:
This paper presents a new network architecture for learning a regression of probability distributions. The distribution output from a given node is defined in terms of a learned conditional probability function, and the output distributions of its input nodes. The conditional probability function is an unnormalized distribution with the same form as the Boltzmann distribution, and distributions are approximated from point estimates by discretizing the finite support into predefined equal-sized bins. By letting the conditional distribution between nodes be unnormalized, and using an energy function that incorporates child nodes independently, the approach admits efficient computation that does not need to model the interaction between the distributions output by nodes at a given level. Under these dynamics and discretization, the chain rule can be used to derive a matrix of gradients at each node that denotes the derivative of the discretized output distribution with respect to the current node's discretized distribution. These gradients are in turn used to calculate updates for the network parameters with respect to the Jensen-Shannon divergence between the predicted distribution and a target distribution.
The approach is evaluated on three tasks, two synthetic and one real world. The baselines are the state-of-the-art triple basis estimator (3BE) or a standard MLP that represents the output distribution using a softmax over quantiles. On both of the synthetic tasks --- which involve predicting Gaussians --- the proposed approach can fit the data reasonably using far fewer parameters than the baselines, although 3BE does achieve better overall performance. On a real-world task that involves predicting a distribution of future stock market prices from multiple input stock market distributions, the proposed approach significantly outperforms both baselines. However, this experiment uses 3BE outside of its intended use case --- which is for a single input distribution --- so it's not entirely clear how well the very simple proposed model is doing.
Notes to authors:
I'm not familiar with 3BE but the fact that it is used outside of its intended use case for the stock data is worrying. How does 3BE perform at predicting the FTSE distribution at time t + k from the FTSE distribution at time t only? Do the multiple input distributions actually help?
You use a kernel density estimate with a Gaussian kernel function to estimate the stock market pdf, but then you apply your network directly to this estimate. What would happen if you built more complex networks using the kernel values themselves as inputs? Could you also run experiments on the real-world datasets used by the 3BE paper?
What is the structure of the DRN that uses > 10^3 parameters (from Fig. 4)? The width of the network is bounded by the two input distributions, so is this network just incredibly deep? Also, is it reasonable to assume that both the DRN and MLP are overfitting the toy task when they have access to an order of magnitude more parameters than datapoints?
It would be nice if section 2.4 were expanded to actually define the cost gradients for the network parameters, either inline or in an appendix.
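For concreteness, a minimal numpy sketch of the discretization and Jensen-Shannon loss described in the summary (my own illustration; the bin count and support are placeholders, not the paper's settings):

    import numpy as np

    def discretize(samples, n_bins=100, support=(0.0, 1.0)):
        hist, _ = np.histogram(samples, bins=n_bins, range=support)
        return hist / hist.sum()

    def js_divergence(p, q, eps=1e-12):
        p, q = p + eps, q + eps
        p, q = p / p.sum(), q / q.sum()
        m = 0.5 * (p + q)
        kl = lambda a, b: np.sum(a * np.log(a / b))
        return 0.5 * kl(p, m) + 0.5 * kl(q, m)

    pred = discretize(np.random.beta(2, 5, size=10000))
    target = discretize(np.random.beta(2, 4, size=10000))
    loss = js_divergence(pred, target)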
iclr_2018_S1vuO-bCW
Published as a conference paper at ICLR 2018 LEAVE NO TRACE: LEARNING TO RESET FOR SAFE AND AUTONOMOUS REINFORCEMENT LEARNING Deep reinforcement learning algorithms can learn complex behavioral skills, but real-world application of these methods requires a large amount of experience to be collected by the agent. In practical settings, such as robotics, this involves repeatedly attempting a task, resetting the environment between each attempt. However, not all tasks are easily or automatically reversible. In practice, this learning process requires extensive human intervention. In this work, we propose an autonomous method for safe and efficient reinforcement learning that simultaneously learns a forward and reset policy, with the reset policy resetting the environment for a subsequent attempt. By learning a value function for the reset policy, we can automatically determine when the forward policy is about to enter a non-reversible state, providing for uncertainty-aware safety aborts. Our experiments illustrate that proper use of the reset policy can greatly reduce the number of manual resets required to learn a task, can reduce the number of unsafe actions that lead to non-reversible states, and can automatically induce a curriculum.
This paper proposes the idea of having an agent learn a policy that resets the agent's state to one of the states drawn from the distribution of starting states. The agent learns such a policy while also learning how to solve the actual task. This approach generates more autonomous agents that require fewer human interventions in the learning process. This is a very elegant and general idea, where the value function learned in the reset task also encodes some measure of safety in the environment. All that being said, I gave this paper a score of 6 because two aspects that seem fundamental to me are not clear in the paper. If clarified, I'd happily increase my score.
1) *Defining state visitation/equality in the function approximation setting:* The main idea behind the proposed algorithm is to ensure that "when the reset policy is executed from any state, the distribution over final states matches the initial state distribution p_0". This is formally described, for example, in line 13 of Algorithm 1. The authors "define a set of safe states S_{reset} \subseteq S, and say that we are in an irreversible state if the set of states visited by the reset policy over the past N episodes is disjoint from S_{reset}." However, it is not clear to me how one can uniquely identify a state in the function approximation case. Obviously, it is straightforward to apply such a definition in the tabular case, where counting state visitation is easy. However, how do we count state visitation in continuous domains? Did the authors manually define the range of each joint/torque/angle that characterizes the start state? In a control task from pixels, for example, would the exact configuration of pixels seen at the beginning be the start state? Defining state visitation in the function approximation setting is not trivial, and it seems to me the authors just glossed over it, despite it being essential to the work.
2) *Experimental design for Figure 5*: This setup is not clear to me at all, and in fact my first reaction is to say it is wrong. An episodic task is generally defined as: the agent starts in a state drawn from the distribution of starting states and, at the moment it reaches the goal state, the task is reset and the agent starts again. That doesn't seem to be what the authors did; is that right? The sentence "our method learns to solve this task by automatically resetting the environment after each episode, so the forward policy can practice catching the ball when initialized below the cup" is confusing. When is the task reset under the "status quo" approach? Also, let's say an agent takes 50 time steps to reach the goal and then it decides to do a soft-reset. Are the time steps it is spending on its soft-reset being taken into account when generating the reported results?
Some other minor points are:
- The authors should standardize their use of citations in the paper. Sometimes there are way too many parentheses in a reference. For example: "manual resets are necessary when the robot or environment breaks (e.g. Gandhi et al. (2017))", or "Our methods can also be used directly with any other Q-learning methods ((Watkins & Dayan, 1992; Mnih et al., 2013; Gu et al., 2017; Amos et al., 2016; Metz et al., 2017))".
- There is a whole line of work in safe RL that is not acknowledged in the related work section. Representative papers are:
[1] Philip S. Thomas, Georgios Theocharous, Mohammad Ghavamzadeh: High-Confidence Off-Policy Evaluation. AAAI 2015: 3000-3006
[2] Philip S. Thomas, Georgios Theocharous, Mohammad Ghavamzadeh: High Confidence Policy Improvement. ICML 2015: 2380-2388
- In the Preliminaries section the next state is said to be drawn from s_{t+1} ~ P(s'| s, a). However, this hides the fact that the next state depends on the environment dynamics and on the policy being followed. I think it would be clearer if written: s_{t+1} ~ P(s'| s, \pi(a|s)).
- It seems to me that, in Algorithm 1, the name 'Act' is misleading. Shouldn't it be 'ChooseAction' or 'EpsilonGreedy'? If I understand correctly, the function 'Act' just returns the action to be executed, while the function 'Step' is the one that actually executes the action.
- It is absolutely essential to depict the confidence intervals in the plots in Figure 3. Ideally we should have confidence intervals in all the plots in the paper.
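To make question 1 more concrete, a minimal sketch of the uncertainty-aware abort rule as I read it (my own illustration; the reset Q-ensemble, the threshold, and how "safe states" are identified are exactly the details the questions above ask about):

    def should_abort(state, action, reset_q_ensemble, q_min):
        """Abort the forward episode if even the most pessimistic member of the
        reset-policy Q-ensemble thinks the state-action pair is hard to reset from."""
        values = [q(state, action) for q in reset_q_ensemble]
        return min(values) < q_min   # pessimistic aggregation = uncertainty-aware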
iclr_2018_SkhQHMW0W
Published as a conference paper at ICLR 2018 DEEP GRADIENT COMPRESSION: REDUCING THE COMMUNICATION BANDWIDTH FOR DISTRIBUTED TRAINING Large-scale distributed training requires significant communication bandwidth for gradient exchange that limits the scalability of multi-node training, and requires expensive high-bandwidth network infrastructure. The situation gets even worse with distributed training on mobile devices (federated learning), which suffers from higher latency, lower throughput, and intermittent poor connections. In this paper, we find 99.9% of the gradient exchange in distributed SGD is redundant, and propose Deep Gradient Compression (DGC) to greatly reduce the communication bandwidth. To preserve accuracy during this compression, DGC employs four methods: momentum correction, local gradient clipping, momentum factor masking, and warm-up training. We have applied Deep Gradient Compression to image classification, speech recognition, and language modeling with multiple datasets including Cifar10, ImageNet, Penn Treebank, and Librispeech Corpus. In these scenarios, Deep Gradient Compression achieves a gradient compression ratio from 270× to 600× without losing accuracy, cutting the gradient size of ResNet-50 from 97MB to 0.35MB, and for DeepSpeech from 488MB to 0.74MB. Deep gradient compression enables large-scale distributed training on inexpensive commodity 1Gbps Ethernet and facilitates distributed training on mobile.
I think this is a good work that I am sure will have some influence in the near future. I think it should be accepted, and my comments are mostly suggestions for improvement or requests for additional information that would be interesting to have. Generally, my feeling is that this work is a little bit too dense, and I would like to encourage the authors in this case to make use of the non-strict ICLR page limit, or to move some details to an appendix and focus on more thorough explanations. With increased clarity, I think my rating (7) would be higher.
Several figures and tables are never referenced in the text, making it a little harder to properly follow the text. Pointing to them from appropriate places would improve clarity, I think.
Algorithm 1, line 14: You never seem to explain what sparse(G) is.
Sec 3.1: What is it exactly that gets communicated? How do you later calculate the Compression Ratio? This should surely be explained somewhere.
Sec 3.2: you mention 1% loss of accuracy. A pointer here would be good; at that point it is not clear whether it refers to your work later or to another paper. The efficient momentum correction is great!
As I was reading the paper, I got to the experiments and realized I still don't understand what it is that you refer to as "deep gradient compression". A pointer to Table 1 at the end of Sec 3 would probably be ideal, along with some summary comments.
I feel the presentation of experimental results is somewhat disorganized. It is not immediately clear what the baseline is; that should be stressed somewhere. I find it really confusing why you sometimes compare against Gradient Dropping, sometimes against TernGrad, sometimes against neither, and sometimes include Gradient Sparsification with momentum correction (again, it is not clear what the difference from DGC is). I recommend reorganizing this and making it more consistent for the sake of clarity. Perhaps show here only some highlights, and point to more in the Appendix.
Sec 5: Here I feel it would be good to comment on several other things not mentioned earlier. Why do you only work with 99.9% sparsity? Does 99% with 64 training nodes lead to almost dense total updates, making it inefficient in your communication model? If yes, does that suggest a scaling limit in terms of number of training nodes? If not, how important is the 99.9% sparsity if you care about communication cost dominating the total runtime? I would really like to better understand how this changes and what the point is beyond which more sparsity is not practically useful. Put differently, is DGC with 600x size reduction any better in total runtime than DGC with 60x reduction?
Finally, a side remark: Under eq. (2) you point to something that I think could be discussed more. When you say what you do has the effect of increasing the stepsize, why don't you just increase the stepsize? There has recently been work on training ImageNet in 1 hour, then in 24 minutes, and most recently in 15 minutes... You cite the former, but highlight a different part of their work. The broader point is that this is a trend that potentially makes this kind of work less relevant. While I don't think that makes your work bad or misplaced, I think mentioning this would be useful as an alternative approach to the problems you mention in the introduction and use to motivate your contribution. ...what would be your reason for using DGC as opposed to just increasing the batch size?
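Regarding the question of what sparse(G) is: my reading is top-k (by magnitude) selection with local accumulation of the unsent residual, which is also where momentum correction enters. A minimal numpy sketch of that reading (an assumption on my part, not the authors' exact algorithm):

    import numpy as np

    def sparsify_gradient(grad, residual, sparsity=0.999):
        acc = residual + grad                               # include what was withheld earlier
        k = max(1, int(round(acc.size * (1 - sparsity))))
        threshold = np.partition(np.abs(acc).ravel(), -k)[-k]
        mask = np.abs(acc) >= threshold
        sent = np.where(mask, acc, 0.0)                     # the values actually exchanged
        residual = np.where(mask, 0.0, acc)                 # kept locally for later iterations
        return sent, residual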
iclr_2018_ryQu7f-RZ
ON THE CONVERGENCE OF ADAM AND BEYOND Several recently proposed stochastic optimization methods that have been successfully used in training deep networks such as RMSPROP, ADAM, ADADELTA, NADAM are based on using gradient updates scaled by square roots of exponential moving averages of squared past gradients. In many applications, e.g. learning with large output spaces, it has been empirically observed that these algorithms fail to converge to an optimal solution (or a critical point in nonconvex settings). We show that one cause for such failures is the exponential moving average used in the algorithms. We provide an explicit example of a simple convex optimization setting where ADAM does not converge to the optimal solution, and describe the precise problems with the previous analysis of ADAM algorithm. Our analysis suggests that the convergence issues can be fixed by endowing such algorithms with "long-term memory" of past gradients, and propose new variants of the ADAM algorithm which not only fix the convergence issues but often also lead to improved empirical performance.
This work identifies a mistake in the existing proof of convergence of Adam, which is among the most popular optimization methods in deep learning. Moreover, it gives a simple 1-dimensional counterexample with linear losses on which Adam does not converge. The same issue also affects RMSprop, which may be viewed as a special case of Adam without momentum. The problem with Adam is that the "learning rate" matrices alpha_t V_t^{-1/2} are not monotonically decreasing. A new method, called AMSGrad, is therefore proposed, which modifies Adam by forcing these matrices to be decreasing. It is then shown that AMSGrad does satisfy essentially the same convergence bound as the one previously claimed for Adam. Experiments and simulations are provided that support the theoretical analysis.
Apart from some issues with the technical presentation (see below), the paper is well-written. Given the popularity of Adam, I consider this paper to make a very interesting observation. I further believe all issues with the technical presentation can be readily addressed.
Issues with Technical Presentation:
- All theorems should explicitly state the conditions they require instead of referring to "all the conditions in (Kingma & Ba, 2015)".
- Theorem 2 is a repetition of Theorem 1 (except for additional conditions).
- The proof of Theorem 3 assumes there are no projections, so this should be stated as part of its conditions. (The claim in footnote 2 that they can be handled seems highly plausible, but you should be up front about the limitations of your results.)
- The regret bound in Theorem 4 establishes convergence of the optimization method, so it plays the role of a sanity check. However, it is strictly worse than the regret bound O(sqrt{T}) for online gradient descent [Zinkevich, 2003], so it cannot explain why the proposed AMSGrad method might be adaptive. (The method may indeed be adaptive in some sense; I am just saying the *bound* does not express that. This is also not a criticism of the current paper; the same remark also applies to the previously claimed regret bound for Adam.)
- The discussion following Corollary 1 suggests that sum_i hat{v}_{T,i}^{1/2} might be much smaller than d G_infty. This is true, but we should always expect it to be at least a constant, because hat{v}_{t,i} is monotonically increasing by definition of the algorithm, so the bound does not get better than O(sqrt(T)). It is also suggested that sum_i ||g_{1:T,i}|| = sum_i sqrt{sum_{t=1}^T g_{t,i}^2} might be much smaller than d G_infty, but this is very unlikely, because this term will typically grow like O(sqrt{T}) unless the data are extremely sparse, so we should at least expect some dependence on T.
- In the proof of Theorem 1, the initial point is taken to be x_1 = 1, which is perfectly fine, but it is not "without loss of generality", as claimed. This should be stated in the statement of the Theorem.
- The proof of Theorem 6 in appendix B only covers epsilon=1. If it is "easy to show" that the same construction also works for other epsilon, as claimed, then please provide the proof for general epsilon.
Other remarks:
- Theoretically, nonconvergence of Adam seems to be a severe problem. Can you speculate on why this issue has not prevented its widespread adoption? Which factors might mitigate the issue in practice?
- Please define g_t \circ g_t and g_{1:T,i}.
- I would recommend sticking with standard linear algebra notation for the sqrt and the inverse of a matrix and simply using A^{-1} and A^{1/2} instead of 1/A and sqrt{A}.
- In Theorems 1, 2, 3, I would recommend stating the dimension (d=1) of your counterexamples, which makes them very nice!
Minor issues:
- Check the accent on Nicol\`o Cesa-Bianchi in the bibliography.
- Near the end of the proof of Theorem 6: I believe you mean Adam suffers a "regret" instead of a "loss" of at least 2C-4. Also 2C-4=2C-4 is trivial in the second-to-last display.
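For readers less familiar with the fix being analyzed, a minimal sketch of the modified update (my own illustration; bias correction and step-size schedules are omitted here):

    import numpy as np

    def amsgrad_step(w, g, m, v, v_hat, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
        m = b1 * m + (1 - b1) * g
        v = b2 * v + (1 - b2) * g * g
        v_hat = np.maximum(v_hat, v)               # "long-term memory": v_hat never decreases,
        w = w - lr * m / (np.sqrt(v_hat) + eps)    # so effective step sizes never increase
        return w, m, v, v_hat

The single np.maximum line is exactly what restores the monotonicity property the original Adam proof implicitly relied on.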
iclr_2018_rJIN_4lA-
Social dilemmas are situations where individuals face a temptation to increase their payoffs at a cost to total welfare. Building artificially intelligent agents that achieve good outcomes in these situations is important because many real-world interactions include a tension between selfish interests and the welfare of others. We show how to modify modern reinforcement learning methods to construct agents that act in ways that are simple to understand, nice (begin by cooperating), provokable (try to avoid being exploited), and forgiving (try to return to mutual cooperation). We show both theoretically and experimentally that such agents can maintain cooperation in Markov social dilemmas. Our construction does not require training methods beyond a modification of self-play, thus if an environment is such that good strategies can be constructed in the zero-sum case (e.g., Atari), then we can construct agents that solve social dilemmas in this environment.
This paper addresses multiagent learning problems in which there is a social dilemma: settings where there are no 'cooperative policies' that form an equilibrium. The paper proposes a way of dealing with these problems via amTFT, a variation of the well-known tit-for-tat strategy, and presents some empirical results. My main problem with this paper is clarity, and I am afraid that not everything might be technically correct. Let me just list my main concerns below.
The definition of social dilemma is unclear: "A social dilemma is a game where there are no cooperative policies which form equilibria. In other words, if one player commits to play a cooperative policy at every state, there is a way for their partner to exploit them and earn higher rewards at their expense." Does this mean to say "there are no cooperative *Markov* policies"? It seems to me that the paper precisely intends to show that by resorting to history-dependent policies (such as both using amTFT), there is a cooperative equilibrium.
I don't understand: "Note that in a social dilemma there may be policies which achieve the payoffs of cooperative policies because they cooperate on the trajectory of play and prevent exploitation by threatening non-cooperation on states which are never reached by the trajectory. If such policies exist, we call the social dilemma solvable." Is this now talking about non-Markov policies? If not, there seems to be a contradiction?
The work focuses on TFT-like policies, motivated by "if one can commit to them, create incentives for a partner to behave cooperatively"; however, it seems that, as made clear below definition 4, we can only create such incentives for sufficiently powerful agents that remember and learn from their failures to cooperate in the past?
Why is the method called "approximate Markov"? As soon as one introduces history dependence, the Markov property ceases to hold?
On page 4, I have problems following the text due to inconsistent use of notation: subscripts and superscripts seem random, it is not clear which symbols denote strategy profiles (rather than individual strategies), there seem to be mix-ups between 'i' and '1' / '2', there is sudden use of \hat{}, and other undefined symbols (Q_CC?).
For all practical purposes, it seems that the assumptions made imply uniqueness of the cooperative joint strategy. I fully appreciate that the coordination question is difficult and important, so if the proposed method is not compatible with dealing with that important question, that strikes me as a large drawback.
I have problems understanding how it is possible to guarantee "If they start in a D phase, they eventually return to a C phase." without making more assumptions on the domain. The clear example is the typical 'heaven or hell' type of problem: what if, after one defection, we are trapped in the 'hell' state where no cooperation is even possible?
"If policies converge with this training then π̂ is a Markov equilibrium (up to function approximation)." There are two problems here: 1) A problem is that very typically things will not converge... E.g., Wunder, Michael, Michael L. Littman, and Monica Babes. "Classes of multiagent q-learning dynamics with epsilon-greedy exploration." Proceedings of the 27th International Conference on Machine Learning (ICML-10). 2010. 2) "Up to function approximation" could be arbitrarily large?
Another significant problem seems to be with this statement: "while in the cooperative reward schedule the standard RL convergence guarantees apply. The latter is because cooperative training is equivalent to one super-agent controlling both players and trying to optimize for a single scalar reward." The training of individual learners is quite different from "joint action learners" [Claus & Boutilier 98], and this in turn is different from a 'super-agent' which would also control the exploration. In the absence of the super-agent, I believe that the only guarantee is that one will, in the limit, converge to a Nash equilibrium, which might be arbitrarily far from the optimal joint policy. And this only holds for the tabular case. See the discussion in A concise introduction to multiagent systems and distributed artificial intelligence. N Vlassis. Synthesis Lectures on Artificial Intelligence and Machine Learning 1 (1), 1-71
Also, the approach used in the experiments, "Cooperative (self play with both agents receiving sum of rewards) training for both games", would be insufficient for many settings where a cooperative joint policy would be asymmetric.
The entire approach hinges on using rollouts (the commented lines in Algo. 1). However, it is not at all clear to me how this works. The one paragraph is insufficient to get across these crucial parts of the proposed approach.
It is not clear why the tables in Figure 1 are not symmetric; this strikes me as extremely problematic. It is not clear what the colors encode either. It also seems that "grim" is better against all, except against amTFT; why should we not use that?
In general, the explanation of this closely related paper by De Cote & Littman (which was published at UAI'08) is insufficient. It is not quite clear to me what the proposed approach offers over the previous method.
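For what it is worth, my reading of the intended mechanism is a debit-and-punish scheme of roughly the following shape (a rough sketch with invented names; as noted above, the rollout-based details in Algorithm 1 are exactly what the paper leaves unclear):

    def amtft_action(state, debit, threshold, punish_steps_left, punish_length,
                     cooperative_policy, defect_policy, estimate_partner_gain):
        """Cooperate until the partner's estimated gain from deviating (accumulated in
        `debit`) exceeds a threshold, then defect for `punish_length` steps."""
        if punish_steps_left > 0:
            return defect_policy(state), debit, punish_steps_left - 1
        debit += estimate_partner_gain(state)      # e.g., from Q-values or rollouts
        if debit > threshold:
            return defect_policy(state), 0.0, punish_length
        return cooperative_policy(state), debit, 0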
iclr_2018_ByquB-WC-
To solve text-based question answering tasks that require relational reasoning, it is necessary to memorize a large amount of information and retrieve the question-relevant information from memory. Most approaches have been based on external memory and the four components proposed in Memory Networks. The distinctive component among them is the way of finding the necessary information, and it contributes to the performance. Recently, a simple but powerful neural network module for reasoning called the Relation Network (RN) has been introduced. We analyzed RN from the viewpoint of Memory Networks, and realized that its MLP component is able to reveal the complicated relation between a question and an object pair. Motivated by this, we introduce the Relation Memory Network (RMN), which uses an MLP to find the relevant information within a Memory Network architecture. It shows new state-of-the-art results on jointly trained bAbI-10k story-based question answering tasks and bAbI dialog-based question answering tasks.
This paper introduces the Relation Memory Network (RMN), an improvement over Relation Networks (RN). RMN avoids the quadratic growth in the time complexity of relation computation suffered by RN (Santoro et al., 2017). RMN reduces the complexity to linear time for the bAbI dataset. RN constructs pair-wise interactions between objects to solve complex tasks such as transitive reasoning. RMN instead uses a multi-hop attention over objects followed by an MLP to learn relationships in linear time.
Comments for the author:
The paper addresses an important problem, since understanding object interactions is crucial for reasoning. However, how widespread is this problem across other models, or are you simply addressing a point problem for RN? For example, EntNet is able to reason as the input is fed in, and the decoding costs are low. Likewise, other graph-based networks (although they may require strong supervision) are able to decode quite cheaply.
The Relation Network considers all pair-wise interactions, which are here replaced by a two-hop attention mechanism (and an MLP). It would not be fair to claim superiority over RN since you only evaluate on bAbI while RN also demonstrated results on other tasks. For more complex tasks (even over just text), it is necessary to show that you outperform RN w/o considering all objects in a pairwise fashion. More specifically, RN uses an MLP over pair-wise interactions; does that allow it to model more complex interactions than just selecting two hops to generate attention weights? Showing results with multiple hops (1, 2, ...) would be useful here.
More details are needed about Figure 3. Is this on bAbI as well? How did you generate these stories with so many sentences? Another point needing clarification is the bAbI performance relative to EntNet, which claims to solve all tasks. Your results show 4 failed tasks; is this your reproduction of EntNet?
Finally, what are the savings from reducing this time complexity? Some wall-clock time results or FLOPs for train/test time should be provided since you use multiple hops.
Overall, this paper feels like a small improvement over RN. Without experiments on other datasets and wall-clock time results, it is hard to appreciate the significance of this improvement. One direction to strengthen this paper is to examine if RMN can do better than pair-wise interactions (and other baselines) for more complex reasoning tasks.
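To make the complexity point concrete, a schematic sketch (mine, not the paper's architecture) contrasting the O(n^2) pairwise aggregation of RN with a linear-time attention hop:

    import numpy as np

    def rn_aggregate(objects, question, g):
        # Relation Network style: g is applied to every ordered pair -> O(n^2) calls.
        return sum(g(o_i, o_j, question) for o_i in objects for o_j in objects)

    def attention_hop(objects, question, score):
        # One attention hop: a scorer is applied per object -> O(n) calls.
        w = np.array([score(o, question) for o in objects])
        w = np.exp(w - w.max()); w /= w.sum()
        return sum(w_i * o for w_i, o in zip(w, objects))

    objs = [np.ones(4), np.zeros(4), np.full(4, 2.0)]
    q = np.ones(4)
    pair_sum = rn_aggregate(objs, q, lambda a, b, q: a + b + q)
    hop_out = attention_hop(objs, q, lambda o, q: float(o @ q))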
iclr_2018_rye7IMbAZ
In inductive transfer learning, fine-tuning pre-trained convolutional networks substantially outperforms training from scratch. When using fine-tuning, the underlying assumption is that the pre-trained model extracts generic features, which are at least partially relevant for solving the target task, but would be difficult to extract from the limited amount of data available on the target task. However, besides the initialization with the pre-trained model and the early stopping, there is no mechanism in fine-tuning for retaining the features learned on the source task. In this paper, we investigate several regularization schemes that explicitly promote the similarity of the final solution with the initial model. We eventually recommend a simple L2 penalty using the pre-trained model as a reference, and we show that this approach behaves much better than the standard scheme using weight decay on a partially frozen network.
This work addresses the scenario of fine-tuning a pre-trained network for new data/tasks and empirically studies various regularization techniques. Overall, the evaluation concludes by recommending that all layers of a network whose weights are directly transferred during fine-tuning should be regularized against the initial net with an L2 penalty during further training.
Relationship to prior work: Regularizing a target model against a source model is not a new idea. The authors miss key connections to A-SVM [1] and PMT-SVM [2] -- two proposed transfer learning models applied to SVM weights, but otherwise very much the same as the proposed solution in this paper. Though the study here may offer new insights for deep nets, it is critical to mention prior work which also does analysis of these regularization techniques.
Significance: As the majority of visual recognition problems are currently solved using variants of fine-tuning, if the findings reported in this paper generalize, then it could present a simple new regularization which improves the training of new models. The change is both conceptually simple and easy to implement, so it could be quickly integrated by many people.
Clarity and Questions: The purpose of the paper is clear; however, some questions remain unanswered.
1) How is the regularization weight of 0.01 chosen? This is likely a critical parameter. In an experimental paper, I would expect to see a plot of performance for at least one experiment as this regularization weighting parameter is varied.
2) How does the use of L2 regularization on the last layer affect the regularization choice for other layers? What happens if you use no regularization on the last layer? L1 regularization?
3) Figure 1 is difficult to read. Please at least label the test sets on each sub-graph.
4) There seems to be some issue with the freezing experiment in Figure 2. Why does the performance of L2 regularization improve as you freeze more and more layers, yet it is outperformed by un-freezing all layers?
5) Figure 3 and the discussion of linear dependence with the original model in general do not seem to add much to the paper. I would rather the authors use this space to provide a deeper analysis of why this property should help performance.
6) Initializing with a source model offers a strong starting point, so full from-scratch learning isn't necessary -- meaning fewer examples are needed for the continued learning (fine-tuning) phase. In a similar line of reasoning, does regularizing against the source further reduce the number of labeled points needed for fine-tuning? Can you recover L2 fine-tuning performance with fewer examples when you use L2-SP?
[1] J. Yang, R. Yan, and A. Hauptmann. Adapting SVM classifiers to data with shifted distributions. In ICDM Workshops, 2007.
[2] Y. Aytar and A. Zisserman. Tabula rasa: Model transfer for object category detection. In Proc. ICCV, 2011.
------------------ Post rebuttal ------------------
The changes made to the paper draft as well as the answers to the questions posed above have convinced me to upgrade my recommendation to a weak accept. The experiments are now clear and thorough enough to provide a convincing argument for using this regularization in deep nets. Since it is simple and well validated, it should be easily adopted.
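For concreteness, a minimal sketch of the recommended regularizer (my own illustration; the split between transferred and newly added layers and the coefficient names are assumptions, not the paper's exact formulation):

    import numpy as np

    def l2_sp_penalty(weights, source_weights, new_layer_names, alpha=0.01, beta=0.01):
        """Penalize transferred layers toward the pre-trained weights (L2-SP) and
        newly added layers toward zero (ordinary weight decay)."""
        penalty = 0.0
        for name, w in weights.items():
            if name in new_layer_names:
                penalty += 0.5 * beta * np.sum(w ** 2)
            else:
                penalty += 0.5 * alpha * np.sum((w - source_weights[name]) ** 2)
        return penalty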
iclr_2018_rkrC3GbRW
LEARNING A GENERATIVE MODEL FOR VALIDITY IN COMPLEX DISCRETE STRUCTURES Deep generative models have been successfully used to learn representations for high-dimensional discrete spaces by representing discrete objects as sequences and employing powerful sequence-based deep models. Unfortunately, these sequence-based models often produce invalid sequences: sequences which do not represent any underlying discrete structure; invalid sequences hinder the utility of such models. As a step towards solving this problem, we propose to learn a deep recurrent validator model, which can estimate whether a partial sequence can function as the beginning of a full, valid sequence. This validator provides insight as to how individual sequence elements influence the validity of the overall sequence, and can be used to constrain sequence-based models to generate valid sequences -- and thus faithfully model discrete objects. Our approach is inspired by reinforcement learning, where an oracle which can evaluate validity of complete sequences provides a sparse reward signal. We demonstrate its effectiveness as a generative model of Python 3 source code for mathematical expressions, and in improving the ability of a variational autoencoder trained on SMILES strings to decode valid molecular structures.
The authors use a recurrent neural network to build generative models of sequences in domains where the vast majority of sequences is invalid. The basic idea, outlined in Eq. 2, is moderately straightforward: at each step, use an approximation of the Q function for subsequences of the appropriate length to pick a valid extension. There are numerous details to get right. The writing is mostly clear, and the examples are moderately convincing. I wish the paper had more detailed arguments and discussions.
I question the appropriateness of Eq. 2 as a target. A correctly learned model will put positive weight on valid sequences, but it may be an arbitrarily slow way to generate diverse sequences, depending on the domain. For instance, imagine a domain of binary strings where the valid sequences are the all-ones sequence, or any sequence beginning with a 0. Half the generated sequences would be all 1's in this situation, right? And it's easy to construct further examples that are much worse than this?
The use of Bayesian active learning to generate the training set feels like an elegant idea. However, I wish there were more clarity about what was ad hoc and what wasn't. For instance, I think the use of dropout to get q is suspect (see for instance https://arxiv.org/abs/1711.02989), and I'd prefer a little more detail on statements like "The nonlinearity of g(·) means that our Monte Carlo approximation is biased, but still consistent." Do we have any way of quantifying the bias? Is the statement about K=16 being reasonable a statement about bias, variance, or both?
For Python strings:
- Should we view the fact that high values of tau give a validity of 1.0 as indicative that the domain's constraints are fairly easy to learn?
- "The use of a Boltzmann policy allows us to tune the temperature parameter to identify policies which hit high levels of accuracy for any learned Q-function approximation." This is only true to the extent the domain is sufficiently "easy", right? Is the argument that even in very hard domains, you might get this by just having an RNN which memorized a single valid sequence (assuming at least one could be found)?
- What's the best explanation for *why* the active model has much higher diversity? I understand that the active model is picking examples that tell us more about the uncertainty in w, but it's not obvious to me that means higher diversity. Do we think this is a universal property of domains?
- The highest temperature active model is exploring about half of valid sequences (modulo the non-tightness of the bound)? Have you tried gaining some insight by generating thousands of valid sequences manually and seeing which ones the model is rejecting?
- The coverage bound is used only for Python expressions, right? Why not just randomly sample a few thousand positives and use that to get a better estimate of coverage? Since you can sample from the true positive set, it seems that your argument from the appendix about the validation set being "too similar to the training set" doesn't apply?
- It would be better to see a comparison to a strong non-NN baseline. For instance, I could easily make a PCFG over Python math expressions, and use rejection sampling to get rid of those that aren't exactly length 25, etc.?
I question how easy the Python strings example is. In particular, it might be that it's quite an easy example (compared to the SMILES example). For SMILES, it seems like the Bayesian active learning technique is not by itself sufficient to create a good model?
It is interesting that in the solubility domain the active model outperforms, but it would be nice to see more discussion / explanation. Minor note: The incidence of valid strings in the Python expressions domain is (I believe) > 1/5000, although I guess 1 in 10,000 is still the right order of magnitude. If I could score between "marginal accept" and "accept" I would.
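For reference, a minimal sketch of the kind of temperature-controlled Boltzmann sampling step discussed above (my own illustration, not the authors' code; the validator's per-symbol Q-values are simply treated as logits):

    import numpy as np

    def sample_next_symbol(q_values, temperature=1.0, rng=np.random):
        """q_values: estimated validity score for each candidate next symbol.
        Lower temperatures concentrate mass on symbols the validator trusts most."""
        z = np.asarray(q_values) / max(temperature, 1e-8)
        z = z - z.max()
        p = np.exp(z)
        p = p / p.sum()
        return rng.choice(len(p), p=p)

    sample_next_symbol([0.9, 0.2, 0.05], temperature=0.5)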
iclr_2018_SJLy_SxC-
Skip connections are increasingly utilized by deep neural networks to improve accuracy and cost-efficiency. In particular, the recent DenseNet is efficient in computation and parameters, and achieves state-of-the-art predictions by directly connecting each feature layer to all previous ones. However, DenseNet's extreme connectivity pattern may hinder its scalability to high depths, and in applications like fully convolutional networks, full DenseNet connections are prohibitively expensive. This work first experimentally shows that one key advantage of skip connections is to have short distances among feature layers during backpropagation. Specifically, using a fixed number of skip connections, the connection patterns with shorter backpropagation distance among layers have more accurate predictions. Following this insight, we propose a connection template, Log-DenseNet, which, in comparison to DenseNet, only slightly increases the backpropagation distances among layers from 1 to (1 + log_2 L), but uses only L log_2 L total connections instead of O(L^2). Hence, Log-DenseNets are easier to scale than DenseNets, and no longer require careful GPU memory management. We demonstrate the effectiveness of our design principle through ablation studies and by showing better performance than DenseNets on tabula rasa semantic segmentation, and competitive results on visual recognition.
This paper investigates how to impose layer-wise connections in DenseNets most efficiently. The authors propose a connection pattern which connects layer i to layer i-2^k, k=0,1,2... The authors also propose the maximum backpropagation distance (MBD) for measuring the fluency of gradient flow in the network, and justify Log-DenseNet's advantage in this framework. Empirically, the authors demonstrate the effectiveness of Log-DenseNet by comparing it with two other intuitive connection patterns on CIFAR datasets. Log-DenseNet also improves on FC-DenseNet, where the connection budget is the bottleneck because the feature maps are of high resolutions.
Strengths:
1. Generally, DenseNet is memory-hungry if the connection is dense, and it is worth studying how to sparsify a DenseNet. By showing the improvements on FC-DenseNet, Log-DenseNet demonstrates good potential on tasks which require upsampling of feature maps.
2. The ablation experiments are well designed and the visualizations of the connectivity patterns are clear.
Weaknesses:
1. Adding a comparison between Log-DenseNet and vanilla DenseNet in the Table 2 experiment would make the paper stronger. Also, the NearestHalfAndLog pattern is not used in any later visual recognition experiments, so I think it's better to just compare Log-DenseNet with the two baselines instead. Although there are CIFAR experiments on Log-DenseNet in later sections, including results here would make the paper easier to follow.
2. I would like to see a comparison with DenseNet-BC, which uses 1x1 conv layers to reduce the number of channels, in the segmentation and CIFAR classification tasks. It would be interesting to study whether it is possible to further sparsify DenseNet-BC, as it has much higher efficiency.
3. The improvement of efficiency on classification tasks is not that significant.
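A small sketch of the connection rule under discussion, to make the O(L log L) count explicit (my own illustration; the indexing convention, with 0 standing for the stem/input, is an assumption):

    def log_dense_inputs(i):
        """Layer i receives input from layers i - 2^k, for k = 0, 1, 2, ..."""
        inputs, k = [], 0
        while i - 2 ** k >= 0:
            inputs.append(i - 2 ** k)
            k += 1
        return inputs

    L = 32
    total_connections = sum(len(log_dense_inputs(i)) for i in range(1, L + 1))
    # grows roughly like L * log2(L), versus L * (L + 1) / 2 for full DenseNet connectivity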
iclr_2018_rkZvSe-RZ
ENSEMBLE ADVERSARIAL TRAINING: ATTACKS AND DEFENSES Adversarial examples are perturbed inputs designed to fool machine learning models. Adversarial training injects such examples into training data to increase robustness. To scale this technique to large datasets, perturbations are crafted using fast single-step methods that maximize a linear approximation of the model's loss. We show that this form of adversarial training converges to a degenerate global minimum, wherein small curvature artifacts near the data points obfuscate a linear approximation of the loss. The model thus learns to generate weak perturbations, rather than defend against strong ones. As a result, we find that adversarial training remains vulnerable to black-box attacks, where we transfer perturbations computed on undefended models, as well as to a powerful novel single-step attack that escapes the non-smooth vicinity of the input data via a small random step. We further introduce Ensemble Adversarial Training, a technique that augments training data with perturbations transferred from other models. On ImageNet, Ensemble Adversarial Training yields models with strong robustness to black-box attacks. In particular, our most robust model won the first round of the NIPS 2017 competition on Defenses against Adversarial Attacks (Kurakin et al., 2017c).
This paper describes computationally efficient methods for training adversarially robust deep neural networks for image classification. (These methods may extend to other machine learning models and domains as well, but that's beyond the scope of this paper.) The former standard method for generating adversarial images quickly and using them in training was to do a single gradient step to increase the loss of the true label or decrease the loss of an alternate label. This paper shows that such training methods only lead to robustness against these "weak" adversarial examples, leaving the adversarially-trained models vulnerable to multi-step white-box attacks and black-box attacks (adversarial examples generated to attack alternate models).
There are two proposed solutions. The first is to generate additional adversarial examples from other models and use them in training. This seems to yield robustness against black-box attacks from held-out models as well. Of course, it requires that you have a somewhat diverse group of models to choose from. If that's the case, why not directly build an ensemble of all the models? An ensemble of neural networks can still be represented as a neural network, although a more computationally costly one. Thus, while this heuristic appears to be useful with current models against current attacks, I don't know how well it will hold up in the future.
The second solution is to add random noise before taking the gradient step. This yields more effective adversarial examples, both for attacking models and for training, because it relies less on the local gradient. This is another simple idea that appears to be effective. However, I would be interested to see a comparison to a 2-step gradient-based attack. R+Step-LL can be viewed as a 2-step attack: a random step followed by a gradient step. What if both steps were gradient steps instead? This interpolates between Step-LL and I-Step-LL, with an intermediate computational cost. It would be very interesting to know if R+Step-LL is more or less effective than 2+Step-LL, and how large the difference is.
I like that this paper demonstrates the weakness of previous methods, including extensive experiments and a very nice visualization of the loss landscape in two adversarial dimensions. The proposed heuristics seem effective in practice, but they're somewhat ad hoc and there is no analysis of how these heuristics might or might not be vulnerable to future attacks.
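To clarify the attack being compared, a minimal framework-agnostic sketch of the "random step then gradient step" idea (my own illustration of an untargeted variant; `grad_fn`, which returns the loss gradient at a point, is an assumed helper):

    import numpy as np

    def random_plus_single_step(x, grad_fn, eps, alpha):
        """One random sign step of size alpha, then one gradient-sign step using the
        remaining budget eps - alpha; total L-infinity perturbation is at most eps."""
        x_rand = x + alpha * np.sign(np.random.randn(*x.shape))
        return x_rand + (eps - alpha) * np.sign(grad_fn(x_rand))

The 2-step variant the review asks about would replace the random sign step with a second gradient-sign step at x, at roughly double the gradient cost.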
iclr_2018_Sk7cHb-C-
We propose an unsupervised method for building dynamic representations of sequential data, particularly of observed interactions. The method simultaneously acquires representations of input data and its dynamics. It is based on a hierarchical generative model composed of two levels. In the first level, a model learns representations to generate observed data. In the second level, representational states encode the dynamics of the lower one. The model is designed as a Bayesian network with switching variables represented in the higher level, and which generates transition models. The method actively explores the latent space guided by its knowledge and the uncertainty about it. That is achieved by updating the latent variables from prediction error signals backpropagated to the latent space. So, no encoder or inference models are used since the generators also serve as their inverse transformations. The method is evaluated in two scenarios, with static images and with videos. The results show that the adaptation over time leads to better performance than with similar architectures without temporal dependencies, e.g., variational autoencoders. With videos, it is shown that the system extracts the dynamics of the data in states that highly correlate with the ground truth of the actions observed.
The authors propose an architecture and generative model for static images and video sequences, with the purpose of generating an image that looks as similar as possible to the one that is supplied. This is useful for, for example, frame prediction in video and detection of changes in video as a consequence of changes in the dynamics of objects in the scene. The architecture minimizes the error between the generated image(s) and the supplied image(s) by refining the generated image over time when the same image is shown and by adapting when the image is changed. The model consists of three neural networks (F_Zµ, F_Zsigma, f_X|Z) and three multivariate Gaussian distributions P(S_t), P(Z_t) and P(X_t,Z_t) with diagonal covariances. The NNs do not change over time but they relate the three Gaussian distributions in different ways, and these distributions change over time in order to minimize the error of the generated image(s). The paper took a while to understand due to its structure and how it is written. A short overview of the different components of Figure 1, giving the general idea and explaining (i) which nodes are stochastic variables and which are NNs, and (ii) what is trained offline/online, would help. It would also help the structure if the links/arrows/nodes had numbers corresponding to the relevant equations defining the relations/computations. Some of these relations are defined with explicit equation numbers, others are baked into the text, which makes it difficult to jump around in the paper when reading it and trying to understand the architecture. There are also numerous language errors in the paper and many of them are grammatical. For example: Page 5, second paragraph: "osculations" -> "oscillations" Page 5, fourth paragraph: "Defining .. is defined as.." The results seem impressive and the problem under consideration is important and has several applications. There is however not much in terms of discussion nor analysis of the two experiments. I find the contribution fairly significant but I lack some clarity in the presentation as well as in the experiments section. I do not find the paper clearly written. The presentation can be improved in several chapters, such as the introduction and the method section. The paper seems to be technically correct. I did not spot any errors. General comments: - Why is 2 a suitable size of S? - Why use two by two encoding instead of stride for the VAE baseline? How does this affect the experiments? - How are S_0 and the Z_0 prior set (initialized) in the experiments? - It would improve the readability of the paper, not least for a broader audience, if more details were added on how the VAE baseline architecture differs from the proposed architecture. - In the first experiment: -- How many iterations does it take for your method to beat VAE? -- What is the difference between the VAE baseline and your approach that makes VAE perform better than your approach initially (or even after a few iterations)? -- What effect does the momentum formulation have on the convergence rate (number of iterations necessary to reach VAE's and your method's result at t=10)? - In the second experiment, and Figure 5 in particular, it was observed that some actions are clearly detected while others are not. It is mentioned that those that are not detected by the approach are more similar.
In what sense are the actions more similar, which such actions are the most prominent (from a human's perspective), what is making the model not detect them, and what can be done (within your approach) to improve or adjust the detection fidelity? Please add more time labels under the time axis in Figure 4 and Figure 5. Also, please annotate the figures at the time points where the action transitions occur according to the ground truth.
iclr_2018_r1drp-WCZ
Long Short-Term Memory (LSTM) is one of the most powerful sequence models. Despite the strong performance, however, it lacks the nice interpretability of state space models. In this paper, we present a way to combine the best of both worlds by introducing State Space LSTM (SSL) models that generalize the earlier work of Zaheer et al. (2017) on combining topic models with LSTMs. However, unlike Zaheer et al. (2017), we do not make any factorization assumptions in our inference algorithm. We present an efficient sampler based on the sequential Monte Carlo (SMC) method that draws from the joint posterior directly. Experimental results confirm the superiority and stability of this SMC inference algorithm on a variety of domains.
This paper introduces a novel extension of the LSTM which incorporates stochastic inputs at each timestep. These stochastic inputs are themselves dependent on the LSTM state at the previous timestep. Considering the stochastic dependencies, this then yields a highly flexible non-Markov state space model, where the latent variable transitions are partially parameterized by an LSTM update. Naturally, the challenges are then efficiently estimating parameters and performing inference over the latent states. Here, SMC (and conditional SMC / particle Gibbs) are used for inference over the latent states z. A particularly nice touch is that even when the LSTM model is used for the transitions in the latent space, so long as the conditional distributions p(z_t | z_{1:t-1}) are conjugate with the emission distribution then it is possible to compute the optimal forward filtering proposal distribution in closed form, as done for the conditionally Gaussian (with affine Gaussian observations) and conditionally multinomial models considered here. Note that this really is a special feature of the models under consideration, though: for example, if the emission distribution p(x_t | z_t) is instead a *nonlinear* Gaussian, then one would have to fall back to bootstrap proposals. This probably deserves some mention: equations (13) are not, generally, tractable to integrate or normalize. I think this paper is missing a few necessary details on how the overall optimization algorithm proceeds, which I would like to see in an update. I understand that particle Gibbs updates (or SMC) are used to approximate the posterior distribution in a Monte Carlo EM algorithm. However, this does leave some questions: 1. For the M step, how are the \omega parameters (of the LSTM) handled in equation (8)? I understand that due to the particular models considered, maximum likelihood estimates of \phi can be found in closed form. However, that’s not the case for \omega. Is a gradient descent algorithm run to convergence? Or is a single gradient step taken, interleaved with a single PG update? Or something else? 2. How reliably does the algorithm as a whole converge? Monte Carlo EM does not in general have convergence guarantees of “standard” EM (i.e. each step is not guaranteed to monotonically improve the lower bound). This might be fine! But, I think requires a bit of discussion. 3. Is it necessary to include a replenishing operation (or independent MCMC steps) in the particle Gibbs algorithm? A known issue when running an iterated conditional SMC algorithm like this is that path degeneracy can make it very difficult for the PG kernel to mix well over the early time steps in the LSTM. Does this issue appear here? How many particles P are needed to efficiently mix, when considering time series of length T?
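To make the point about closed-form proposals concrete, this is the standard conjugate Gaussian update the review refers to, as a small NumPy sketch. The notation is illustrative (mu_t, Sigma_t for the LSTM-parameterized transition, C, d, R for an affine Gaussian emission) and the paper may parameterize things differently; the point is only that the locally optimal proposal p(z_t | z_{1:t-1}, x_t) is Gaussian with closed-form mean and covariance, which is exactly what breaks down if the emission is a nonlinear Gaussian.

import numpy as np

def optimal_gaussian_proposal(mu_t, Sigma_t, C, d, R, x_t):
    # Transition: p(z_t | z_{1:t-1}) = N(mu_t, Sigma_t)   (from the LSTM)
    # Emission:   p(x_t | z_t)       = N(C z_t + d, R)
    # By conjugacy the proposal p(z_t | z_{1:t-1}, x_t) is Gaussian.
    Sigma_inv = np.linalg.inv(Sigma_t)
    R_inv = np.linalg.inv(R)
    precision = Sigma_inv + C.T @ R_inv @ C
    cov = np.linalg.inv(precision)
    mean = cov @ (Sigma_inv @ mu_t + C.T @ R_inv @ (x_t - d))
    return mean, cov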
iclr_2018_SJxE3jlA-
Humans rely on episodic memory constantly, in remembering the name of someone they met 10 minutes ago, the plot of a movie as it unfolds, or where they parked the car. Endowing reinforcement learning agents with episodic memory is a key step on the path toward replicating human-like general intelligence. We analyze why standard RL agents lack episodic memory today, and why existing RL tasks do not require it. We design a new form of external memory called Masked Experience Memory, or MEM, modeled after key features of human episodic memory. To evaluate episodic memory we define an RL task based on the common children's game of Concentration. We find that a MEM RL agent leverages episodic memory effectively to master Concentration, unlike the baseline agents we tested.
There are a number of attempts to add episodic memory to RL agents. A common approach is to use some sort of recurrent model with a model-free agent. This work follows this approach using what could be considered a memory network with an identity embedding function and tests on 'Concentration', a game which requires matching pairs of cards. They find their model outperforms DNC and LSTM baselines. The primary novelty is the use of an explicitly masked similarity function (with a learned mask) and the Concentration task, which requires more memory than, for example, common tasks adapted from the psychology literature such as the Morris watermaze or T-maze (although in the supervised setting tasks such as Omniglot are quite similar). This work is well-communicated and cites relevant prior work. The authors should also be commended for agreeing to release their code on publication. The primary weakness of this work is its lack of novelty and lack of evidence of generalization of the approach, which limits its significance. The model introduced is a slight variant of memory networks. Additionally, the single task the model is tested on appears custom-designed to favor the model (see next paragraph). While the analysis of the weakness of cosine similarity is interesting, memory networks which compute separate embeddings for the 'label' (content-based label for retrieval) and memory content don't appear to suffer from the same issue as the DNC. They can store only retrieval-relevant content in the label and thus avoid issues with normalization. The observation vector is stored directly in memory without passing through an embedding function, which in general seems quite limiting. However, in the constructed task the labels are low-dimensional, random vectors and there is no noise in the labels (i.e. two cards with the same label are labelled identically, rather than similarly). The authors mention avoiding naturalistic labels such as Omniglot characters (closer to the real version of Concentration) due to the possibility that the agent might memorise the finite set of labels; however, by choosing a large dataset and using a non-overlapping set of examples for the test set this probably could be avoided and would provide a more naturalistic test set. The comparison with the DNC also seems designed to favor their model. DNC has write-gates, which might be relevant in a task with many irrelevant observations, but in this task are clearly going to impair learning. A memory network seems the more appropriate comparison. It's not clear why the DNC model used two different DNCs for computing the policy and value. To demonstrate their model is of more general interest it would be necessary to try it on a wider range of more naturalistic tasks and to compare with model-free agents augmented with memory networks. Simply showing that a customized model can outperform on a single custom, synthetic task is insufficient to demonstrate that these changes are of wider interest. Minor issues: - colorblind seems an odd description for agents which cannot perceive the card face. Why not just 'blind'? colorblind would seem to imply partial perception of the card face. - the observations of the environment are defined explicitly, but not the action space.
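To illustrate the contrast with cosine similarity that the review draws, here is one plausible form of a masked similarity read-out. This is only an illustrative guess at the general idea (a learned per-dimension weighting deciding which observation dimensions act as the retrieval key), not the exact parameterization used in MEM; all names are illustrative.

import numpy as np

def masked_similarity(query, memory, mask_logits):
    # query: (d,), memory: (N, d), mask_logits: (d,) learned parameters.
    # A learned, normalized, non-negative weight per dimension decides which
    # parts of the stored observation count for retrieval, instead of
    # normalizing the whole vector as cosine similarity does.
    mask = np.exp(mask_logits) / np.exp(mask_logits).sum()
    diffs = memory - query
    return -np.sum(mask * diffs ** 2, axis=1)          # higher = more similar

def read(query, memory, mask_logits, beta=10.0):
    # Soft retrieval: softmax over masked similarities, then a weighted sum.
    scores = beta * masked_similarity(query, memory, mask_logits)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ memory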
iclr_2018_BkUDW_lCb
The digitization of data has resulted in making datasets available to millions of users in the form of relational databases and spreadsheet tables. However, a majority of these users come from diverse backgrounds and lack the programming expertise to query and analyze such tables. We present a system that allows for querying data tables using natural language questions, where the system translates the question into an executable SQL query. We use a deep sequence-to-sequence model in which the decoder uses a simple type system of SQL expressions to structure the output prediction. Based on the type, the decoder either copies an output token from the input question using an attention-based copying mechanism or generates it from a fixed vocabulary. We also introduce a value-based loss function that transforms a distribution over locations to copy from into a distribution over the set of input tokens to improve training of our model. We evaluate our model on the recently released WikiSQL dataset and show that our model trained using only supervised learning significantly outperforms the current state-of-the-art Seq2SQL model that uses reinforcement learning.
This paper proposes a model for solving the WikiSQL dataset that was released recently. The main issue with the paper is that its contributions are not new. * The first claimed contribution is to use typing at decoding time (they don't say why, but this helps search and learning). Restricting the type of the decoded tokens based on the programming language has already been done by the Neural Symbolic Machines of Liang et al. 2017. Then Krishnamurthy et al. expanded that in EMNLP 2017 and used typing in a grammar at decoding time. I don't really see why the authors say their approach is simpler; it is only simpler because the sub-language of SQL used in WikiSQL makes doing this in an encoder-decoder framework very simple, but in general SQL is not regular. Of course, even for a CFG this is possible using post-fix notation or fixed-arity pre-fix notation of the language, as has been done by Guu et al. 2017 for the SCONE dataset, and more recently for CNLVR by Goldman et al., 2017. So at least 4 papers have done that in the last year on 4 different datasets, and it is now close to being common practice, so I don't really see this as a contribution. * The authors explain that they use a novel loss function that is better than the RL-based objective used by Zhong et al., 2017. If I understand correctly they did not reimplement Zhong et al. but only compared to their reported numbers, which is a problem because it is hard to judge the role of optimization in the results. Moreover, it seems that the problem they are trying to address is standard - they would like to use a cross-entropy loss when there are multiple tokens that could be gold. The standard solution to this is to use a uniform distribution over all gold tokens and minimize the cross-entropy between the predicted distribution and this uniform gold distribution. The authors re-invent this and find it works better than randomly choosing a gold token or taking the max. But again, this is something that has been done already in the context of pointer networks and other work like See et al. 2017 for summarization and Jia et al., 2016 for semantic parsing. * As for the good results - the data is new, so it is probable that numbers are not very fine-tuned yet, so it is hard to say what is and what is not important for final performance. In general I tend to agree that using RL for this task is probably unnecessary when you have the full program as supervision.
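The "standard solution" described in the second point can be written in a few lines; this is a generic PyTorch-style sketch with illustrative names, not the loss actually used in the paper under review.

import torch

def multi_gold_cross_entropy(logits, gold_mask):
    # logits:    (vocab,) unnormalized scores for one decoding step.
    # gold_mask: (vocab,) float mask with 1.0 for every acceptable gold token.
    log_probs = torch.log_softmax(logits, dim=-1)
    target = gold_mask / gold_mask.sum()        # uniform distribution over gold tokens
    return -(target * log_probs).sum()          # cross-entropy against that target

A closely related alternative used with copy mechanisms is to maximize the marginal probability of the gold set, i.e. minimize the negative log of the summed gold-token probabilities, which weights the gold tokens by the model's own preferences instead of uniformly.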
iclr_2018_S1lN69AT-
Model pruning seeks to induce sparsity in a deep neural network's various connection matrices, thereby reducing the number of nonzero-valued parameters in the model. Recent reports (Han et al., 2015a;Narang et al., 2017) prune deep networks at the cost of only a marginal loss in accuracy and achieve a sizable reduction in model size. This hints at the possibility that the baseline models in these experiments are perhaps severely over-parameterized at the outset and a viable alternative for model compression might be to simply reduce the number of hidden units while maintaining the model's dense connection structure, exposing a similar trade-off in model size and accuracy. We investigate these two distinct paths for model compression within the context of energy-efficient inference in resource-constrained environments and propose a new gradual pruning technique that is simple and straightforward to apply across a variety of models/datasets with minimal tuning and can be seamlessly incorporated within the training process. We compare the accuracy of large, but pruned models (large-sparse) and their smaller, but dense (small-dense) counterparts with identical memory footprint. Across a broad range of neural network architectures (deep CNNs, stacked LSTM, and seq2seq LSTM models), we find large-sparse models to consistently outperform small-dense models and achieve up to 10x reduction in number of non-zero parameters with minimal loss in accuracy.
Summary: This paper presents a thorough examination of the effects of pruning on model performance. Importantly, they compare the performance of "large-sparse" models (large models that underwent pruning in order to reduce the memory footprint of the model) and "small-dense" models, showing that "large-sparse" models typically perform better than "small-dense" models of comparable size (in terms of number of non-zero parameters, and/or memory footprint). They present results across a number of domains (computer vision, language modelling, and neural machine translation) and model types (CNNs, LSTMs). They also propose a way of performing pruning with a pre-defined sparsity schedule, simplifying the pruning process in a way which works across domains. They are able to show convincingly that pruning is an effective way of trading off accuracy for model size (more effective than simply reducing the size of the model architecture), although there does come a point where too much sparsity degrades the model performance considerably; this suggests that pruning a medium-size model to 80%-90% sparsity is likely better than pruning a larger model to >= 95% sparsity. Review: Quality: The quality of the work is high --- the experiments are extensive and thorough. I would have liked to see "small-dense" vs. "large-sparse" comparisons on Inception (only large-sparse results are reported). Clarity: The paper is clearly written, though there is room for improvement. For example, many of the results are presented in a redundant manner (in both tables and figures, where the table and figure are often not next to each other in the document). Also, it is not clear in several cases exactly which training/heldout/test sets are used, and to which partition of the data the reported accuracies/BLEU scores/perplexities correspond. A small section (before "Methods") describing the datasets/features in detail would be helpful. Also, it would have probably been nice to explain all of the tasks and datasets early on, and then present all the results at once (NIT: include the plots in the paper, and move the tables to an appendix). Originality: Although the experiments are informative, the work as a whole is not very original. The proposed method of using a sparsity schedule to perform pruning is simple and effective, but is a rather incremental contribution. The primary contribution of this paper is its experiments, which for the most part compare known methods. Significance: The paper makes a nice contribution, though it is not particularly significant or surprising. The primary observations are: (1) large-sparse is typically better than small-dense, for a fixed number of non-zero parameters and/or memory footprint. (2) There is a point at which increasing the sparsity percentage severely degrades the performance of the model, which suggests that there is a "sweet-spot" when it comes to choosing the model architecture and sparsity percentage which give the best performance (for a fixed memory footprint). Result #1 is not very surprising, given that Han et al (2016) were able to show significant compression without loss in accuracy; thus, because one would expect a smaller dense model to perform worse than the large dense model, it would also perform worse than the large sparse model. Result #2 had already been seen in Han et al (2016) (for example, in Figure 6). Pros: - Very thorough experiments across a number of domains Cons: - Methodological contributions are minor.
- Results are not surprising, and are in line with previous papers.
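For readers who want to see what a gradual pruning schedule of this kind looks like in code, here is a minimal sketch of magnitude pruning driven by a monotone sparsity ramp. The cubic shape of the ramp, the names, and the brute-force thresholding are assumptions for illustration, not the paper's exact recipe.

import numpy as np

def sparsity_at_step(t, s_init, s_final, t_start, prune_steps):
    # Monotone ramp from s_init to s_final over prune_steps training steps.
    frac = float(np.clip((t - t_start) / prune_steps, 0.0, 1.0))
    return s_final + (s_init - s_final) * (1.0 - frac) ** 3

def magnitude_prune_mask(weights, sparsity):
    # Boolean mask zeroing out the smallest-magnitude fraction of weights;
    # during training the mask is re-applied after every gradient update.
    k = int(round(sparsity * weights.size))
    if k == 0:
        return np.ones(weights.shape, dtype=bool)
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    return np.abs(weights) > threshold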
iclr_2018_BkQqq0gRb
VARIATIONAL CONTINUAL LEARNING This paper develops variational continual learning (VCL), a simple but general framework for continual learning that fuses online variational inference (VI) and recent advances in Monte Carlo VI for neural networks. The framework can successfully train both deep discriminative models and deep generative models in complex continual learning settings where existing tasks evolve over time and entirely new tasks emerge. Experimental results show that VCL outperforms state-of-the-art continual learning methods on a variety of tasks, avoiding catastrophic forgetting in a fully automatic way.
This paper proposes a new method, called VCL, for continual learning. This method combines online variational inference for streaming environments with a Monte Carlo method. The authors further propose to maintain a coreset which consists of representative data points from past tasks. Such a coreset is used mainly to avoid the catastrophic forgetting problem in continual learning. Extensive experiments show that VCL performs very well compared with some state-of-the-art methods. The authors present two ideas for continual learning in this paper: (1) the combination of online variational inference and a sampling method, (2) the use of a coreset to deal with the catastrophic forgetting problem. Both ideas have been investigated in the Bayesian literature, while (2) has been recently investigated in continual learning. Therefore, the authors seem to be the first to investigate the effectiveness of (1) for continual learning. From extensive experiments, the authors find that the first idea results in a method which can outperform other state-of-the-art approaches, while the second idea plays little role. The finding of the effectiveness of idea (1) seems to be significant. The authors did a good job of providing a clear presentation, a detailed analysis of related work, an application to deep discriminative models and deep generative models, and a thorough investigation of empirical performance. There are some concerns the authors should consider: - Since the coreset plays little role in the superior performance of VCL, it might be better if the authors rephrase the title of the paper. When the coreset is empty, VCL reduces to online variational inference [Broderick et al., 2013; Ghahramani & Attias, 2000]. Their finding of the effectiveness of online variational inference for continual learning should be reflected in the writing of the paper as well. - The sensitivity of VCL with respect to the size of the coreset is unclear. The authors should investigate this aspect. - What is the trade-off when the size of the coreset increases?
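For readers unfamiliar with the recursion being discussed, the core of VCL (coreset aside) is plain online VI: the approximate posterior from the previous tasks becomes the prior for the current task. A minimal sketch of the resulting per-task objective for a mean-field Gaussian posterior over the weights, with illustrative names:

import torch

def kl_diag_gaussians(mu_q, logvar_q, mu_p, logvar_p):
    # KL( N(mu_q, diag(exp(logvar_q))) || N(mu_p, diag(exp(logvar_p))) )
    var_q, var_p = logvar_q.exp(), logvar_p.exp()
    return 0.5 * (logvar_p - logvar_q + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0).sum()

def vcl_task_loss(expected_nll, q_params, prev_posterior):
    # Negative per-task ELBO: fit the current task's data while staying close
    # (in KL) to the posterior learned on all previous tasks.
    mu_q, logvar_q = q_params
    mu_p, logvar_p = prev_posterior
    return expected_nll + kl_diag_gaussians(mu_q, logvar_q, mu_p, logvar_p)

Here expected_nll would be a Monte Carlo estimate of the expected negative log-likelihood on the current task, which is where the Monte Carlo part of the method enters.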
iclr_2018_HktXuGb0-
Reinforcement learning typically requires carefully designed reward functions in order to learn the desired behavior. We present a novel reward estimation method that is based on a finite sample of optimal state trajectories from expert demonstrations and can be used for guiding an agent to mimic the expert behavior. The optimal state trajectories are used to learn a generative or predictive model of the distribution of "good" states. The reward signal is computed as a function of the difference between the actual next state acquired by the agent and the predicted next state given by the learned generative or predictive model. With this inferred reward function, we perform standard reinforcement learning in the inner loop to guide the agent to learn the given task. Experimental evaluations across a range of tasks demonstrate that the proposed method produces superior performance compared to standard reinforcement learning with either complete or sparse hand-engineered rewards. Furthermore, we show that our method successfully enables an agent to learn good actions directly from expert player videos of games such as Super Mario Bros and Flappy Bird.
To speed up RL algorithms, the authors propose a simple method based on utilizing expert demonstrations. The proposed method consists of explicitly learning a prediction function that maps each time-step to a state. This function is learned from expert demonstrations. The cost of visiting a state is then defined as the distance between that state and the predicted state according to the learned function. This reward is then used in standard RL algorithms to learn to stick close to the expert's demonstrations. An on-loop variant of this method consists of learning a function that maps each state to a next state according to the expert, instead of the off-loop function that maps time-steps to states. While the experiments clearly show the advantage of this method, this is hardly surprising or novel. The concept of encoding the demonstration explicitly in the form of a reward has been around for over a decade. This is the most basic form of teaching by demonstration. Previous works have used other models for generalizing demonstrations (GMMs, GPs, kernel methods, neural nets, etc.). This paper uses a three-layered fully connected auto-encoder (which is not that deep of a model, btw) for the same purpose. The idea of using this model as a reward instead of directly cloning the demonstrations is pretty straightforward. Other comments: - Most IRL methods would work just fine by defining rewards on states only and ignoring actions altogether. If you know the transition function, you can choose actions that lead to highly rewarding states, so you don't need to know the expert's executed actions. - "We assume that maximizing likelihood of next step prediction in equation 1 will be globally optimized in RL". Could you elaborate more on this assumption? Your model finds rewards based on local state features, where a greedy (one-step planning) policy would reproduce the expert's demonstrations (if the system is deterministic). It does not compare the global performance of the expert to alternative policies (as is typically done in IRL). - Related to the previous point: a reward function that makes every step of the expert optimal may not always exist. The expert may choose to go to terrible states with the hope of getting to a highly rewarding state in the future. Therefore, the objective functions set in this paper may not be the right ones, unless your state description contains features related to future states so that you can incorporate future rewards in the current state (like in the reacher task, where a single image contains all the information about the problem). What you need is actually features that can capture the value function (like in DQN) and not just the immediate reward (as is done in IRL methods). - What if in two different trajectories, the expert chooses opposite actions for the same state appearing in both trajectories? For example, there are two shortest paths to a goal, one starting with going left and the other with going right. If you try to generate a state that minimizes the sum of distances to the two states (left and right ones), then you may choose to remain in the middle, which is suboptimal. You wouldn't have this issue with regular IRL techniques, because you can explain both behaviors with future rewards instead of trying to explain every action of the expert using only a local state description.
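A minimal sketch of the kind of reward being described, for the on-loop variant where a model predicts the next state given the current one. The exact functional form (a Gaussian kernel on the squared error below, rather than, say, a plain negative distance) and all names are assumptions for illustration, not the paper's definition.

import numpy as np

def imitation_reward(next_state, predicted_next_state, sigma=1.0):
    # High reward when the state the agent actually reached matches what the
    # model trained on expert trajectories predicted should come next.
    err = np.sum((np.asarray(next_state) - np.asarray(predicted_next_state)) ** 2)
    return float(np.exp(-err / (2.0 * sigma ** 2)))   # in (0, 1]

This reward is then handed to any standard RL algorithm in the inner loop.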
iclr_2018_H1VjBebR-
THE ROLE OF MINIMAL COMPLEXITY FUNCTIONS IN UNSUPERVISED LEARNING OF SEMANTIC MAPPINGS We discuss the feasibility of the following learning problem: given unmatched samples from two domains and nothing else, learn a mapping between the two, which preserves semantics. Due to the lack of paired samples and without any definition of the semantic information, the problem might seem ill-posed. Specifically, in typical cases, it seems possible to build infinitely many alternative mappings from every target mapping. This apparent ambiguity stands in sharp contrast to the recent empirical success in solving this problem. We identify the abstract notion of aligning two domains in a semantic way with concrete terms of minimal relative complexity. A theoretical framework for measuring the complexity of compositions of functions is developed in order to show that it is reasonable to expect the minimal complexity mapping to be unique. The measured complexity used is directly related to the depth of the neural networks being learned and a semantically aligned mapping could then be captured simply by learning using architectures that are not much bigger than the minimal architecture. Various predictions are made based on the hypothesis that semantic alignment can be captured by the minimal mapping. These are verified extensively. In addition, a new mapping algorithm is proposed and shown to lead to better mapping results.
The paper addresses the problem of learning mappings between different domains without any supervision. It belongs to the recent family of papers based on GANs. The paper states three conjectures (predictions in the paper): 1. GANs are sufficient to learn « semantic mappings » in an unsupervised way, if the considered networks are small enough. 2. Controlling the complexity of the network, i.e. the number of layers, is crucial to come up with what is called « semantic » mappings when learning in an unsupervised way. More precisely, there is a tradeoff to achieve between the complexity of the model and its simplicity. A rich model is required in order to minimize the discrepancy between the distributions of the domains, while a not too complex model is necessary to avoid mappings that are not « meaningful ». To this aim, the authors introduce a new notion of function complexity which can be seen as a proxy for Kolmogorov complexity. The introduced notion is very simple and intuitive and is defined as the depth of a network which is necessary to implement the considered function. Based on this definition, and assuming identifiability (i.e. uniqueness up to invariants), and for networks with Leaky ReLU activations, the authors prove that if the number of mappings which preserve a degree of discrepancy (density preserving in the text) is small, then the set of « minimal » mappings of complexity C that achieve the same degree of discrepancy is also small. This result is related to the third conjecture of the paper, that is: 3. the number of mappings which preserve a degree of discrepancy is small. The authors also prove a byproduct result stating that identifiability holds for Leaky ReLU networks with one hidden layer. The paper comes with a series of experiments to empirically « demonstrate » the conjectures. The paper is well written. The different ideas are clearly stated and discussed, and hence open interesting questions and debates. Some of these questions need to be addressed IMHO: - A critical general question: if the addressed problem is the alignment between e.g. images and not image generation, why not formalize the problem as a similarity search one (using e.g. EMD or any other transport metric)? The alignment task then reduces to computing a ranking from this similarity. I have the impression that we use a jackhammer to break a small brick here (no offence). But maybe I'm missing something here. - Several works consider the size and the depth of the network as hyper-parameters to optimize, and this is not new. What is the actual contribution of the paper w.r.t. this body of work? - It is assumed that the GANs are trained without any problem, and therefore work in an optimal regime. But the training of a GAN is in itself a problem. How does this affect the paper's statements and results? - Are the results still valid for another measure of discrepancy based, for instance, on another metric, e.g. Wasserstein? Some minor remarks: - p3: the following sentence is not clear «  Our hypothesis is that the lowest complexity small discrepancy mapping approximates the alignment of the target semantic function. » - p6: $C^{\epsilon_0}_{A,B}$ is used (after Def. 2) before being defined. - p7: build->built Section II: A diagram explaining the different mappings (h_A, h_B, h_AB, etc.) and their spaces (D_A, D_B, D_Z) would greatly help the understanding.
Paper's pros: - clarity - technical results. Cons: - doubts about the interest and originality. The authors provided detailed and convincing answers to my questions. I thank them for that. My scores were changed accordingly.
iclr_2018_HJ_X8GupW
Here we study the problem of learning labels for large text corpora where each document can be assigned a variable number of labels. The problem is trivial when the label dimensionality is small and can be easily solved by a series of one-vs-all classifiers. However, as the label dimensionality increases, the parameter space of such one-vs-all classifiers becomes extremely large and outstrips the memory. Here we propose a latent variable model to reduce the size of the parameter space, but still efficiently learn the labels. We learn the model using spectral learning and show how to extract the parameters using only three passes through the training dataset. Further, we analyse the sample complexity of our model using PAC learning theory and then demonstrate the performance of our algorithm on several benchmark datasets in comparison with existing algorithms.
The paper addresses the problem of multi-label learning for text corpora and proposes to tackle the problem using tensor factorization methods. Some analysis and experimental results for the proposed algorithm are presented. QUALITY: I find the quality of the results in this paper rather low. The proposed probabilistic model is defined ambiguously. The authors then look at joint probability distributions of co-occurrence of two and three words, which gives a matrix and a tensor, respectively. They propose to match this matrix and tensor to their sample estimates and refer to this procedure as the moment matching method, which it is not. They then apply a standard two-step technique from the moment matching literature consisting of whitening and orthogonal tensor factorization. However, in their case this does not have much statistical meaning. Indeed, whitening of the covariance matrix is usually justified by the scaling unidentifiability of the problem. In their case, the mathematics works because of the orthogonal unidentifiability of the square root of a matrix. Furthermore, the proposed sample estimators do not actually estimate the densities they are dealing with (see, e.g., Eq. (16) and (17)). Their theoretical analysis seems like a straightforward extension of the analysis by Anandkumar et al. (2012, 2014); however, I find it difficult to assess this analysis due to numerous ambiguities in the problem formulation and method development. This justifies my statement at the beginning of the paragraph. CLARITY: The paper is not well written and, therefore, is difficult to assess. Many important details are omitted, the formulation of the model is self-contradictory, the standard concepts and notations are sometimes abused, and some statements are wrong. I provide some examples in the detailed comments below. ORIGINALITY AND SIGNIFICANCE: The idea of applying tensor factorization approaches to multi-label learning is novel to my knowledge and is a pro of the paper. However, I have trouble finding other pros in this submission because the clarity is quite low and in the present form there is no novelty in the proposed procedure. Moreover, the authors claim to work with densities, but end up estimating other quantities, which are not guaranteed to have the desirable form. They also emphasize the fact that there is a simplex constraint on the estimated parameters, but this constraint is completely ignored by the algorithm and, in general, won't be satisfied in practice. I think the authors should do some more work before this paper can be published. DETAILED COMMENTS: Since I am quite critical of the paper, I point out some examples of drawbacks or flaws of this paper: - The proposed model (Section 2) is not well defined. In particular, the description in Section 2 is not sufficient to understand the proposed model; the plate diagram in Figure 2 is not consistent with the text. It is not mentioned how at least some conditional distributions behave (e.g., tokens given labels or states). The diagram in Fig. 1 does not help since it isn't consistent with the text (e.g. the elements of labels or states are not conditionally independent). The model is very close to latent Dirichlet allocation by Blei et al. (2003), but the differences are not discussed. - The standard terminology is often abused. For example, the proposed approach is referred to as the method of moments when it is not.
In Section 2.1, the authors aim to match joint distributions (not the moments) to their empirical approximations (which are also wrong; see below). The usage of tokens and documents is interchanged without any explanation. - The use of the whitening approach is not justified in their setting of working with joint distributions of pairs and triples, and it has no statistical meaning. No explanation is provided. I would definitely not call this whitening. - In Section 2.2, the notation is not defined and is different from what is usually used in the literature. For example, Eq. (15) does not make much sense as is. One could guess from the context that they are talking about the eigenvectors of an orthogonal tensor as defined in, e.g., Anandkumar et al. (2014). - In Section 3, the authors emphasize the fact that their parameters are constrained to the probability simplex, but this constraint is not ensured in the proposed algorithm (Alg. 1). - Importantly, the estimators of the matrix M_2 and tensor M_3 do not make much sense to me. For example, for estimating M_2 it would be reasonable to average over all word pairs, i.e. something like [M_2]_{ij} = 1/L \sum_{w_k \not = w_l} P(w_k = v_i, w_l = v_j), where L is the number of pairs. This is different from the expression in Eq. (16), which is just a rescaled non-central second moment. A similar issue holds for the order-3 estimator. - The factorization procedure does not ensure non-negativity of the obtained parameters and, therefore, the rescaled parameters are not guaranteed to belong to the probability simplex. I could not find any explanation of this issue. - I would explain the good plots in the experimental section, potentially, by the fact that the authors do algorithmically something different from what they aim to do, because the estimators do not estimate the desired entities (i.e. are not consistent). The procedure looks to me quite similar to the procedure for LDA, hence the reasonable results. However, the authors do not justify their proposed method.
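To make the suggested pair-averaged estimator concrete, here is a small sketch of the empirical version of the formula above: average co-occurrence counts over all ordered pairs of distinct token positions within each document. The per-document pairing and the variable names are one interpretation of the reviewer's formula, not code from the paper.

import numpy as np

def pair_averaged_M2(documents, vocab_size):
    # documents: list of lists of token ids.
    M2 = np.zeros((vocab_size, vocab_size))
    n_pairs = 0
    for doc in documents:
        for k, wi in enumerate(doc):
            for l, wj in enumerate(doc):
                if k != l:                  # ordered pairs of distinct positions
                    M2[wi, wj] += 1.0
                    n_pairs += 1
    return M2 / max(n_pairs, 1)             # entries sum to 1, unlike a raw moment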
iclr_2018_S19dR9x0b
ALTERNATING MULTI-BIT QUANTIZATION FOR RECURRENT NEURAL NETWORKS Recurrent neural networks have achieved excellent performance in many applications. However, on portable devices with limited resources, the models are often too large to deploy. For server applications with large-scale concurrent requests, the latency during inference can also be very critical for costly computing resources. In this work, we address these problems by quantizing the network, both weights and activations, into multiple binary codes {−1, +1}. We formulate the quantization as an optimization problem. Under the key observation that once the quantization coefficients are fixed the binary codes can be derived efficiently by a binary search tree, alternating minimization is then applied. We test the quantization for two well-known RNNs, i.e., long short term memory (LSTM) and gated recurrent unit (GRU), on language models. Compared with the full-precision counterpart, by 2-bit quantization we can achieve ∼16× memory saving and ∼6× real inference acceleration on CPUs, with only a reasonable loss in accuracy. By 3-bit quantization, we can achieve almost no loss in accuracy or even surpass the original model, with ∼10.5× memory saving and ∼3× real inference acceleration. Both results beat the existing quantization works by large margins. We extend our alternating quantization to image classification tasks. In both RNNs and feedforward neural networks, the method also achieves excellent performance.
Revision: The authors have addressed my concerns around the achievable speedup. I am increasing my score to 7. Original Review: The paper proposes a technique for quantizing neural network weight matrices by representing columns of weight matrices as linear combinations of binary (+1/-1) vectors. Given a weight vector, the paper proposes an alternating optimization procedure to estimate the set of k binary vectors and coefficients that best represent (in terms of MSE) the original vector. This yields a k-bit quantization. First, the coefficients/binary weights are initialized using a greedy procedure proposed in prior work. Then, the binary weights are updated using a clever binary search procedure, followed by updates to the coefficients. Experiments are conducted in an RNN context for some language modeling tasks. The paper is relatively easy to read, and the technique is clearly explained. The technique is, as far as I can tell, novel, and does seem to represent an improvement over existing approaches for similar multi-bit quantization strategies. I have a few questions/concerns. First, I am quite skeptical of many of the speedup calculations: these are rather delicate to do properly, and depend on the specific instructions available, SIMD widths, the number of ALUs present in a core, etc. All of these can easily shift numbers around by a factor of 2-8x. Without an implementation in hand, comparing against a well-optimized reference GEMM for full floating point, it's not clear how much faster this approach really would be in practice. Also, the online quantization of activations doesn't seem to be factored into the speedup calculations, and no benchmarks are provided demonstrating how fast the quantization is (unless I'm missing something). This is concerning since the claimed speedups aren't possible without the online quantization of activations. It would have been nice to have more discussion of/comparison with other approaches capable of 2-4 bit quantization, such as some of the recent work on ternary quantization, product quantization approaches, or at least scalar (per-dimension) k-means (non-uniform quantization). Finally, the experiments are reasonable, but the choice of RNN setting isn't clear to me. It would have been easier to compare to prior work if the experiments also included some standard image classification tasks (e.g., CIFAR10). Overall though, I think the paper does just enough to warrant acceptance.
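As a sanity check on the alternating procedure described above, here is a small NumPy sketch for quantizing a single weight vector into k binary codes. The paper's greedy initialization and binary-search-tree lookup are replaced here by random initialization and brute-force enumeration over the 2^k code words, which gives the same per-entry minimizer for small k but is slower; those two simplifications and all names are assumptions of this sketch.

import numpy as np
from itertools import product

def alternating_quantize(w, k, iters=20):
    # Approximate w (shape (n,)) by B @ alpha with B in {-1,+1}^(n,k), alpha in R^k.
    B = np.sign(np.random.randn(len(w), k))
    B[B == 0] = 1.0
    codes = np.array(list(product([-1.0, 1.0], repeat=k)))    # all 2^k code words
    alpha = np.ones(k)
    for _ in range(iters):
        # Step 1: with the binary codes fixed, the coefficients are a least-squares fit.
        alpha, *_ = np.linalg.lstsq(B, w, rcond=None)
        # Step 2: with the coefficients fixed, each entry independently picks the
        # code word whose reconstructed value codes @ alpha is closest to it.
        values = codes @ alpha
        idx = np.argmin(np.abs(w[:, None] - values[None, :]), axis=1)
        B = codes[idx]
    return B, alpha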
iclr_2018_HkCsm6lRb
GENERATIVE MODELS OF VISUALLY GROUNDED IMAGINATION It is easy for people to imagine what a man with pink hair looks like, even if they have never seen such a person before. We call the ability to create images of novel semantic concepts visually grounded imagination. In this paper, we show how we can modify variational auto-encoders to perform this task. Our method uses a novel training objective, and a novel product-of-experts inference network, which can handle partially specified (abstract) concepts in a principled and efficient way. We also propose a set of easy-to-compute evaluation metrics that capture our intuitive notions of what it means to have good visual imagination, namely correctness, coverage, and compositionality (the 3 C's). Finally, we perform a detailed comparison of our method with two existing joint image-attribute VAE methods (the JMVAE method of Suzuki et al. (2017) and the BiVCCA method of Wang et al. (2016b)) by applying them to two datasets: the MNIST-with-attributes dataset (which we introduce here), and the CelebA dataset (Liu et al., 2015).
This paper presents a multi-modal extension of the variational autoencoder (VAE) for the task of "visually grounded imagination." In this task, the model learns a joint embedding of the images and the attributes. The proposed model is novel but incremental compared to existing frameworks. The authors also introduce new evaluation metrics to evaluate the model performance concerning correctness, coverage, and compositionality. Pros: 1. The paper is well-written, and the contribution (both the model and the evaluation metrics) can potentially be very useful in the community. 2. The discussion comparing the related work/baseline methods is insightful. 3. The proposed model addresses many important problems, such as attribute learning, disentangled representation learning, learning with missing values, and proper evaluation methods. Cons/questions: 1. The motivation for the model choice of q is not clear. Compared to BiVCCA, apart from the differences that the authors discussed, a big difference is the choice of q. BiVCCA uses two inference networks, q(z|x) and q(z|y), while the proposed method uses three: q(z|x), q(z|y), and q(z|x,y). How does such a model choice affect the final performance? 2. The baselines are not necessarily sufficient. The paper compares against the vanilla version of BiVCCA but not the version with factorized representations. In the original VAECCA paper, the extension of using factorized representations (private and shared) improved the performance. The authors should also compare against this extension of VAECCA. 3. Some details are not clear. a) How are the scaling parameters \lambda_y and \beta_y set/learned? If they are set as hyper-parameters, how does the performance change with respect to them? b) The discussion of the experimental results is not sufficient. For example, it is unclear why JMVAE performs much better than the proposed model when all attributes are given. What is the conclusion from Figure 4(b)? The JMVAE seems to generate more diverse (better coverage) results, which is not consistent with the claims in the related work. The same applies to Figure 5.
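For reference, the product-of-experts combination that such an inference network typically relies on has a simple closed form for diagonal Gaussian experts: precisions add and the mean is the precision-weighted average. The sketch below uses illustrative names and does not restate exactly how the paper weights in the prior expert.

import numpy as np

def product_of_gaussian_experts(mus, logvars):
    # mus, logvars: arrays of shape (num_experts, latent_dim).
    precisions = np.exp(-np.asarray(logvars))
    mus = np.asarray(mus)
    prec = precisions.sum(axis=0)
    var = 1.0 / prec
    mu = var * (precisions * mus).sum(axis=0)
    return mu, np.log(var)

Handling a partially specified concept then amounts to simply dropping the experts for the unobserved modalities or attributes before taking the product.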
iclr_2018_ByaQIGg0-
We propose a novel method that makes use of deep neural networks and gradient descent to perform automated design on complex real-world engineering tasks. Our approach works by training a neural network to mimic the fitness function of a design optimization task and then, using the differentiable nature of the neural network, performing gradient descent to maximize the fitness. We demonstrate this method's effectiveness by designing an optimized heat sink and both 2D and 3D airfoils that maximize the lift-drag ratio under steady-state flow conditions. We highlight that our method has two distinct benefits over other automated design approaches. First, evaluating the neural network's prediction of fitness can be orders of magnitude faster than simulating the system of interest. Second, using gradient descent allows the design space to be searched much more efficiently than with gradient-free methods. These two strengths work together to overcome some of the current shortcomings of automated design.
This paper introduces an appealing application of deep learning: use a deep network to approximate the behavior of a complex physical system, and then design optimal devices (e.g. airfoil shapes) by optimizing this network with respect to its inputs. Overall, this research direction seems fruitful, both in terms of different applications and in terms of extra machine learning that could be done to improve performance, such as ensuring that the optimization doesn't leave the manifold of reasonable designs. On one hand, I would suggest that this work would be better placed in an engineering venue focused on fluid dynamics. On the other hand, I think the ICLR community would benefit from learning about the opportunities to work on problems of this nature. =Quality= The authors seem to be experts in their field. They could have done a better job explaining the quality of their final results, though. It is unclear if they are comparing to strong baselines. =Clarity= The overall setup and motivation is clear. =Originality= This is an interesting problem that will be novel to most members of the ICLR community. I think that this general approach deserves further attention from the community. =Major Comments= * It's hard for me to understand if the performance of your method is actually good. You show that it outperforms simulated annealing. Is this the state of the art? How would an experienced engineer perform if he or she just sat down and drew the shape of an airfoil, without relying on any computational simulation at all? * You can afford to spend lots of time interacting with the deep network in order to optimize it really well with respect to the inputs. Why not do lots of random initializations for the optimization? Isn't that a good way to help avoid local optima? * I'd like to see more analysis of the reliability of your deep-network-based approximation to the physics simulator. For example, you could evaluate the deep-net-predicted drag ratio vs. the simulator-predicted drag ratio at the value of the parameters corresponding to the final optimized airfoil shape. If there's a gap, it suggests that your NN approximation might not have been that accurate. =Minor Comments= * "We also found that adding a small amount of noise too the parameters when computing gradients helped jump out of local optima" Generally, people add noise to the gradients, not to the values of the parameters. See, for example, uses of Langevin dynamics as a non-convex optimization method. * You have a complicated method for constraining the parameters to be in [-0.5,0.5]. Why not just enforce this constraint by doing projected gradient descent? For the constraint structure you have, projection is trivial (just clip the values). * "The gradient decent approach required roughly 150 iterations to converge where as the simulated annealing approach needed at least 800." This is of course confounded by the cost of constructing the training set, which is necessary for the gradient descent approach. I'd point out that this construction can be done in parallel, so it's less of a computational burden. * I'd like to hear more about the effects of different parametrizations of the airfoil surface. You optimize the coefficients of a polynomial. Did you try anything else? * Fig 6: What does 'clean gradients' mean? Can you make this more precise? * The caption for Fig 5 should explain what each of the subfigures is.
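To make the overall loop concrete (train a surrogate of the simulator, then run gradient ascent on its input), here is a minimal PyTorch-style sketch that also includes the projection step suggested in the minor comments; all names, the optimizer choice, and the default hyper-parameters are illustrative, not the paper's.

import torch

def optimize_design(fitness_net, n_params, steps=150, lr=0.05):
    # Gradient ascent on the design parameters of a trained, differentiable
    # surrogate of the simulator, projected onto [-0.5, 0.5] by clipping.
    params = torch.zeros(n_params, requires_grad=True)
    opt = torch.optim.Adam([params], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = -fitness_net(params.unsqueeze(0)).squeeze()   # maximize fitness
        loss.backward()
        opt.step()
        with torch.no_grad():
            params.clamp_(-0.5, 0.5)                         # trivial projection
    return params.detach()

Running this from many random initializations, as the review suggests, is cheap because only the surrogate network is evaluated.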
iclr_2018_rJ5C67-C-
Data structured in the form of overlapping or non-overlapping sets is found in a variety of domains, sometimes explicitly but often subtly. For example, teams, which are of prime importance in social science studies, are "sets of individuals"; "item sets" in pattern mining are sets; and for various types of analysis in language studies a sentence can be considered as a "set or bag of words". Although building models and inference algorithms for structured data has been an important task in the fields of machine learning and statistics, research on "set-like" data still remains less explored. Relationships between pairs of elements can be modeled as edges in a graph. However, for modeling relationships that involve all members of a set, a hyperedge is a more natural representation. In this work, we focus on the problem of embedding the hyperedges of a hypergraph (a network of overlapping sets) into a low-dimensional vector space. We propose a probabilistic deep-learning based method as well as a tensor-based algebraic model, both of which capture the hypergraph structure in a principled manner without losing set-level information. Our central focus is to highlight the connection between hypergraphs (topology), tensors (algebra) and probabilistic models. We present a number of interesting baselines, some of which adapt existing node-level embedding models to the hyperedge level, as well as sequence-based language techniques which are adapted for the set-structured hypergraph topology. The performance is evaluated with a network of social groups and a network of word phrases. Our experiments show that, accuracy-wise, our methods perform similarly to baselines which are not designed for hypergraphs. Moreover, our tensor-based method is quite efficient compared to the deep-learning based auto-encoder method. We therefore argue that we have proposed more general methods which are suited for hypergraphs (and therefore also for graphs) while maintaining accuracy and efficiency.
The paper studies different methods for defining hypergraph embeddings, i.e. defining vectorial representations of the set of hyperedges of a given hypergraph. It should be noted that the framework does not allow computing a vectorial representation of a set of nodes not already given as a hyperedge. A set of methods is presented: the first is based on an auto-encoder technique; the second is based on tensor decomposition; the third derives from sentence embedding methods. The fourth extends node embedding techniques and the last one uses spectral methods. The first two methods plainly use the set structure of hyperedges. Experimental results are provided on semi-supervised regression tasks. They show very similar performance for all methods and variants. Also, run-times are compared and the results are as expected. In conclusion, the paper gives an overview of methods for computing hypernode embeddings. This is interesting in its own right. Nevertheless, as the target problem on hypergraphs is left unspecified, it is difficult to infer conclusions from the study. Therefore, I am not convinced that the paper should be published in ICLR'18. * typos * Recent surveys on graph embeddings have been published in 2017 and should be cited, e.g. "A comprehensive survey of graph embedding ..." by Cai et al. * Preliminaries. The occurrence numbers R(g_i) are not modeled in the hypergraphs. A graph N_a is defined but not used in the paper. * Section 3.1. The procedure for sampling hyperedges in the lattice should be given. At least, you should explain how it is made efficient when the number of nodes is large. * Section 3.2. The method seems to be restricted to cases where the cardinality of hyperedges can take a small number of values. This is discussed in Section 3.6 but the discussion is not convincing enough. * Section 3.3. The term Sen2vec is not common knowledge. * Section 3.3. The length of the sentences depends on the number of permutations of $k$ elements. How can you deal with large k? * Section 3.4 and Section 3.5. The methods proposed in these two sections should be related to previous works on hypergraph kernels, i.e. there should be mention of the clique expansion and star expansion of hypergraphs. This leads to the question of why graph embedding methods on these expansions have not been considered in the paper. * Section 4.1. Only hyperedges of cardinality in [2,6] are considered. This seems a rather strong limitation and this hypothesis does not seem pertinent in many applications. * Section 4. For online multi-player games, hypernode embeddings only allow evaluating existing teams, i.e. teams already existing as hyperedges in the input hypergraph. One of the most important problems for multi-player games is team making, where team evaluation should be made for all possible teams. * Section 5. Seems redundant with the Introduction.
iclr_2018_r1HNP0eCW
Every second, innumerable text data, including all kinds of news, reports, messages, reviews, comments, and tweets, are generated on the Internet, written not only in English but also in other languages such as Chinese, Japanese, French and so on. Not only SNS sites but also worldwide news agencies such as Thomson Reuters News provide news reported in more than 20 languages, reflecting the significance of multilingual information. In this research, by taking advantage of the multilingual text resources provided by Thomson Reuters News, we developed a bidirectional LSTM based method to calculate cross-lingual semantic text similarity for long text and short text respectively. Thus, users could understand a situation comprehensively, by investigating similar and related cross-lingual articles, when an important news item comes in.
* PAPER SUMMARY *
This paper proposes a siamese net architecture to compare text in different languages. The proposed architecture builds upon the siamese RNN by Mueller and Thyagarajan. The proposed approach is evaluated on cross-lingual bitext retrieval.

* REVIEW SUMMARY *
This paper is hard to read and needs proof-reading by a person proficient in English. The experiments are extremely limited, on a toy task. No baseline other than (Mueller and Thyagarajan, 2016) is considered. The related work section lacks important references. It is hard to find positive points that would advocate for a presentation at ICLR.

* DETAILED REVIEW *
On related work, the authors need to consider related work on cross-lingual retrieval and multilingual document representation:
- Bai, Bing, et al. "Learning to rank with (a lot of) word features." Information Retrieval 13.3 (2010): 291-314. (Section 4).
- Schwenk, H., Tran, K., Firat, O., & Douze, M. Learning Joint Multilingual Sentence Representations with Neural Machine Translation. ACL Workshop on Representation Learning for NLP, 2017.
- Karl Moritz Hermann and Phil Blunsom. Multilingual models for compositional distributed semantics. In ACL 2014, pages 58–68.
- Hieu Pham, Minh-Thang Luong, and Christopher D. Manning. Learning distributed representations for multilingual text sequences. In Workshop on Vector Space Modeling for NLP, 2015.
- Xinjie Zhou, Xiaojun Wan, and Jianguo Xiao. Cross-lingual sentiment classification with bilingual document representation learning. In ACL 2016.
- ...

On evaluation, the authors need to learn about standard retrieval evaluation metrics such as precision at top 10, etc., and use them. For instance, this book would be a good read:
- Baeza-Yates, Ricardo, and Berthier Ribeiro-Neto. Modern Information Retrieval. Vol. 463. New York: ACM Press, 1999.

On the learning objective, the authors might want to read about learning-to-rank objectives for information retrieval, for instance:
- Liu, Tie-Yan. "Learning to rank for information retrieval." Foundations and Trends in Information Retrieval 3.3 (2009): 225-331.
- Burges, Christopher JC. "From RankNet to LambdaRank to LambdaMART: An overview." Learning 11, no. 23-581 (2010): 81.
- Chapelle, Olivier, and Yi Chang. "Yahoo! learning to rank challenge overview." Proceedings of the Learning to Rank Challenge, 2011.
- Herbrich, Ralf, Thore Graepel, and Klaus Obermayer. "Large margin rank boundaries for ordinal regression." (2000).

On the experimental setup, the authors should consider a setup with more than 8k training documents. More importantly, ranking a document set of 1k documents is extremely small, toyish. For instance, (Schwenk et al., 2017) search through 1.5 million sentences, and (Bai, Bing, et al., 2009) search through 140k documents.

Since you mainly introduce 2 modifications with respect to (Mueller and Thyagarajan, 2016), i.e., (i) not sharing the parameters on both branches of the siamese network and (ii) the fully connected net on top, I would suggest measuring the effect of each of them, both on the multilingual data and on the SICK dataset used in (Mueller and Thyagarajan, 2016).
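As a concrete example of the kind of metric meant by "precision at top 10", here is the standard definition, stated for reference rather than taken from the paper under review:

```latex
% Precision at rank k for a query q with retrieved list r_1, ..., r_k:
\[
  \mathrm{P@}k(q) \;=\; \frac{1}{k}\,
  \big|\{\, i \le k \;:\; r_i \text{ is relevant to } q \,\}\big| ,
\]
% reported as the average of P@k(q) over all queries in the test set.
```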
iclr_2018_r1lUOzWCW
DEMYSTIFYING MMD GANS We investigate the training and performance of generative adversarial networks using the Maximum Mean Discrepancy (MMD) as critic, termed MMD GANs. As our main theoretical contribution, we clarify the situation with bias in GAN loss functions raised by recent work: we show that gradient estimators used in the optimization process for both MMD GANs and Wasserstein GANs are unbiased, but learning a discriminator based on samples leads to biased gradients for the generator parameters. We also discuss the issue of kernel choice for the MMD critic, and characterize the kernel corresponding to the energy distance used for the Cramér GAN critic. Being an integral probability metric, the MMD benefits from training strategies recently developed for Wasserstein GANs. In experiments, the MMD GAN is able to employ a smaller critic network than the Wasserstein GAN, resulting in a simpler and faster-training algorithm with matching performance. We also propose an improved measure of GAN convergence, the Kernel Inception Distance, and show how to use it to dynamically adapt learning rates during GAN training.
The quality and clarity of this work are very good. The introduction of the kernel inception metric is well-motivated and novel, to my knowledge. With the mention of a bit more related work (although this is already quite good), I believe that this could be a significant resource for understanding MMD GANs and how they fit into the larger model zoo.

Pros
- best description of MMD GANs that I have encountered
- good contextualization of related work and descriptions of relationships, at least among the works surveyed
- reasonable proposed metric (KID) and comparison with other scores
- proof of unbiased gradient estimates is a solid contribution

Cons
- although the review of related work is very good, it does focus on ~3 recent papers. As a review, it would be nice to see mention (even just in a list with citations) of how other models in the zoo fit in
- connection between IPMs and MMD gets a bit lost; a figure (e.g. flow chart) would help
- wavers a bit between proposing/proving novel things vs. reviewing and lacks some overall structure/storyline
- Figure 1 is a bit confusing; why is KID tested without replacement, and FID with? Why 100 vs 10 samples? The comparison is good to have, but it's hard to draw any insight with these differences in the subfigures. The figure caption should also explain what we are supposed to get out of looking at this figure.

Specific comments:
- I suggest bolding terms where they are defined; this makes it easy for people to scan/find (e.g. Jensen-Shannon divergence, Integral Probability Metrics, witness functions, Wasserstein distance, etc.)
- Although they are common knowledge in the field, because this is a review it could be helpful to provide references or brief explanations of e.g. JSD, KL, Wasserstein distance, RKHS, etc.
- a flow chart (of GANs, IPMs, MMD, etc.), mentioning a few more models than are discussed in depth here, would be *very* helpful.
- page 2, middle paragraph, you mention "...constraints to ensure the kernel distribution embeddings remained injective"; it would be helpful to add a sentence here to explain why that's a good thing.
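As a side note for readers unfamiliar with the proposed metric: my understanding is that KID amounts to an unbiased estimate of the squared MMD between Inception representations of real and generated images, under some fixed kernel. A minimal sketch of such an estimator follows; the polynomial-kernel parameters and the choice of feature extractor here are my assumptions for illustration, not a statement of the paper's exact definition.

```python
import numpy as np

def polynomial_kernel(x, y, degree=3, gamma=None, coef0=1.0):
    # k(a, b) = (gamma * <a, b> + coef0) ** degree
    if gamma is None:
        gamma = 1.0 / x.shape[1]
    return (gamma * x.dot(y.T) + coef0) ** degree

def mmd2_unbiased(feats_real, feats_fake, degree=3):
    """Unbiased estimate of the squared MMD between two samples of feature vectors."""
    m, n = feats_real.shape[0], feats_fake.shape[0]
    k_xx = polynomial_kernel(feats_real, feats_real, degree)
    k_yy = polynomial_kernel(feats_fake, feats_fake, degree)
    k_xy = polynomial_kernel(feats_real, feats_fake, degree)
    # Drop diagonal terms to obtain unbiased within-sample averages.
    term_xx = (k_xx.sum() - np.trace(k_xx)) / (m * (m - 1))
    term_yy = (k_yy.sum() - np.trace(k_yy)) / (n * (n - 1))
    term_xy = k_xy.mean()
    return term_xx + term_yy - 2.0 * term_xy

# feats_real / feats_fake would be Inception activations of real and
# generated images, e.g. arrays of shape (num_images, 2048).
```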
iclr_2018_ry9tUX_6-
We show that Entropy-SGD (Chaudhari et al., 2017), when viewed as a learning algorithm, optimizes a PAC-Bayes bound on the risk of a Gibbs (posterior) classifier, i.e., a randomized classifier obtained by a risk-sensitive perturbation of the weights of a learned classifier. Entropy-SGD works by optimizing the bound's prior, violating the hypothesis of the PAC-Bayes theorem that the prior is chosen independently of the data. Indeed, available implementations of Entropy-SGD rapidly obtain zero training error on random labels and the same holds of the Gibbs posterior. In order to obtain a valid generalization bound, we show that an ε-differentially private prior yields a valid PAC-Bayes bound, a straightforward consequence of results connecting generalization with differential privacy. Using stochastic gradient Langevin dynamics (SGLD) to approximate the well-known exponential release mechanism, we observe that generalization error on MNIST (measured on held out data) falls within the (empirically nonvacuous) bounds computed under the assumption that SGLD produces perfect samples. In particular, Entropy-SGLD can be configured to yield relatively tight generalization bounds and still fit real labels, although these same settings do not obtain state-of-the-art performance.
Brief summary: Assume any neural net model with weights w and a prior P on the weights. PAC-Bayes risk bounds show that, for ALL other distributions Q on the weights, the sample risk (w.r.t. the samples in the data set) and the expected risk (w.r.t. the distribution generating the samples) of the random classifier chosen according to Q, averaged over Q, are close up to a fudge factor that is the KL divergence of P and Q scaled by m^{-1}, plus some constant. Now, the authors first show that optimizing the objective of the Entropy-SGD algorithm is equivalent to optimizing the empirical risk term + fudge term over all data-dependent priors P and the best Q for that prior. However, the PAC-Bayes bound holds only when P is NOT dependent on the data. So the authors invoke results from differential privacy to show that, as long as the prior-choosing mechanism in the optimization algorithm is differentially private with respect to the data, differentially private priors can be substituted for valid PAC-Bayes bounds, rectifying the issue. They show that when Entropy-SGD is implemented with pure Gibbs sampling steps (as in Algorithm 3), the bounds hold. The weakness that remains is that the Gibbs sampling step in Entropy-SGD (as in Algorithm 3 in the appendix) is actually approximated by samples from SGLD, which converges to this Gibbs distribution only when run for infinitely many steps. The authors leave this hole unsolved. But under the very strong sampling assumption, the bound holds. The authors do some experiments with MNIST to demonstrate that their bounds are not trivial.

Strengths: The simple connection between the PAC-Bayes bound and the Entropy-SGD objective is the first novelty. Invoking results from differential privacy to fix the issue of validity of the PAC-Bayes bound is the second novelty. Although technically the paper is not very deep, leveraging existing results (with strong assumptions) to show generalization properties of Entropy-SGD is good.

Weaknesses:
a) Obvious issue: the analysis assumes the strong Gibbs sampling step.
b) Experimental results are ok. I see that the bounds computed are non-vacuous -- but can the authors clarify what exactly they seek to justify?
c) Typos: Page 4 footnote, "the local entropy should not be <with>.." - "with" is missing. Eq. 14 typo - r(h) instead of e(h). Definition A.2 in the appendix must have S and S' in the inequality - both seem to be S.
d) Most important clarification: the way Thm 5.1, Thm 5.2, and the exact Gibbs sampling step connect with each other to produce Thm 6.1 is in Thm B.1. How is it that multiple calls on the same data sample do not degrade the loss? An explanation is needed, because the whole process of optimization in TRAIN, with many steps, is the final 'data-dependent prior choosing mechanism' that has to be shown to be differentially private. Can the authors argue why the number of iterations of this does not matter at all? If I run this long enough, and if I get several w's in the process (like step 8 repeated many times in Algorithm 3), I should have more leakage about the data sample S, intuitively, right?
e) The paper is unclear in many places. The intro could be better written to highlight the connection at the expression level between the PAC-Bayes bound and the Entropy-SGD objective, and the subsequent fix using a differentially private prior-choosing mechanism to make the connection provably correct. Why are all the algorithms on which the theorems are claimed placed in the appendix rather than in the paper?

Final decision: I waver between 6 and 7 actually.
However, I am willing to upgrade to 7 if the authors can provide sound arguments addressing my concerns above.
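For readers less familiar with the setup, the flavor of bound being discussed is roughly the following (a standard McAllester-style statement in my own notation; the exact constants and logarithmic terms in the paper may differ):

```latex
% PAC-Bayes bound: for a prior P chosen independently of the sample S of
% size m, with probability at least 1 - \delta over S, simultaneously for
% all "posterior" distributions Q on the weights,
\[
  \mathbb{E}_{w \sim Q}\big[e(w)\big] \;\le\;
  \mathbb{E}_{w \sim Q}\big[\hat{e}_S(w)\big]
  \;+\;
  \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln\frac{m}{\delta}}{2(m-1)}},
\]
% where \hat{e}_S(w) is the empirical risk on S and e(w) the expected risk.
```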
iclr_2018_SJaP_-xAb
DEEP LEARNING WITH LOGGED BANDIT FEEDBACK We propose a new output layer for deep neural networks that permits the use of logged contextual bandit feedback for training. Such contextual bandit feedback can be available in huge quantities (e.g., logs of search engines, recommender systems) at little cost, opening up a path for training deep networks on orders of magnitude more data. To this effect, we propose a counterfactual risk minimization approach for training deep networks using an equivariant empirical risk estimator with variance regularization, BanditNet, and show how the resulting objective can be decomposed in a way that allows stochastic gradient descent training. We empirically demonstrate the effectiveness of the method by showing how deep networks - ResNets in particular - can be trained for object recognition without conventionally labeled images.
This paper proposes a new output layer in neural networks, which allows them to use logged contextual bandit feedback for training. The paper is well written and well structured.

General feedback: I would say the problem addressed concerns stochastic learning in general, not just SGD for training neural nets. And it's not a "new output layer", but just a softmax output layer (Eq. 1) with an IPS+baseline training objective (Eq. 16).

Other comments:
- The baseline in REINFORCE (Williams '92), which is equivalent to the introduced Lagrange multiplier, is well known and well defined as a control variate in Monte Carlo simulation, certainly not an "ad-hoc heuristic" as claimed in the paper [see Greensmith et al. (2004). Variance Reduction for Gradient Estimates in Reinforcement Learning, JMLR 5].
- Bandit-to-supervised conversion: please add a supervised baseline system trained just on instances with top feedbacks -- this should be a much more interesting and relevant strong baseline. There are multiple indications that this bandit-to-supervised baseline is hard to outperform in a number of important applications.
- The final objective IPS^lambda is identical to IPS with a translated loss and thus re-introduces problems of IPS in exactly the same form that the article claims to address, namely:
  * the estimate is not bounded by the range of delta;
  * the importance sampling ratios can be large; samples with high such ratios lead to larger gradients, thus dominating the updates. The control variate of the SNIPS objective can be seen as defining a probability distribution over the log, thus ensuring that each sample's delta is multiplied by a value in [0,1] and not by a large importance sampling ratio;
  * IPS^lambda introduces a grid search, which takes more time, and the best value for lambda might not even be tested. How do you deal with this?
- As the authors note, IPS^lambda is very similar to an RL baseline, so results of using IPS with it should be reported as well. In more detail, note:
  1. IPS for losses < 0 and risk minimization: raise the probability of every sample in the log irrespective of the loss itself.
  2. IPS for losses > 0 and risk minimization: lower the same probability.
  3. IPS^lambda: by the translation of the loss, it divides the log into 2 groups: a group whose probabilities will be lowered and a group whose probabilities will be raised (and a third group for delta = lambda, but the objective will be agnostic to these).
  4. IPS with a baseline would do something similar, but the baseline changes over time, which means the above groups are not fixed, and it might work better. Furthermore, there is no hyperparameter/grid search required for the simple RL baseline -> results of using IPS with the RL baseline should be reported for the BanditNet rows in Table 1 and in the CIFAR-10 experiments.
- What is the feedback in the CIFAR-10 experiments? Assuming it's from [0..1], and given the tested range of lambdas, you should run into the same problems with IPS and its degenerate solutions for lambdas >= 1.0. In general, how do your methods behave for a lambda* (corresponding to S*) that makes all differences (delta_i - lambda*) positive or negative?
- The claim of Theorem 2 in Appendix B does not follow from its proof: what is proven is that the value of S(w) lies in an interval [1-e..1+e] with a certain probability for all w. It says nothing about a solution of an optimization problem of the form f(w)/S(w) or its constrained version. Actually, the proof never makes any connection to optimization.
- What Appendix C basically claims is that it's not possible to get an unbiased estimate of a gradient for a certain class of non-convex ratios with a finite-sum structure. This would contradict some previously established convergence results for this type of problem: Reddi et al. (2016), Stochastic Variance Reduction for Nonconvex Optimization, ICML; and Wang et al. (2013), Variance Reduction for Stochastic Gradient Optimization, NIPS. On the other hand, there seems to be no need to prove such a claim in the first place, since the difficulty of performing self-normalized IPS on a GPU should be evident if one remembers that the normalization should run over the whole logged dataset (while only the current mini-batch is accessible to the GPU).
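For context, the estimators under discussion have roughly the following form (my notation; the paper's exact formulation, in particular the translated loss, may differ in details):

```latex
% IPS and self-normalized IPS (SNIPS) risk estimates for a log
% D = {(x_i, y_i, \delta_i, p_i)}_{i=1}^n of contexts, logged actions,
% observed losses, and logging propensities; \pi_w is the new policy.
\[
  \hat{R}_{\mathrm{IPS}}(w) = \frac{1}{n}\sum_{i=1}^{n}
      \delta_i \,\frac{\pi_w(y_i \mid x_i)}{p_i},
  \qquad
  \hat{R}_{\mathrm{SNIPS}}(w) =
      \frac{\sum_{i=1}^{n} \delta_i \,\frac{\pi_w(y_i \mid x_i)}{p_i}}
           {\sum_{i=1}^{n} \frac{\pi_w(y_i \mid x_i)}{p_i}} .
\]
% IPS^\lambda, as I read it, is the IPS estimate with \delta_i replaced
% by the translated loss (\delta_i - \lambda).
```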
iclr_2018_rkTBjG-AZ
In deep learning, performance is strongly affected by the choice of architecture and hyperparameters. While there has been extensive work on automatic hyperparameter optimization for simple spaces, complex spaces such as the space of deep architectures remain largely unexplored. As a result, the choice of architecture is done manually by the human expert through a slow trial and error process guided mainly by intuition. In this paper we describe a framework for automatically designing and training deep models. We propose an extensible and modular language that allows the human expert to compactly represent complex search spaces over architectures and their hyperparameters. The resulting search spaces are treestructured and therefore easy to traverse. Models can be automatically compiled to computational graphs once values for all hyperparameters have been chosen. We can leverage the structure of the search space to introduce different model search algorithms, such as random search, Monte Carlo tree search (MCTS), and sequential model-based optimization (SMBO). We present experiments comparing the different algorithms on CIFAR-10 and show that MCTS and SMBO outperform random search. We also present experiments on MNIST, showing that the same search space achieves near state-of-the-art performance with a few samples. These experiments show that our framework can be used effectively for model discovery, as it is possible to describe expressive search spaces and discover competitive models without much effort from the human expert. Code for our framework and experiments has been made publicly available.
Monte-Carlo Tree Search is a reasonable and promising approach to hyperparameter optimization or algorithm configuration in search spaces that involve conditional structure.

This paper must acknowledge more explicitly that it is not the first to take a graph-search approach. The cited work related to SMAC and Hyperopt / TPE addresses this problem similarly. The technique of separating a description language from the optimization algorithm is also used in both of these projects / lines of research. The [mis-cited] paper titled “Making a science of model search …” is about using TPE to configure 1, 2, and 3 layer convnets for several datasets, including CIFAR-10. SMAC and Hyperopt have been used to search large search spaces involving pre-processing and classification algorithms (e.g. auto-sklearn, autoweka, hyperopt-sklearn). There have been near-annual workshops on AutoML and Bayesian optimization at NIPS and ICML (see e.g. automl.org).

There is a benchmark suite of hyperparameter optimization problems that would be a better way to evaluate MCTS as a hyperparameter optimization algorithm: http://www.ml4aad.org/automl/hpolib/
iclr_2018_B1J_rgWRW
UNDERSTANDING DEEP NEURAL NETWORKS WITH RECTIFIED LINEAR UNITS In this paper we investigate the family of functions representable by deep neural networks (DNN) with rectified linear units (ReLU). We give an algorithm to train a ReLU DNN with one hidden layer to global optimality with runtime polynomial in the data size albeit exponential in the input dimension. Further, we improve on the known lower bounds on size (from exponential to super exponential) for approximating a ReLU deep net function by a shallower ReLU net. Our gap theorems hold for smoothly parametrized families of "hard" functions, contrary to countable, discrete families known in the literature. An example consequence of our gap theorems is the following: for every natural number k there exists a function representable by a ReLU DNN with k^2 hidden layers and total size k^3, such that any ReLU DNN with at most k hidden layers will require at least (1/2) k^{k+1} − 1 total nodes. Finally, for the family of R^n → R DNNs with ReLU activations, we show a new lower bound on the number of affine pieces, which is larger than previous constructions in certain regimes of the network architecture and most distinctively our lower bound is demonstrated by an explicit construction of a smoothly parameterized family of functions attaining this scaling. Our construction utilizes the theory of zonotopes from polyhedral theory.
The paper presents a series of definitions and results elucidating details about the functions representable by ReLU networks, their parametrisation, and gaps between deep and shallower nets. The paper is easy to read, although it does not seem to have a main focus (exponential gaps vs. optimisation vs. universal approximation). The paper makes a nice contribution to the details of deep neural networks with ReLUs, although I find the contributed results slightly overstated. The 1d results are not difficult to derive from previous results. The advertised new results on the asymptotic behaviour assume a first layer that dominates the size of the network. The optimisation method appears close to brute force and is limited to 2 layers.

- Theorem 3.1 appears to be easily deduced from the results of Montufar, Pascanu, Cho, Bengio, 2014. For 1d inputs, each layer will multiply the number of regions at most by the number of units in the layer, leading to the condition w’ \geq w^{k/k’} (spelled out in the sketch after these comments).
- Theorem 3.2 simply gives a parametrization of the functions, removing symmetries of the units in the layers.
- In the list at the top of page 5: note that the function classes might be characterized in terms of countable properties, such as the number of linear regions as discussed in MPCB, but they still build a continuum of functions. Similarly, on page 5, ``Moreover, for fixed n,k,s, our functions are smoothly parameterized'': this should not be a surprise.
- In the last paragraph of Section 3, ``m = w^k-1'': this is a very big first layer. This also seems to subsume the first condition, s \geq w^k-1 + w(k-1), for the network discussed in Theorem 3.9.
- In the last paragraph of Section 3, ``To the best of our knowledge'': in the construction presented here, the network’s size is essentially in the layer of size m. Under such conditions, Corollary 6 of MPCB also reads as s^n. Here it is irrelevant whether one artificially increases the depth of the network by additional, very narrow, layers, which do not contribute to the asymptotic number of units.
- The function class Zonotope is a composition of two parts. It would be interesting to also consider a single construction, instead of the composition of two constructions.
- Theorem 3.9 (ii): it would be nice to have a construction where the size becomes 2m + wk when k’=k.
- Section 4, while interesting, appears to be somewhat disconnected from the rest of the paper.
- In Theorem 2.3, explain why the two-layer case is limited to n=1.
- At some point in the first 4 pages it would be good to explain what is meant by ``hard’’ functions (e.g. functions that are hard to represent, as opposed to step functions, etc.)
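To spell out the region-counting step behind the remark on Theorem 3.1 (my reconstruction of the argument in the spirit of Montufar et al., 2014, not a claim about the paper's own proof):

```latex
% For a scalar input, composing with one hidden layer of w ReLUs increases
% the number of linear regions by at most a factor on the order of w, so a
% depth-k, width-w network has at most roughly w^k regions, while a
% depth-k', width-w' network has at most roughly (w')^{k'}. Matching the
% deeper network's region count therefore requires
\[
  (w')^{k'} \;\gtrsim\; w^{k}
  \quad\Longleftrightarrow\quad
  w' \;\gtrsim\; w^{k/k'} .
\]
```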
iclr_2018_BJlrSmbAZ
Deep neural networks have led to a series of breakthroughs, dramatically improving the state-of-the-art in many domains. The techniques driving these advances, however, lack a formal method to account for model uncertainty. While the Bayesian approach to learning provides a solid theoretical framework to handle uncertainty, inference in Bayesian-inspired deep neural networks is difficult. In this paper, we provide a practical approach to Bayesian learning that relies on a regularization technique found in nearly every modern network, batch normalization. We show that training a deep network using batch normalization is equivalent to approximate inference in Bayesian models, and we demonstrate how this finding allows us to make useful estimates of the model uncertainty. Using our approach, it is possible to make meaningful uncertainty estimates using conventional architectures without modifying the network or the training procedure. Our approach is thoroughly validated in a series of empirical experiments on different tasks and using various measures, showing it to outperform baselines on a majority of datasets with strong statistical significance.
The authors show how the regularization procedure called batch normalization, currently being used by most deep learning systems, can be understood as performing approximate Bayesian inference. The authors compare this approach to Monte Carlo dropout (another regularization technique which can also be considered to perform approximate Bayesian inference). The experiments performed show that the Bayesian view of batch normalization performs similarly to MC dropout in terms of the estimates of uncertainty that it produces.

Quality: I found the quality to be low in some aspects. First, the description of the prior used by batch normalization in section 3.3 is unsatisfactory. The authors basically refer to Appendix 6.4 for the case in which the weight decay penalty is not zero. The details in that appendix are almost none; they just say "it is thus possible to derive the prior...". The results in Table 2 are a bit confusing. The authors should highlight in bold face the results of the best performing method. The authors indicate that they do not need to compare to variational methods because Gal and Ghahramani 2015 already compare to those methods. However, Gal and Ghahramani's code used Bayesian optimization methods to tune hyper-parameters, and this code contains a bug that optimizes hyper-parameters by maximizing performance on the test data. In particular, for hyperparameter selection, they average performance across (subsets of) 5 of the training sets from the 20x train/test split, and then use the tau which got the best average performance for all of the 20x train/test splits to evaluate performance: https://github.com/yaringal/DropoutUncertaintyExps/blob/master/bostonHousing/net/experiment_BO.py#L54 Therefore, the claim that "Since we have established that MCBN performs on par with MCDO, by proxy we might conclude that MCBN outperforms those VI methods as well." is not valid. At the beginning of section 4.3 the authors indicate that they follow in their experiments the setup of Gal and Ghahramani (2015). However, Gal and Ghahramani (2015) actually follow Hernández-Lobato and Adams, 2015, so the correct reference should be the latter one.

Clarity: The paper is clearly written and easy to follow and understand. I found it confusing how to use the proposed method to obtain estimates of uncertainty for a particular test data point x_star. The paragraph just above section 4 says that the authors sample a batch of training data for this, but assume that the test point x_star has to be included in this batch. How is this actually done in practice?

Originality: The proposed contribution is original. This is the first time that a Bayesian interpretation has been given to the batch normalization regularization proposal.

Significance: The paper's contributions are significant. Batch normalization is a very popular regularization technique, and showing that it can be used to obtain estimates of uncertainty is relevant and significant. Many existing deep learning systems can use this to produce estimates of uncertainty in their predictions.
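For concreteness, returning to the question raised under Clarity: the following is one plausible reading of the test-time procedure (keep batch-norm layers in training mode, resample a training mini-batch for every stochastic forward pass, and run the test points through together with that batch). All names are mine and this is a sketch of my interpretation, not the authors' code.

```python
import torch

def mcbn_predict(model, x_star, train_loader, num_samples=50):
    """Monte Carlo batch normalization (my reading): repeat a stochastic
    forward pass, each time letting the batch-norm statistics be recomputed
    from a freshly sampled training mini-batch plus the test points."""
    model.train()  # keep BatchNorm layers using batch statistics
                   # (assumes the model has no dropout, or that this is acceptable)
    preds = []
    data_iter = iter(train_loader)
    with torch.no_grad():
        for _ in range(num_samples):
            try:
                xb, _ = next(data_iter)          # assumes loader yields (inputs, targets)
            except StopIteration:
                data_iter = iter(train_loader)
                xb, _ = next(data_iter)
            joint = torch.cat([xb, x_star], dim=0)
            out = model(joint)[xb.shape[0]:]     # keep only the test-point outputs
            preds.append(out)
    preds = torch.stack(preds)                   # (num_samples, num_test, out_dim)
    return preds.mean(0), preds.var(0)           # predictive mean and variance
```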
iclr_2018_SyProzZAW
THE POWER OF DEEPER NETWORKS FOR EXPRESSING NATURAL FUNCTIONS It is well-known that neural networks are universal approximators, but that deeper networks tend in practice to be more powerful than shallower ones. We shed light on this by proving that the total number of neurons m required to approximate natural classes of multivariate polynomials of n variables grows only linearly with n for deep neural networks, but grows exponentially when merely a single hidden layer is allowed. We also provide evidence that when the number of hidden layers is increased from 1 to k, the neuron requirement grows exponentially not with n but with n^{1/k}, suggesting that the minimum number of layers required for practical expressibility grows only logarithmically with n.
The paper investigates the representation of polynomials by neural networks up to a certain degree, and implied uniform approximations. It shows exponential gaps between the width of shallow and deep networks required for approximating a given sparse polynomial. By focusing on polynomials, the paper is able to use a variety of tools (e.g. linear algebra) to investigate the representation question.

Results such as Proposition 3.3 relate the representation of a polynomial up to a certain degree to the approximation question. Here it would be good to be more specific about the domain, however, as approximating the low-order terms certainly does not guarantee a global uniform approximation.

Theorem 3.4 makes an interesting claim, that a finite network size is sufficient to achieve the best possible approximation of a polynomial (the proof building on previous results, e.g. by Lin et al., which I did not verify). The idea is to construct a superposition of Taylor approximations of the individual monomials. Here it would be good to be more specific about the domain. Also, in the discussion of Taylor series, it would be good to mention the point around which the series is developed, e.g. the origin. The paper mentions that ``the theorem is false for rectified linear units (ReLUs), which are piecewise linear and do not admit a Taylor series''. However, a ReLU can also be approximated by a smooth function and a Taylor series.

Theorem 4.1 seems to be implied by Theorem 4.2. Similarly, parts of Section 4.2 seem to follow directly from the previous discussion.

On page 1, ```existence proofs' without explicit constructions'': this is not true; numerous papers provide explicit constructions of functions that are representable by neural networks with specific types of activation functions.
iclr_2018_SJ1fQYlCZ
Curriculum learning and Self-paced learning are popular topics in machine learning that suggest putting the training samples in order by considering their difficulty levels. Studies on these topics show that starting with a small training set and adding new samples according to difficulty levels improves the learning performance. In this paper we show experimentally that we can also obtain good results by adding the samples randomly, without a meaningful order. We compared our method with classical training, Curriculum learning, Self-paced learning, and their reverse-ordered versions. Results of the statistical tests show that the proposed method is better than the classical method and similar to the others. These results point to a new training regime that removes the process of difficulty-level determination in Curriculum and Self-paced learning and is as successful as these methods.
This paper addresses an interesting problem: curriculum/self-paced versus random ordering of samples for faster learning. Specifically, the authors argue that adding samples in random order is as beneficial as adding them with some curriculum strategy, i.e. from easiest to hardest, or the reverse. The main learning strategy considered in this work is learning with growing sets, i.e. at each stage a new portion of samples is added to the currently available training set. At the last stage, all training samples are considered. The classifier is re-learned at each stage, where the weights optimized in the previous stage are given as initial weights in the next stage.

The work has several flaws.
- First of all, it is not surprising that learning with more training samples at each stage (growing sets) gets better - this is the basic principle of learning. The question is how fast the current classifier converges to the Bayes-optimal level when using the curriculum strategy versus the random strategy. The empirical evaluations do not show evidence either way on this matter. For example, it could happen that the classifier converges to the optimum on the first stage already, so there is no difference between training in random versus curriculum order with growing sets.
- Secondly, easiness/hardness of the samples is defined w.r.t. some pre-trained (external) ensemble method. It is not clear how this definition of easiness/hardness translates when training the 3-layer neural network (final classifier). For example, it could well happen that all the samples are equally easy for training the final classifier, so the curriculum order would be the same as random order. In the original work on self-paced learning, Kumar et al. (2010), easiness of the samples is re-computed at each stage of the classifier learning.
- The empirical evaluations are not clear. Just showing the wins across datasets without the actual performance is not convincing (Table 2).
- I wonder whether the section with the theoretical explanation is needed. What is the main advantage of learning with growing sets (re-training the classifier at each stage) over (traditional) learning using the whole training dataset (the last stage, in this work)?
iclr_2018_SJ1Xmf-Rb
FEARNET: BRAIN-INSPIRED MODEL FOR INCREMENTAL LEARNING Incremental class learning involves sequentially learning classes in bursts of examples from the same class. This violates the assumptions that underlie methods for training standard deep neural networks, and will cause them to suffer from catastrophic forgetting. Arguably, the best method for incremental class learning is iCaRL, but it requires storing training examples for each class, making it challenging to scale. Here, we propose FearNet for incremental class learning. FearNet is a generative model that does not store previous examples, making it memory efficient. FearNet uses a brain-inspired dual-memory system in which new memories are consolidated from a network for recent memories inspired by the mammalian hippocampal complex to a network for long-term storage inspired by medial prefrontal cortex. Memory consolidation is inspired by mechanisms that occur during sleep. FearNet also uses a module inspired by the basolateral amygdala for determining which memory system to use for recall. FearNet achieves state-of-the-art performance at incremental class learning on image (CIFAR-100, CUB-200) and audio classification (AudioSet) benchmarks.
Quality: The paper presents a novel solution to an incremental classification problem based on a dual memory system. The proposed solution is inspired by the memory storage mechanism in the brain.

Clarity: The problem has been clearly described and the proposed solution is described in detail. The results of the numerical experiments and the real data analysis are satisfactory and clearly show the superior performance of the method compared to the existing ones.

Originality: The solution proposed is a novel one based on a dual memory system inspired by the memory storage mechanism in the brain. The memory consolidation is inspired by the mechanisms that occur during sleep. The numerical experiments showing FearNet's performance as a function of sleep frequency also validate the comparison with the brain's memory system.

Significance: The work discusses the significant problem of incremental classification. Many off-the-shelf deep neural net methods also require storage of previous training samples, and that slows down their application to larger datasets. Further, traditional deep neural nets also suffer from catastrophic forgetting. Hence, the proposed work provides a novel and scalable solution to the existing problem.

Pros:
(a) a scalable solution to the incremental classification problem using a brain-inspired dual memory system
(b) mitigates the catastrophic forgetting problem using memory consolidation by pseudorehearsal
(c) introduction of a subsystem that determines which memory system to use for classification

Cons:
(a) How would FearNet perform if imbalanced classes are seen in more than one study session?
(b) Storage of class statistics during pseudorehearsal could be computationally expensive. How to cope with that?
(c) How would FearNet handle multiple data sources?
iclr_2018_BJ_QxP1AZ
Convolutional neural networks (CNNs) have been generally acknowledged as one of the driving forces for the advancement of computer vision. Despite their promising performances on many tasks, CNNs still face major obstacles on the road to achieving ideal machine intelligence. One is that CNNs are complex and hard to interpret. Another is that standard CNNs require large amounts of annotated data, which is sometimes very hard to obtain, and it is desirable to be able to learn them from few examples. In this work, we address these limitations of CNNs by developing novel, simple, and interpretable models for few-shot learning. Our models are based on the idea of encoding objects in terms of visual concepts, which are interpretable visual cues represented by the feature vectors within CNNs. We first adapt the learning of visual concepts to the few-shot setting, and then uncover two key properties of feature encoding using visual concepts, which we call category sensitivity and spatial pattern. Motivated by these properties, we present two intuitive models for the problem of few-shot learning. Experiments show that our models achieve competitive performances, while being much more flexible and interpretable than alternative state-of-the-art few-shot learning methods. We conclude that using visual concepts helps expose the natural capability of CNNs for few-shot learning.
My main concern with this paper is that the description of the Visual Concepts is completely unclear to me. At some point I thought I did understand it, but then the next equation didn't make sense anymore... If I understand correctly, f_p is a representation of *all images* at a specific layer *k* at/around pixel "p" (according to the last line of page 3). That would make sense, given that the vector f_p would then have one dimension per image: a scalar (activation value) for that image, in layer k, around pixel p. Then f_v is one of the centroids (named VCs). However, this doesn't seem to be the case, given that it is impossible to construct VC activations for specific images from this definition. So, it should be something else, but it does not become clear what this f_p is. This is crucial in order to follow / judge the rest of the paper. Still, I give it a try.

Section 4.1 is the second most important section of the paper, where properties of VCs are discussed. It has a few shortcomings. First, it is unclear why coverage should be >= 0.8 and firerate ~ 1; according to the motivation, firerate should equal coverage, that is, each pixel f_p is assigned to a single VC centroid. Second, "VCs tend to occur for a specific class" seems a rather bold statement based on a 6-class, 3-VC experiment, where the class sensitivity is on the order of 40-77%. Also, the second experiment, which shows the spatial clustering for the "car wheel" VC, is unclear: how is the name "car wheel" assigned to the VC? It must have been named after the EM process, given that EM is unsupervised. Finally, for the cost-effectiveness training (3c): how come the same "car wheel" (as in 3b) is discovered by the EM clustering? Is that a coincidence? Or is there some form of supervision involved?

Minor remarks
- Table 1: the reported results of the Matching Network are different from the results in the paper of Vinyals (2016).
- It is unclear what the influence of the smoothing is, and how the smoothing parameter is estimated / set.
- The VCs are introduced for few-shot classification; it is unclear how this is different from "previous few-shot methods" (Sect. 5).
- 36x36 patches have a plausible size within an 84x84 image; this is rather large, do semantic parts really cover 20% of the image?
- How are the networks trained, with what objective, how validated, and on which training images? What is the influence of the layer on the performance?
- What is the influence of the clustering method on the VCs, e.g. k-means, Gaussian, von Mises (the last one is proposed)?

On a personal note, I have difficulties with part of the writing. For example, the introduction is written rather "arrogantly" (not completely the right word, sorry for that), with sentences like "we have only limited insights into why CNNs are effective", which seems overkill for the main research body. The Visual Concepts (VCs) used were already introduced by other works (Wang '15), and are not a novelty. Also, the authors refer to another paper (about using VCs for detection) which is also under submission (somewhere). Finally, the introductory paragraph of Section 5 is rather bold: "resembles the learning process of human beings"? I am not so sure that is true, and it is not supported by a reference (or an experiment).

In conclusion: This paper presents a method for creating features from a (pre-trained) ConvNet. It clusters features from a specific pooling layer, and then creates a binary assignment between per-image extracted feature vectors and the cluster centroids.
These are used in a 1-NN classifier and a (smoothed) Naive Bayes classifier. The results are promising, yet lack exploration of the model, at least to draw conclusions like "we address the challenge of understanding the internal visual cues of CNNs". I believe this paper needs to focus on the workings of the VCs for the few-shot experiments, showing the influence of some of the choices (layer, network layout, smoothing, clustering, etc.). Moreover, the introduction should be rewritten, and the background section on VCs (Sect. 3) should be clarified. Therefore, I rate the current manuscript as a reject.

After rebuttal: The writing of the paper greatly improved, but it is still missing insights (see comments below). Therefore I have upgraded my rating and, due to better understanding now, also my confidence.
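For what it is worth, my reading of the standard visual-concept construction (following Wang et al., 2015, on which the submission builds) is that f_p is the channel vector at one spatial position of one image, and the VCs are centroids over the pooled set of such vectors across images and positions. A minimal sketch of that interpretation follows; the function names are mine and k-means stands in for the EM / von Mises clustering discussed above, so this may well differ from what the paper actually does.

```python
import numpy as np
from sklearn.cluster import KMeans

def extract_position_features(feature_maps):
    """feature_maps: array of shape (num_images, height, width, channels)
    from one conv/pooling layer. Returns one L2-normalized channel vector
    f_p per image and spatial position p."""
    n, h, w, c = feature_maps.shape
    feats = feature_maps.reshape(n * h * w, c)
    norms = np.linalg.norm(feats, axis=1, keepdims=True) + 1e-8
    return feats / norms

def learn_visual_concepts(feature_maps, num_vcs=200, seed=0):
    """Cluster the per-position feature vectors; the centroids are the VCs."""
    feats = extract_position_features(feature_maps)
    km = KMeans(n_clusters=num_vcs, random_state=seed, n_init=10).fit(feats)
    return km.cluster_centers_

def vc_encoding(feature_maps, vc_centers):
    """Binary encoding: index of the nearest VC at each spatial position."""
    feats = extract_position_features(feature_maps)
    dists = np.linalg.norm(feats[:, None, :] - vc_centers[None, :, :], axis=2)
    return np.argmin(dists, axis=1)
```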
iclr_2018_HkGcX--0-
Generative modeling of high-dimensional data is a key problem in machine learning. Successful approaches include latent variable models and autoregressive models. The complementary strengths of these approaches, to model global and local image statistics respectively, suggest hybrid models combining the strengths of both models. Our contribution is to train such hybrid models using an auxiliary loss function that controls which information is captured by the latent variables and what is left to the autoregressive decoder. In contrast, prior work on such hybrid models needed to limit the capacity of the autoregressive decoder to prevent degenerate models that ignore the latent variables and only rely on autoregressive modeling. Our approach results in models with meaningful latent variable representations, and which rely on powerful autoregressive decoders to model image details. Our model generates qualitatively convincing samples, and yields stateof-the-art quantitative results.
Summary: This paper attempts to solve the problem of meaningfully combining variational autoencoders (VAEs) and PixelCNNs. It proposes to do this by simultaneously optimizing a VAE with a PixelCNN++ decoder and a VAE with a factorial decoder. The model is evaluated in terms of log-likelihood (with no improvement over a PixelCNN++) and the visual appearance of samples and reconstructions.

Review: Combining density networks (like VAEs) and autoregressive models is an unsolved problem and potentially very useful. To me, the most interesting bit of information in this paper was the realization that you can weight the reconstruction and KL terms of a VAE and interpret it as variational inference in a generative model with multiple copies of pixels (below Equation 7). Unfortunately the authors were unable to make any good use of this insight, and I will explain below why I don’t see any evidence of an improved generative model in this paper.

As the paper is written now, it is not clear what the goal of the authors is. Is it density estimation? Then the addition of the VAE had no measurable effect on the PixelCNN++’s performance, i.e., it seems like a bad idea due to the added complexity and loss of tractability. Is it representation learning? Then the paper is missing experiments to support the idea that the learned representations are in any way an improvement. Is it image synthesis (not a real application by itself)? Then the paper should have demonstrated the usefulness of the model on a real task and probably involved human subjects in a quantitative evaluation.

Much of the authors’ analysis is based on a qualitative evaluation of samples. However, samples can be very misleading. A lookup table storing the training data generates samples containing objects and perfect details, but obviously has not learned anything about either objects or the low-level statistics of natural images. In contrast to the authors, I fail to see a meaningful difference between the groups of samples in Figure 1. The VAE samples in Figure 3b) look quite smooth. Was independent Gaussian noise added to the VAE samples, or are those (as is sometimes done) sampled means? If the former, what was sigma and how was it chosen?

On page 7, the authors conclude that “the pixelCNN clearly takes into account the output of the VAE decoder” based on the samples. Being a mixture model, a PixelCNN++ could easily represent the following mixture:

p(x | z) = 0.01 \prod_i p(x_i | x_{<i}) + 0.99 \prod_i p(x_i | z)

The first term is just like a regular PixelCNN++, ignoring the latent variables. The second term is just like a variational autoencoder with a factorial decoder. The samples in this case would be dominated by the VAE, which depends on the latent state. The log-likelihood would be dominated by the first term and would be minimally affected (see Theis et al., 2016). Note that I am not saying that this is exactly what the model has learned. I am merely providing a possible counterexample to the notion that the PixelCNN++ has learned to use the latent representation in a meaningful way. What happens if the KL term is simply downweighted but the factorial decoder is not included? This seems like it would be a useful control to include.

The paper is well written and clear.
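To spell out the "multiple copies of pixels" remark (my own notation and paraphrase; the paper's Equation 7 may differ in details):

```latex
% A reconstruction-weighted ELBO with integer weight \beta,
\[
  \mathcal{L}_\beta(x) \;=\;
  \beta\, \mathbb{E}_{q(z \mid x)}\!\big[\log p(x \mid z)\big]
  \;-\; \mathrm{KL}\big(q(z \mid x)\,\|\,p(z)\big),
\]
% coincides with the standard ELBO of the model
% p(z)\,\prod_{j=1}^{\beta} p(x^{(j)} \mid z), in which \beta identical
% copies of the pixels are observed, x^{(1)} = \dots = x^{(\beta)} = x.
```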
iclr_2018_r1Ddp1-Rb
mixup: BEYOND EMPIRICAL RISK MINIMIZATION Large deep neural networks are powerful, but exhibit undesirable behaviors such as memorization and sensitivity to adversarial examples. In this work, we propose mixup, a simple learning principle to alleviate these issues. In essence, mixup trains a neural network on convex combinations of pairs of examples and their labels. By doing so, mixup regularizes the neural network to favor simple linear behavior in-between training examples. Our experiments on the ImageNet-2012, CIFAR-10, CIFAR-100, Google commands and UCI datasets show that mixup improves the generalization of state-of-the-art neural network architectures. We also find that mixup reduces the memorization of corrupt labels, increases the robustness to adversarial examples, and stabilizes the training of generative adversarial networks.
I enjoyed reading this well-written and easy-to-follow paper. The paper builds on the rather old idea of minimizing the empirical vicinal risk (Chapelle et al., 2000) instead of the empirical risk. The authors' contribution is to provide a particular instance of vicinity distribution, which amounts to linear interpolation between samples. This idea of linear interpolation on the training sample to generate additional (adversarial, in the words of the authors) data is definitely appealing to prevent overfitting and improve generalization performance at a mild computational cost (note that this comment does not just apply to deep learning). This notion is definitely of interest to machine learning, and to the ICLR community in particular. I have several comments and remarks on the concept of mixup, listed below in no particular order. My overall opinion on the paper is positive and I stand for acceptance, provided the authors answer the points below. I would especially be interested in discussing those with the authors.

1 - While the data augmentation literature is well acknowledged in the paper, I would also like to see a comment on domain adaptation, which is a very closely related topic and of particular interest to the ICLR community.

2 - Paragraph after Eq. (1), starting with "Learning" and ending with "(Szegedy et al., 2014)": I am not so familiar with the term memorization; is this just a fancy way of talking about overfitting? If so, you might want to rephrase this paragraph with terms more commonly used in the machine learning community. When you write "one trivial way to minimize [the empirical risk] is to memorize the training data", do you mean output a predictor which only delivers predictions on $X_i$, equal to $Y_i$? If so, this is again not specific to deep learning, and I feel this should be discussed a bit more.

3 - I have not found in the paper a clear heuristic for how pairs of training samples should be picked to create interpolations. Picking at random is the simplest; however, I feel that a proximity measure on the space $\mathcal{X}$ on which samples live would come in handy. For example, sampling with a probability decreasing with the Euclidean distance seems a natural idea. In any case, I strongly feel this discussion is missing in the paper.

4 - On a related note, I would like to see a discussion of how many "adversarial" examples should be used. Since the computational overhead of computing one new sample is reasonable (sampling from a Beta distribution + one addition), I wonder why $m$ is not taken very large, yielding more accurate estimates of the empirical risk. A related question: under what conditions does the vicinal risk converge (in expectation, for example) to the empirical risk? I think some comments would be nice. (See the sketch after these points for the sample-generation step.)

5 - I am intrigued by the last paragraph of Section 5. What exactly do the authors have in mind when they suggest that mixup could be generalized to regression problems? As far as I understood the paper, since $\tilde{y}$ is defined as a linear interpolation between $y_i$ and $y_j$, this formulation only works for continuous $y$s, as in regression. This formulation is not straightforwardly transposable to classification, for example. I am therefore quite confused by the fact that the authors present experiments on classification tasks with a method that is written for regression.

6 - Writing linear interpolations to generate new data points implicitly makes the assumption that the input and output spaces ($\mathcal{X}$ and $\mathcal{Y}$) are convex.
I have no clear intuition whether this is a limitation of the authors' proposed method, but I strongly feel this should be carefully addressed by a comment in Section 2.
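To make the computational-cost remark in point 4 concrete, here is a minimal sketch of the mixup data generation as I understand it from the paper's description (one-hot labels for classification, a single interpolation coefficient per batch; the names and the per-batch choice are my own assumptions):

```python
import numpy as np

def mixup_batch(x, y_onehot, alpha=0.2, rng=None):
    """Generate a mixup batch: convex combinations of randomly paired
    examples and of their (one-hot) labels, with weight ~ Beta(alpha, alpha)."""
    rng = np.random.default_rng() if rng is None else rng
    lam = rng.beta(alpha, alpha)           # interpolation coefficient
    perm = rng.permutation(x.shape[0])     # random pairing within the batch
    x_mixed = lam * x + (1.0 - lam) * x[perm]
    y_mixed = lam * y_onehot + (1.0 - lam) * y_onehot[perm]
    return x_mixed, y_mixed
```

Drawing one coefficient per pair instead of per batch is an equally plausible variant; which is used, and how pairs are picked (point 3), is exactly the kind of detail I would like the paper to state.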
iclr_2018_S14EogZAZ
Understanding physical phenomena is a key component of human intelligence and enables physical interaction with previously unseen environments. In this paper, we study how an artificial agent can autonomously acquire this intuition through interaction with the environment. We created a synthetic block stacking environment with physics simulation in which the agent can learn a policy end-to-end through trial and error. Thereby, we bypass the need to explicitly model physical knowledge within the policy. We are specifically interested in tasks that require the agent to reach a given goal state that may be different for every new trial. To this end, we propose a deep reinforcement learning framework that learns policies for stacking tasks which are parametrized by a target structure, departing from conventional approaches based on simulation and planning. We validated the model on a toy example navigating in a grid world with different target positions and in a block stacking task with different target structures of the final tower. In contrast to prior work, our policies show better generalization across different goals.
Summary: This paper proposes to use deep Q-learning to learn how to reconstruct a given tower of blocks, where the DQN is also parameterized by the desired goal state in addition to the current observed state.

Pros:
- Impressive results on a difficult block-stacking task.

Cons:
- The idea of parameterizing an RL algorithm by goals is not particularly novel.

Quality and Clarity: The paper is extremely well-written, easy to follow, and largely technically correct, though I am somewhat concerned about how the results were obtained, as it does not seem like the vanilla DQN agent could do so well, even on the 2-block scenes. Even just including stable scenes, I estimated based on Figure 5 that there must be about 70 different configurations that are stable (and this is likely an underestimate). So, if each of these scenes occurs equally often and the vanilla DQN agent does not receive any information about the target goal and just acts based on an "average" policy, I would expect it to only achieve success about 1/70th of the time. Am I missing something here?

Another thing that was unclear to me is how the rotation of the blocks is chosen: is the agent given the next block with the correct rotation, or can it also choose to rotate the block? In the text it is implied that the only actions are {left, right, down}, which seems to simplify the task immensely. It would be interesting to include results where the agent additionally has to choose from actions of {rotate left by 90 degrees, rotate right by 90 degrees}. Also: are the scenes used during testing separate from those used during training? If not, it's not obvious that the agent isn't just learning to memorize the solution (which somewhat defeats the idea behind parameterizing the Q-network with new goals every time).

Originality and Significance: The block-stacking task is very cool and is more complex than many other physics-based RL tasks in the literature, which often involve just stacking square blocks in a single tower. I think it is a useful contribution to introduce this task and the GDQN agent as a baseline. However, the notion of parameterizing the policy by the goal state is not particularly novel. While it is true that many RL papers do train to optimize just a single reward function for a single goal, it is also very straightforward to modify the state space to include a goal, and indeed [1-4] are just a few examples of recent papers that have done this. In general, any time there is a procedurally generated environment (e.g. Sokoban, as in [5]) the goal necessarily is included as part of the state space---so the idea of GDQN isn't really that new.

[1] Oh, J., Singh, S., Lee, H., & Kohli, P. (2017). Zero-Shot Task Generalization with Multi-Task Deep Reinforcement Learning. arXiv preprint arXiv:1706.05064.
[2] Dosovitskiy, A., & Koltun, V. (2017). Learning to act by predicting the future. Proceedings of the 5th International Conference on Learning Representations (ICLR 2017).
[3] Hamrick, J. B., Ballard, A. J., Pascanu, R., Vinyals, O., Heess, N., & Battaglia, P. W. (2017). Metacontrol for adaptive imagination-based optimization. Proceedings of the 5th International Conference on Learning Representations (ICLR 2017).
[4] Pascanu, R., Li, Y., Vinyals, O., Heess, N., Buesing, L., Racanière, S., … Battaglia, P. (2017). Learning model-based planning from scratch. arXiv preprint arXiv:1707.06170. Retrieved from https://arxiv.org/abs/1707.06170
[5] Weber, T., Racanière, S., Reichert, D. P., Buesing, L., Guez, A., Rezende, D. J., … Wierstra, D. (2017). Imagination-Augmented Agents for Deep Reinforcement Learning. arXiv preprint arXiv:1707.06203. Retrieved from http://arxiv.org/abs/1707.06203
iclr_2018_SkYMnLxRW
State-of-the-art results on neural machine translation often use attentional sequence-to-sequence models with some form of convolution or recursion. Vaswani et al. (2017) propose a new architecture that avoids recurrence and convolution completely. Instead, it uses only self-attention and feed-forward layers. While the proposed architecture achieves state-of-the-art results on several machine translation tasks, it requires a large number of parameters and training iterations to converge. We propose Weighted Transformer, a Transformer with modified attention layers, that not only outperforms the baseline network in BLEU score but also converges 15 − 40% faster. Specifically, we replace the multi-head attention by multiple self-attention branches that the model learns to combine during the training process. Our model improves the state-of-the-art performance by 0.5 BLEU points on the WMT 2014 English-to-German translation task and by 0.4 on the English-to-French translation task.
The paper presents a small extension to the Neural Transformer model of Vaswani et al. (2017): the multi-head attention computation (eq. 2, 3),

    head_i = Attention_i(Q, K, V),    MultiHead = Concat_i(head_i) W = \sum_i head_i W_i,

is replaced with the so-called BranchedAttention (eq. 5, 6, 7, 4):

    head_i = Attention_i(Q, K, V)    (same as in the base model),
    BranchedAttention = \sum_i \alpha_i \max(0, head_i W_i \kappa_i W^1 + b^1) W^2 + b^2.

The main difference is that the result of each attention head is post-processed with a 2-layer ReLU network before being summed into the aggregated attention vector. My main problem with the paper is understanding what really is implemented: the paper states that with \alpha_i = 1 and \kappa_i = 1 the two attention mechanisms are equivalent. The equations, however, tell a different story: the original MultiHead attention quickly aggregates all attention heads, while the proposed BranchedAttention adds another processing step, effectively adding depth to the model. Since BranchedAttention is the key novelty of the paper, I am confused by this contradiction and treat it as a fatal flaw of the paper (I am willing to revise my score if the authors explain the equations). Either the proposed attention adds a small number of parameters (the alphas and kappas) that can be absorbed by the other weights of the network, and the added alphas and kappas are simply easier/faster to optimize, as the authors state in the text; or BranchedAttention works as shown in the equations and effectively adds depth to the network by processing each attention head's result with a small MLP before combining the heads. This has to be clarified before the paper is published. The experiments show that the proposed change speeds up convergence and improves the results by about 1 BLEU point. However, this requires a different learning rate schedule for the introduced parameters and some non-standard tricks, such as freezing the alphas and kappas towards the end of training. I also have questions about the presented results: 1) The numbers for the original Transformer match the ones in Vaswani et al. (2017); am I correct to assume that the authors did not rerun the tensor2tensor code and simply copied them from the paper? 2) Is all of the experimental setup the same as in Vaswani et al. (2017)? Are the results obtained using their tensor2tensor implementation, or are some hyperparameters different?
Detailed review:
Quality: The equations and text in the paper contradict each other.
Clarity: The language is clear, but the main contribution could be better explained.
Originality: The proposed change is a small extension to the Neural Transformer model.
Significance: Rather small; the proposed addition adds little modeling power to the network and its advantage may vanish with more data or a different learning rate schedule.
Pros and cons:
+ the proposed approach is a simple way to improve the performance of multi-head attentional models.
- it is not clear from the paper how the proposed extension works: does it regularize the model or does it increase its capacity?
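To make the difference between the two aggregation schemes concrete, here is a minimal numpy sketch of how I read the equations above; the tensor shapes, the per-branch ReLU MLP, and all variable names are my own assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_head(Q, K, V):
    # standard scaled dot-product attention for one head
    d = Q.shape[-1]
    return softmax(Q @ K.T / np.sqrt(d)) @ V

def multi_head(heads, W_o):
    # baseline: concatenate the heads, then apply one output projection;
    # this equals sum_i head_i @ W_o_i with W_o split into per-head blocks
    return np.concatenate(heads, axis=-1) @ W_o

def branched(heads, W_o_parts, W1, b1, W2, b2, alpha, kappa):
    # as written in the review's equation: each head is projected, scaled by
    # kappa_i, passed through a 2-layer ReLU network, and the branch outputs
    # are mixed with weights alpha_i -- an extra per-branch MLP that the
    # baseline aggregation does not have
    out = 0.0
    for i, h in enumerate(heads):
        z = np.maximum(0.0, kappa[i] * (h @ W_o_parts[i]) @ W1 + b1)
        out = out + alpha[i] * (z @ W2 + b2)
    return out
```

With alpha = kappa = 1, `multi_head` and `branched` are clearly not the same function as long as the W1/W2 MLP is applied inside each branch, which is the contradiction the review asks the authors to resolve.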
iclr_2018_SJa9iHgAZ
Published as a conference paper at ICLR 2018 RESIDUAL CONNECTIONS ENCOURAGE ITERATIVE INFERENCE Residual networks (Resnets) have become a prominent architecture in deep learning. However, a comprehensive understanding of Resnets is still a topic of ongoing research. A recent view argues that Resnets perform iterative refinement of features. We attempt to further expose properties of this aspect. To this end, we study Resnets both analytically and empirically. We formalize the notion of iterative refinement in Resnets by showing that residual connections naturally encourage features of residual blocks to move along the negative gradient of loss as we go from one block to the next. In addition, our empirical analysis suggests that Resnets are able to perform both representation learning and iterative refinement. In general, a Resnet block tends to concentrate representation learning behavior in the first few layers while higher layers perform iterative refinement of features. Finally, we observe that sharing residual layers naively leads to representation explosion and, counterintuitively, overfitting, and we show that simple existing strategies can help alleviate this problem.
This paper shows that residual networks can be viewed as doing a sort of iterative inference, where each layer is trained to use its “nonlinear part” to push its values in the negative direction of the loss gradient. The authors demonstrate this using a Taylor expansion of a standard residual block first, then follow up with several experiments that corroborate this interpretation of iterative inference. Overall the strength of this paper is that the main insight is quite interesting — though many people have informally thought of residual networks as having this interpretation — this paper is the first one to my knowledge to explain the intuition in a more precise way. Some weaknesses of the paper on the other hand — some of the parts of the paper (e.g. on weight sharing) are only somewhat related to the main topic of the paper. In fact, the authors moved the connection to SGD to the appendix, which I thought would be *more* related. Additionally, parts of the paper are not as clearly written as they could be and lack rigor. This includes the mathematical derivation of the main insight — some of the steps should be spelled out more explicitly. The explanation following is also handwavey despite claims to being formal. Some other lower level thoughts: * Regarding weight sharing for residual layers, I don’t understand why we can draw the conclusion that the initial gradient explosion is responsible for the lower generalization capability of the model with shared weights. Are there other papers in literature that have shown this connection? * The name “cosine loss” suggests that this function is actually being minimized by a training procedure, but it is just a value that is being plotted… perhaps just call it the cosine? * I recommend that the authors also check out Figurnov et al CVPR 2017 ("Spatially Adaptive Computation Time for Residual Networks") which proposes an “adaptive” version of ResNet based on the intuition of adaptive inference. * The plots in the later parts of the paper are quite small and hard to read. They are also spaced together too tightly (horizontally), making it difficult to immediately see what each plot is supposed to represent via the y-axis label. * Finally, the citations need to be fixed (use \citep{} instead of \cite{})
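For concreteness, the quantity I take the "cosine loss" to be is simply the cosine between a block's residual-branch output and the negative loss gradient at the block input; a minimal sketch of this diagnostic follows (my reading, with placeholder arrays, not the authors' code).

```python
import numpy as np

def cosine(u, v, eps=1e-8):
    # cosine similarity between two arrays, flattened to vectors
    return float(u.ravel() @ v.ravel() /
                 (np.linalg.norm(u) * np.linalg.norm(v) + eps))

def cosine_diagnostic(F_h, grad_L_h):
    # F_h      : output of the block's residual (non-identity) branch, so the
    #            next activation is h + F_h
    # grad_L_h : gradient of the loss with respect to the block input h
    # The paper's claim, as I read it, is that this value tends to be positive,
    # i.e. F_h points roughly along -grad_L_h ("iterative inference").
    return cosine(F_h, -grad_L_h)
```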
iclr_2018_rJ33wwxRb
Published as a conference paper at ICLR 2018 SGD LEARNS OVER-PARAMETERIZED NETWORKS THAT PROVABLY GENERALIZE ON LINEARLY SEPARABLE DATA Neural networks exhibit good generalization behavior in the over-parameterized regime, where the number of network parameters exceeds the number of observations. Nonetheless, current generalization bounds for neural networks fail to explain this phenomenon. In an attempt to bridge this gap, we study the problem of learning a two-layer over-parameterized neural network, when the data is generated by a linearly separable function. In the case where the network has Leaky ReLU activations and only the first layer is trained, we provide both optimization and generalization guarantees for over-parameterized networks. Specifically, we prove convergence rates of SGD to a global minimum, and provide generalization guarantees for this global minimum that are independent of the network size. Therefore, our result clearly shows that the use of SGD for optimization both finds a global minimum, and avoids overfitting despite the high capacity of the model. This is the first theoretical demonstration that SGD can avoid overfitting, when learning over-specified neural network classifiers.
This paper shows that on linearly separable data, SGD on an overparametrized network (one hidden layer, with leaky ReLU activations) can still learn a classifier that provably generalizes. The assumptions on the data and the structure of the network are a bit strong, but this is the first result that achieves a number of desirable properties:
1. Works for overparametrized networks.
2. Finds a globally optimal solution for a non-convex network.
3. Has generalization guarantees (and generalization is related to the SGD algorithm).
4. The number of samples need not depend on the number of neurons.
There have been several papers achieving 1 and 2 (with much weaker assumptions), but they do not have 3 and 4. The proof of the optimization part is very similar to the proof of the perceptron algorithm, and really relies on linear separability. The proof of generalization is based on a compression argument, where if an algorithm does not take many nonzero steps, then it must have good generalization. Ideally, one would also want to see a result where overparametrization actually helps (in the main result the whole data can be learned by a linear classifier). This is somewhat achieved when the activation is replaced with the standard ReLU, where the paper showed that with a small number of hidden units the algorithm is likely to get stuck at a local minimum, but with enough hidden units the algorithm is likely to converge (but even in this case, the data is still linearly separable and can be learned just by a perceptron). The main concern about the paper is the possibility of generalizing the result. The algorithm part seems to heavily rely on the linear separability assumption. The generalization part relies on not making many non-zero updates, which is not really true in realistic settings (where the data is accessed in multiple passes) [After author response: Yes, in the linearly separable case with hinge loss it is quite possible that the number of updates is sublinear. However, what I meant here is that with more complicated data and different loss functions it is hard to believe that this can still hold.]. The related work section is also a bit unfair to some of the other generalization results (e.g. Bartlett et al., Neyshabur et al.): those results work in more general network settings, and it's not completely clear that they cannot be related to the algorithm, because they rely on certain solution-specific quantities (such as spectral/Frobenius norms of the weight matrices) and it could be possible that SGD tends to find a solution with small norm (which can be proved in the linear setting and might also be provable for the setting of this paper) [This is addressed in the author response]. Overall, even though the assumptions might be a bit strong, I think this is an interesting result working towards a good direction and should be accepted.
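For readers less familiar with the setting, my shorthand for the model and loss being analysed is sketched below; this is my notation, not the authors', and the paper should be consulted for the precise statement.

```latex
% Two-layer network; only the first-layer weights W are trained, the output
% weights u are fixed.  sigma is the Leaky ReLU, labels y are in {-1, +1},
% and the data are assumed to be linearly separable.
\[
  f_W(x) = \sum_{i=1}^{k} u_i\, \sigma\!\left(w_i^{\top} x\right),
  \qquad
  \sigma(z) = \max(z, \alpha z), \quad 0 < \alpha < 1,
\]
\[
  \ell\bigl(W; (x, y)\bigr) = \max\bigl(0,\, 1 - y\, f_W(x)\bigr).
\]
% SGD on the hinge loss takes a nonzero step only when the margin is violated;
% bounding the number of such updates (a perceptron-style argument) is what
% drives both the convergence rate and, via compression, the generalization
% bound discussed above.
```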
iclr_2018_HklZOfW0W
In this work we propose a novel approach for learning a graph representation of the data using gradients obtained via backpropagation. Next we build a neural network architecture compatible with our optimization approach and motivated by graph filtering in the vertex domain. We demonstrate that the learned graph has a richer structure than the often-used nearest-neighbor graphs constructed based on feature similarity. Our experiments demonstrate that we can improve prediction quality for several convolution-on-graphs architectures, while others appear to be insensitive to the input graph.
Learning the adjacency matrix of a sparsely connected undirected graph with nonnegative edge weights is the goal of this paper. A projected sub-gradient descent algorithm is used. The UPS optimizer by itself is not new. The Graph Polynomial Signal (GPS) neural network is proposed to address two shortcomings of GSP using a linear polynomial graph filter. First, a nonlinear function sigma in (8) is used, and second, weights are shared among the neighbors of every data point. There are some concerns about this network that need to be clarified:
1. sigma is never clarified in the main text or in the experiments.
2. the shared weights should depend on the ordering of the neighbors, instead of the set of neighbors without ordering, in which case the sharing looks random.
3. the alternative explanation of the weights as a rescaling of the matrix A needs to be further clarified. The authors mentioned that the magnitude of |A| from the L1 norm might be detrimental for the prediction. What is the disagreement between the L1 penalty and prediction quality? Why not apply these weights to the L1 norm, as a weighted L1 norm, to control the scaling of A?
4. The authors stated that the last step is to build a mapping from the GPS features into the response Y. They mentioned that a linear fully connected layer or a more complex neural network can be built on top of the GPS features. However, no detailed information is given in the paper. In the experiments, the authors only stated that “we fit the GPS architecture using UPS optimizer for varying degree of the neighborhood of the graph”, and then the graph is used as the input to train existing models. Which architecture is used for building the mapping?
In the experimental results, detailed definitions or explanations of the compared methods and the different settings should be provided. For example, what are GPS 8, GCN_2 Eq. 9 in Table 1, GCN_3 9, and GPS_1, GPS_2, GPS_3, and so on? More explanation of Figure 2 and the visualization method would be greatly helpful to understand the advantages of the proposed algorithm.
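To illustrate the weighted-L1 suggestion in point 3, one projected proximal-gradient step on the adjacency matrix could look like the sketch below. This is my illustration of the suggestion, not the paper's UPS optimizer; the symmetrization, projection, and variable names are my own assumptions.

```python
import numpy as np

def weighted_l1_prox_step(A, grad, lr, lam, weights):
    """One projected proximal-gradient step for a sparse, nonnegative,
    symmetric adjacency matrix A.

    A       : (n, n) current adjacency estimate
    grad    : (n, n) gradient of the prediction loss w.r.t. A
    lam     : base L1 strength
    weights : (n, n) nonnegative per-edge rescaling of the L1 penalty
              (the "weighted L1" suggested above)
    """
    A = A - lr * grad                                                  # gradient step on the loss
    A = np.sign(A) * np.maximum(np.abs(A) - lr * lam * weights, 0.0)   # weighted soft-threshold
    A = np.maximum(A, 0.0)                                             # project onto nonnegative edge weights
    A = 0.5 * (A + A.T)                                                # keep the graph undirected
    np.fill_diagonal(A, 0.0)                                           # no self-loops
    return A
```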
iclr_2018_BkfEzz-0-
Existing multi-agent reinforcement learning (MARL) communication methods have relied on a trusted third party (TTP) to distribute reward to agents, leaving them inapplicable in peer-to-peer environments. This paper proposes reward distribution using Neuron as an Agent (NaaA) in MARL without a TTP with two key ideas: (i) inter-agent reward distribution and (ii) auction theory. Auction theory is introduced because inter-agent reward distribution is insufficient for optimization. Agents in NaaA maximize their profits (the difference between reward and cost) and, as a theoretical result, the auction mechanism is shown to have agents autonomously evaluate counterfactual returns as the values of other agents. NaaA enables representation trades in peer-to-peer environments, ultimately regarding units in neural networks as agents. Finally, numerical experiments (a single-agent environment from OpenAI Gym and a multi-agent environment from ViZDoom) confirm that NaaA framework optimization leads to better performance in reinforcement learning. To the best of our knowledge, no existing literature discusses reward distributions in the configuration described above. Because CommNet assumes an environment that distributes a uniform reward to all the agents, if the distributed reward is in limited supply (such as money), it causes the Tragedy of the Commons (Lloyd, 1833), where the reward of contributing agents will be reduced due to the participation of free riders. Although there are several MARL methods for distributing rewards according to agents' contribution, such as QUICR (Agogino & Tumer, 2006) and COMA (Sukhbaatar et al., 2016), they suppose the existence of a TTP and hence cannot be applied to the situation investigated here. The proposed method, Neuron as an Agent (NaaA), extends CommNet to actualize reward distributions in MARL without a TTP based on two key ideas: (i) inter-agent reward distribution and (ii) auction theory. Auction theory was introduced because inter-agent reward distributions were insufficient for optimization. Agents in NaaA maximize profit, the difference between their received rewards and the costs which they redistribute to other agents. If the framework is naively optimized, a trivial solution is obtained where agents reduce their costs to zero to maximize profits. Thus, NaaA employs auction theory in the game design to prevent costs from dropping below their necessary level. As a theoretical result, we show that agents autonomously evaluate the counterfactual return as the values of other agents. The counterfactual return is equal to the discounted cumulative sum of the counterfactual reward (Agogino & Tumer, 2006) distributed by QUICR and COMA. NaaA enables representation trades in peer-to-peer environments and, ultimately, regards neural network units as agents. As NaaA is capable of regarding units as agents without losing generality, this setting was utilized in the current study. The concept of the proposed method is illustrated in Figure 1.
[Figure 1 caption, partially recovered: existing methods (Agogino & Tumer, 2006; Sukhbaatar et al., 2016; Foerster et al., 2016; 2017) suppose a TTP to distribute the optimal reward to the agents. (b) Inter-agent reward distribution model (our model): some agents receive reward from the environment directly and redistribute it to other agents; the optimal reward without a TTP is determined by playing an auction game among the agents.]
An environment extending ViZDoom (Kempka et al., 2016), a POMDP environment, to MARL was used for the experiment. Two agents, a cameraman sending information and a main player defeating enemies with a gun, were placed in the environment. Results confirmed that the cameraman learned cooperative actions for sending information from blind spots (behind the main player) and outperformed CommNet in score. Interestingly, NaaA can be applied to single- and multi-agent settings, since it learns the optimal topology between the units. Adaptive DropConnect (ADC), which combines DropConnect (Wan et al., 2013) (randomly masking topology) with an adaptive algorithm (which has a higher probability of pruning connections with lower counterfactual returns), was proposed as a further application of NaaA. Experimental classification and reinforcement learning task results showed that ADC outperformed DropConnect. The remainder of this paper is organized as follows. In the next section, we present the problem setting. Then, we present the proposed method with its two key ideas, inter-agent reward distribution and auction theory, in Section 3. After related works are introduced in Section 4, the experimental results on classification, single-agent RL and MARL are shown in Section 5. Finally, a conclusion ends the paper.
This paper proposed a novel framework, Neuron as an Agent (NaaA), for training neural networks to perform various machine learning tasks, including classification (supervised learning) and sequential decision making (reinforcement learning). The NaaA framework is based on the idea of treating all neural network units as self-interested agents and optimizes the neural network as a multi-agent RL problem. This paper also proposes adaptive dropconnect, which extends dropconnect (Wan et al., 2013) by using an adaptive algorithm for masking the network topology. This work attempts to bring several fundamental principles of game theory to bear on neural network optimization problems in deep learning. Although the ideas are interesting and technically sound, and the proposed algorithms are demonstrated to outperform several baselines on various machine learning tasks, there are several major problems with this paper, including a lack of clarity of presentation, of insights, and of substantiation of many claims. These issues may need a significant amount of effort to fix, as I elaborate below.
1. Introduction
There are several important concepts, such as reward distribution and credit assignment, which are used (from the very beginning of the paper) without explanation until the final part of the paper. The motivation of the work is not very clear. There seems to be a gap between the first paragraph and the second paragraph. The authors mention that “From a micro perspective, the abstraction capability of each unit contribute to the return of the entire system. Therefore, we address the following questions. Will reinforcement learning work even if we consider each unit as an autonomous agent ” Is there any citation for the claim “From a micro perspective, the abstraction capability of each unit contribute to the return of the entire system”? It seems to me this is a very general claim. Even RL methods with linear function approximations use abstractions. Also, it is unclear to me why this is an interesting question. Does it have anything to do with existing issues in DRL? Moreover, the definition of an autonomous agent is not clear: do you mean a learning agent or a policy execution agent?
“it uses \epsilon-greedy as a policy, …” Do you mean exploration policy?
I also have some concerns regarding the claim that “We confirm that optimization with the framework of NaaA leads to better performance of RL”. Since only two baselines are compared to the proposed method, this claim seems too general to be true.
It is not clear to me why the authors mention the “negative result that the return decreases if we naively consider units as agents”. What is the big picture behind this claim?
“The counterfactual return is that by which we extend reward …” needs to be rewritten.
The last paragraph of the introduction discusses possible applications of the proposed methods without any substantiation; in particular, neither citations nor any related experiments of the authors are provided.
2. Related Work
“POSG, a class of reinforcement learning with multiple ..” -> reinforcement learning framework
“Another one is credit assignment. Instead of reward.. ” The two sentences are disconnected and need to be rewritten.
“This paper unifies both issues” sounds very weird. Do you mean “solves/considers both issues in a principled way”?
The introduction of GANs is very abrupt. Rather than starting from introducing those new concepts directly, it might be better to mention that the proposed method is related to many important concepts in game theory and GANs.
“, which we propose in a later part of this paper” -> which we propose in this paper
3. Background
“a function from the state and the action of an agent to the real value” -> a reward function
A citation should be provided for DRQN.
There is a big gap between the last two paragraphs of section 3.
4. Neuron as an agent
“We add the following assumption for characteristics of the v_i” -> assumptions for characterizing v_i
“to maximize toward maximizing its own return” -> to maximize its own return
We construct the framework of NaaA from the assumptions -> from these assumptions
“indicates that the unit gives additional value to the obtained data. …” I am not sure what this sentence means, given that \rho_ijt is not clearly defined.
5. Optimization
“NaaA assumes that all agents are not cooperative but selfish” Why? Is there any justification for such a claim?
What is the relation between \rho_jit and q_it?
“A buyer which cannot receive the activation approximates x_i with …” It is unclear why a buyer needs to do so, given that it cannot receive the activation anyway.
“Q_it maximizing the equation is designated as the optimal price.” Which equation?
e_j and 0 are not defined in equation 8.
6. Experiment
setare -> set are
What is the std for CartPole in Table 1?
It is hard to judge the significance of the results on the left side of Figure 2. It might be better to add error bars to those curves.
More description should be provided to explain the reward visualization on the right side of Figure 2. What reward? External/internal?
“Specifically, it is applicable to various methods as described below …” Related papers should be cited.
iclr_2018_Sktm4zWRb
Value iteration networks are an approximation of the value iteration (VI) algorithm implemented with convolutional neural networks to make VI fully differentiable. In this work, we study these networks in the context of robot motion planning, with a focus on applications to planetary rovers. The key challenging task in learning-based motion planning is to learn a transformation from terrain observations to a suitable navigation reward function. In order to deal with complex terrain observations and policy learning, we propose a value iteration recurrence, referred to as the soft value iteration network (SVIN). SVIN is designed to produce more effective training gradients through the value iteration network. It relies on a soft policy model, where the policy is represented with a probability distribution over all possible actions, rather than a deterministic policy that returns only the best action. We demonstrate the effectiveness of the proposed method in robot motion planning scenarios. In particular, we study the application of SVIN to very challenging problems in planetary rover navigation and present early training results on data gathered by the Curiosity rover that is currently operating on Mars.
Summary: The submission proposes a simple modification to the Value Iteration Networks (VIN) method of Tamar et al., basically consisting of assuming a stochastic policy and replacing the max-over-actions in value iteration with an expectation that weights actions proportional to their exponentiated Q-values. Since this change removes the main nondifferentiability of VINs, it is hypothesized that the resulting method will be easier to train than VINs, and experiments seem to support this hypothesis. Pros: + The proposed modification to VIN is simple, well-motivated, and addresses the nondifferentiability of VIN + Experiments on synthetic data demonstrate a significant improvement over the standard VIN method Cons: + Some important references are missing (e.g., MaxEnt IOC with deep-learned features) + Although intuitive, more detailed justification could be provided for replacing the max-over-actions with an exponentially-weighted average + No baselines are provided for the experiments with real data + All the experimental scenarios are fairly simple (2D grid-worlds with discrete actions, 1-channel input features) The proposed method is simple, well-motivated, and addresses a real concern in VINs, which is their nondifferentiability. Although many of the nonlinearities used in CNNs for computer vision applications are nondifferentiable, the theoretical grounds for using these in conjunction with gradient-based optimization is obviously questionable. Despite this, they are widely used for such applications because of strong empirical results showing that such nonlinearities are beneficial in image-processing applications. However, it would be incorrect to assume that because such nonlinearities work for image processing, they are also beneficial in the context of unrolling value iteration. Replacing the max-over-actions with an exponentially-weighted average is an intuitively well-motivated alternative because, as the authors note, it incorporates the values of suboptimal actions during the training procedure. We would therefore expect better or faster training, as the values of these suboptimal actions can be updated more frequently. The (admittedly limited) experiments bear out this hypothesis. Perhaps the most significant downside of this work is that it fails to acknowledge prior work in the RL and IOC literature that result in similar “smoothed” or “softmax" Bellman updates: in particular, MaxEnt IOC [A] and linearly-solvable MDPs [B] both fall in this category. Both of those papers clearly derive approximate Bellman equations from modified optimal control principles; although I believe this is also possible for the proposed update (Eq. 11), along the lines of the sentence after Eq. 11, this should be made more explicit/rigorous, and the result compared to [A,B]. Another important missing reference is [C], which learned cost maps with deep neural networks in a MaxEnt IOC framework. As far as I can tell, the application is identical to that of the present paper, and [C] may have some advantages: for instance, [C] features a principled, fully-differentiable training objective while also avoiding having to backprop through the inference procedure, as in VIN. Again, this raises the question of how the proposed method compares to MaxEnt IOC, both theoretically and experimentally. The experiments are also a bit lacking in a few ways. First, a baseline is only provided for the experiments with synthetic data. 
Although that experiment shows a promising, significant advantage over VIN, the lack of baselines for the experiment with real data is disappointing. Furthermore, the setting for the experiments is fairly simple, consisting of a grid-world with 1-channel input features. The setting is simple enough that even shallow IOC methods (e.g., [D]) would probably perform well; however, the deep IOC methods of [C] is also applicable and should probably also be evaluated as a baseline. In summary, although the method proposes an intuitively reasonable modification to VIN that seems to outperform it in limited experiments, the submission fails to acknowledge important related work (especially the MaxEnt IOC methods of [A,D]) that may have significant theoretical and practical advantages. Unfortunately, I believe the original VIN paper also failed to articulate the precise advantages of VIN over this prior work—which is not to say there are none, but it is clear that VINs applied to problems as simple as the one considered here have real competitors in prior work. Clarifying this connection, both theoretically and experimentally, would make this work much stronger and would be a valuable contribution to the literature. [A] Ziebart, Brian D. Modeling purposeful adaptive behavior with the principle of maximum causal entropy. Carnegie Mellon University, 2010. [B] Todorov, Emanuel. "Linearly-solvable Markov decision problems." Advances in neural information processing systems. 2007. [C] Wulfmeier et al. Watch This: Scalable Cost-Function Learning for Path Planning in Urban Environments. IROS 2016 [D] Ratliff, Nathan D., David Silver, and J. Andrew Bagnell. "Learning to search: Functional gradient techniques for imitation learning." Autonomous Robots 27.1 (2009): 25-53.
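To make the comparison with the "softmax"-style updates concrete, here is a minimal tabular sketch of a hard versus soft Bellman backup, where the soft version averages Q-values under a softmax policy, as described in the summary above. The temperature parameter and the tensor layout are my assumptions, not the authors' Eq. 11.

```python
import numpy as np

def hard_backup(R, P, V, gamma):
    # R: (S, A) rewards, P: (A, S, S) transitions with P[a, s, t] = Pr(t | s, a),
    # V: (S,) current state values
    Q = R + gamma * np.einsum('ast,t->sa', P, V)   # Q(s, a)
    return Q.max(axis=1)                           # standard value iteration

def soft_backup(R, P, V, gamma, temperature=1.0):
    Q = R + gamma * np.einsum('ast,t->sa', P, V)
    W = np.exp((Q - Q.max(axis=1, keepdims=True)) / temperature)
    W = W / W.sum(axis=1, keepdims=True)           # softmax policy over actions
    return (W * Q).sum(axis=1)                     # expected Q under the soft policy
```

Note that this is not identical to the log-sum-exp backups of MaxEnt IOC / linearly-solvable MDPs [A, B], which is exactly why the review asks for the relationship to be made explicit.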
iclr_2018_Bys_NzbC-
L1 and L2 regularizers are critical tools in machine learning due to their ability to simplify solutions. However, imposing strong L1 or L2 regularization with the gradient descent method easily fails, and this limits the generalization ability of the underlying neural networks. To understand this phenomenon, we investigate how and why learning fails for strong regularization. Specifically, we examine how gradients change over time for different regularization strengths and provide an analysis of why the gradients diminish so fast when strong regularization is imposed. We find that there exists a tolerance level of regularization strength, where the learning completely fails if the regularization strength goes beyond it. We propose a simple but novel method, Delayed Strong Regularization, in order to moderate the tolerance level. Experimental results show that our proposed approach indeed achieves strong regularization for both L1 and L2 regularizers and improves both accuracy and sparsity on public data sets. Our source code is published.
The work was prompted by an interesting observation: a phase transition can be observed in deep learning with stochastic gradient descent and Tikhonov regularization. When the regularization parameter exceeds a (data-dependent) threshold, the parameters of the model are driven to zero, thereby preventing any learning. The authors then propose to moderate this problem by letting the regularization parameter be zero for 5 to 10 epochs, and then applying the "strong" penalty parameter. In their experimental results, the phase transition is not observed anymore with their protocol. This leads to better performance, by using penalty parameters that would have prevented learning with the usual protocol. The problem targeted is important, in the sense that it reveals some of the difficulties related to non-convexity and the use of SGD that are often overlooked. The proposed protocol is reported to work well, but since it is really ad hoc, it fails to convince the reader that it provides the right solution to the problem. I would have found it much more satisfactory to either address the initialization issue by a proper warm-start strategy, or to explore standard optimization tools such as constrained optimization (i.e. Ivanov regularization), which could for example be implemented by stochastic projected gradient or barrier functions. I think that the problem would be better handled that way than with the proposed strategy, which seems to rely only on a rather limited number of experiments, and which may prove to be inefficient when dealing with big databases. To summarize, I believe that the paper addresses an important point, but that the tools advocated are really rudimentary compared with what has already been proposed elsewhere.
Details:
- there is a typo in the definition of the proximal operator in Eq. (9)
- there are many unsubstantiated speculations in the comments of the experimental section that do not add value to the paper
- the figure showing the evolution of the magnitude of the parameters arrives too late and could be complemented by the evolution of the data-fitting term of the training criterion
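For concreteness, my reading of the proposed delayed-regularization protocol, together with the proximal (soft-thresholding) alternative that Eq. (9) hints at, is sketched below; the warm-up length, function names, and the proximal variant are my assumptions, not the authors' released code.

```python
import numpy as np

def reg_strength(epoch, lam, warmup_epochs=5):
    # Delayed Strong Regularization: no penalty during the first few epochs,
    # then the full (strong) penalty once the network has started to learn.
    return 0.0 if epoch < warmup_epochs else lam

def l1_prox(w, step):
    # proximal operator of step * ||w||_1 (soft-thresholding), one standard
    # alternative to simply subtracting the L1 subgradient
    return np.sign(w) * np.maximum(np.abs(w) - step, 0.0)

def sgd_step(w, grad, lr, epoch, lam):
    w = w - lr * grad                                   # data-fitting gradient step
    return l1_prox(w, lr * reg_strength(epoch, lam))    # (possibly delayed) L1 shrinkage
```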
iclr_2018_HkcTe-bR-
The design of small molecules with bespoke properties is of central importance to drug discovery. However, significant challenges remain for computational methods, despite recent advances such as deep recurrent networks and reinforcement learning strategies for sequence generation, and it can be difficult to compare results across different works. This work proposes 19 benchmarks selected by subject experts, expands smaller datasets previously used to approximately 1.1 million training molecules, and explores how to apply new reinforcement learning techniques effectively for molecular design. The benchmarks here, built as OpenAI Gym environments, will be open-sourced to encourage innovation in molecular design algorithms and to enable usage by those without a background in chemistry. Finally, this work explores recent developments in reinforcement-learning methods with excellent sample complexity (the A2C and PPO algorithms) and investigates their behavior in molecular generation, demonstrating significant performance gains compared to standard reinforcement learning techniques.
The paper proposes a set of benchmarks for molecular design, and compares different deep models against them. The main contributions of the paper are 19 molecular design benchmarks (with the ChEMBL-23 dataset), including two molecular design evaluation criteria, and a comparison of some deep models using these benchmarks. The paper does not seem to include any method development. The paper suffers from a lack of focus. Several existing models are discussed at some length, while the benchmarks are introduced quite briefly. The dataset is not very clearly defined: it seems that there are 1.2 million training instances; does this apply to all benchmarks? The paper's title also does not seem to fit: this feels like a survey paper, which is not reflected in the title. Biologically, many important atoms are excluded from the dataset, for instance sodium, calcium and potassium. I don't see any reason to exclude these. What does "biological activities on 11538 targets" mean? The paper discusses molecular generation and reinforcement learning, but it is somewhat unclear how this relates to the proposed dataset, since a standard training/test setting is used. Are the test molecules somehow generated in a directed or undirected fashion? Shouldn't there also be experiments comparing ways to generate suitable molecules, and how well they match the proposed criteria? There should be benchmarks for predicting molecular properties (standard regression), and for generating molecules with certain properties. Currently it's unclear which types of problems are solved here. Table 1 lists 5 models, while Fig. 3 contains 7; why the discrepancy? In Table 1 the plotted runs seem to differ a lot from the average results (e.g. -0.43 to 0.15, or 0.32 to 0.83). Variances should be added, and preferably more than 3 initialisations used. Overall this is an interesting paper, but it does not have any methodological contribution, there are also few insightful results about the compared methods, and there is no meaningful analysis of the problem domain of molecules either.
iclr_2018_B1Z3W-b0W
Inference models, which replace an optimization-based inference procedure with a learned model, have been fundamental in advancing Bayesian deep learning, the most notable example being variational auto-encoders (VAEs). In this paper, we propose iterative inference models, which learn how to optimize a variational lower bound through repeatedly encoding gradients. Our approach generalizes VAEs under certain conditions, and by viewing VAEs in the context of iterative inference, we provide further insight into several recent empirical findings. We demonstrate the inference optimization capabilities of iterative inference models, explore unique aspects of these models, and show that they outperform standard inference models on typical benchmark data sets.
Instead of either optimization-based variational EM or an amortized inference scheme implemented via a neural network as in standard VAE models, this paper proposes a hybrid approach that essentially combines the two. In particular, the VAE inference step, i.e., estimation of q(z|x), is conducted via application of a recent learning-to-learn paradigm (Andrychowicz et al., 2016), whereby direct gradient ascent on the ELBO criterion with respect to the moments of q(z|x) is replaced with a neural network that iteratively outputs new parameter estimates using these gradients. The resulting iterative inference framework is applied to a couple of small datasets and shown to produce both faster convergence and a better likelihood estimate. Although probably difficult to understand for someone who is not already familiar with VAE models, I felt that this paper was nonetheless clear and well-presented, with a fair amount of useful background information and context. From a novelty standpoint though, the paper is not especially strong given that it represents a fairly straightforward application of (Andrychowicz et al., 2016). Indeed the paper perhaps anticipates this perspective and preemptively offers that "variational inference is a qualitatively different optimization problem" than that considered in (Andrychowicz et al., 2016), and also that non-recurrent optimization models are being used for the inference task, unlike prior work. But to me, these are rather minor differentiating factors, since learning-to-learn is a quite general concept already, and the exact model structure is not the key novel ingredient. That being said, the present use for variational inference nonetheless seems like a nice application, and the paper presents some useful insights such as Section 4.1 about approximating posterior gradients. Beyond background and model development, the paper presents a few experiments comparing the proposed iterative inference scheme against both variational EM and pure amortized inference as in the original, standard VAE. While these results are enlightening, most of the conclusions are not entirely unexpected. For example, given that the model is directly trained with the iterative inference criteria in place, the reconstructions from Fig. 4 seem like exactly what we would anticipate, with the last iteration producing the best result. It would certainly seem strange if this were not the case. And there is no demonstration of reconstruction quality relative to existing models, which could be helpful for evaluating relative performance. Likewise for Fig. 6, where faster convergence over traditional first-order methods is demonstrated; but again, these results are entirely expected, as this phenomenon has already been well documented in (Andrychowicz et al., 2016). In terms of Fig. 5(b) and Table 1, the proposed approach does produce significantly better values of the ELBO criterion; however, is this really an apples-to-apples comparison? For example, does the standard VAE have the same number of parameters/degrees-of-freedom as the iterative inference model, or might eq. (4) involve fewer parameters than eq. (5) since there are fewer inputs? Overall, I wonder whether iterative inference is better than standard inference with eq. (4), or whether the recurrent structure from eq. (5) just happens to implicitly create a better neural network architecture for the few examples under consideration. In other words, if one plays around with the standard inference architecture a bit, perhaps similar results could be obtained.
Other minor comments:
* In Fig. 5(a), it seems like the performance of the standard inference model is still improving but the iterative inference model has mostly saturated.
* A downside of the iterative inference model not discussed in the paper is that it requires computing gradients of the objective even at test time, whereas the standard VAE model would not.
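To spell out the comparison being made, my understanding of the two inference procedures is sketched below; `elbo_grad` and `f_phi` are placeholder callables of my own, not the paper's interface.

```python
def classic_inference(x, lam, elbo_grad, lr=0.1, steps=16):
    # variational-EM style: plain gradient ascent on the ELBO with respect to
    # the variational parameters lam (e.g. mean and log-variance of q(z|x))
    for _ in range(steps):
        lam = lam + lr * elbo_grad(x, lam)
    return lam

def iterative_inference(x, lam, elbo_grad, f_phi, steps=16):
    # learning-to-learn style: a trained network f_phi maps the current ELBO
    # gradient (and the current estimate) to the next estimate of lam
    for _ in range(steps):
        lam = f_phi(elbo_grad(x, lam), lam)
    return lam
```

A standard amortized VAE encoder, as I understand it, instead maps x directly to lam in a single feed-forward pass, which is what the comparison in Fig. 5(b)/Table 1 is pitting the iterative scheme against; note that the iterative scheme needs `elbo_grad` even at test time, as the last minor comment above points out.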
iclr_2018_r1uOhfb0W
An ensemble of neural networks is known to be more robust and accurate than an individual network, however usually with linearly-increased cost in both training and testing. In this work, we propose a two-stage method to learn Sparse Structured Ensembles (SSEs) for neural networks. In the first stage, we run SG-MCMC with group sparse priors to draw an ensemble of samples from the posterior distribution of network parameters. In the second stage, we apply weight-pruning to each sampled network and then perform retraining over the remaining connections. In this way of learning SSEs with SG-MCMC and pruning, we not only achieve high prediction accuracy, since SG-MCMC enhances exploration of the model-parameter space, but also reduce memory and computation cost significantly in both training and testing of NN ensembles. This is thoroughly evaluated in the experiments of learning SSE ensembles of both FNNs and LSTMs. For example, in LSTM-based language modeling (LM), we obtain a 21% relative reduction in LM perplexity by learning an SSE of 4 large LSTM models, which has only 30% of the model parameters and 70% of the computations in total, as compared to the baseline large LSTM LM. To the best of our knowledge, this work represents the first methodology and empirical study of integrating SG-MCMC, group sparse priors and network pruning together for learning NN ensembles.
In this paper, the authors present a new framework for training ensembles of neural networks. The approach is based on recent scalable MCMC methods, namely stochastic gradient Langevin dynamics. The paper is overall well-written and the ideas are clear. The main contributions of the paper, namely using SG-MCMC methods within deep learning and then increasing the computational efficiency by group sparsity + pruning, are valuable and can have a significant impact in the domain. Besides, the proposed approach is more elegant than the competing ones, while still not being theoretically justified completely. I have the following minor comments:
1) The authors mention that retraining significantly improves the performance, even without pruning. What is the explanation for this? If there is no pruning, I would expect that all the samples would converge to the same minimum after retraining. Therefore, the reason why retraining improves the performance in all cases is not clear to me.
2) The notation |\theta_g| is confusing; the authors should use a different symbol.
3) After section 4, the language becomes quite informal sometimes; the authors should check the sentences once again.
4) The results with SGD (1 model) + GSP + PR should be added in order to have a better understanding of the improvements provided by the ensemble networks.
5) Why does the performance get worse "obviously" when the pruning is 95% and why is it not obvious when the pruning is 90%?
6) There are several typos:
pg7: drew -> drawn
pg7: detail -> detailed
pg7: changing -> challenging
pg9: is strongly depend on -> depends on
pg9: two curve -> two curves
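For readers unfamiliar with SG-MCMC, the basic SGLD update underlying the first stage is sketched below; the gradient of the group-sparse prior is left abstract, and the variable names are my own, not the paper's.

```python
import numpy as np

def sgld_step(theta, minibatch_grad_loglik, prior_grad, step_size, N, n):
    """One stochastic gradient Langevin dynamics update.

    minibatch_grad_loglik : sum over the minibatch of grad log p(x_i | theta)
    prior_grad            : grad log p(theta), e.g. from a group-sparse prior
    N, n                  : dataset size and minibatch size
    """
    drift = 0.5 * step_size * (prior_grad + (N / n) * minibatch_grad_loglik)
    noise = np.sqrt(step_size) * np.random.randn(*theta.shape)
    return theta + drift + noise
```

Collecting the parameter vectors visited by this chain (after burn-in) gives the ensemble of samples that is then pruned and retrained in the second stage.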
iclr_2018_r1dHXnH6-
Published as a conference paper at ICLR 2018 NATURAL LANGUAGE INFERENCE OVER INTERACTION SPACE The Natural Language Inference (NLI) task requires an agent to determine the logical relationship between a natural language premise and a natural language hypothesis. We introduce the Interactive Inference Network (IIN), a novel class of neural network architectures that is able to achieve high-level understanding of the sentence pair by hierarchically extracting semantic features from interaction space. We show that an interaction tensor (attention weight) contains semantic information to solve natural language inference, and a denser interaction tensor contains richer semantic information. One instance of such an architecture, the Densely Interactive Inference Network (DIIN), demonstrates state-of-the-art performance on large-scale NLI corpora and a large-scale NLI-like corpus. It's noteworthy that DIIN achieves a greater than 20% error reduction on the challenging Multi-Genre NLI (MultiNLI) dataset with respect to the strongest published system.
This paper proposes the Densely Interactive Inference Network to solve recognizing textual entailment by extracting semantic features from the interaction tensor end-to-end. Their results show that this model has better performance than others. Even though the results of this paper are interesting, I have problems with the paper's writing and the motivation for their architecture:
- The paper is well beyond the 8-page limit for ICLR. The paper should be 8 pages + references. This paper has 11 pages excluding the references.
- The introduction text on the 2nd page doesn't flow smoothly and is sometimes hard to follow.
- In my view, section 3.1 is redundant and the text in section 3.2 can be improved.
- The encoding layer in section 3.2 is really hard to follow with regard to the equations and naming, e.g. p_{itr att}, and why choose \alpha(a,b,w)?
- For the encoding layer in section 3.2, there is no motivation for why it needs to use the fuse gate.
- The feature extraction layer is again very confusing. What are FSDR and TSDR?
- Why does the paper use Eq. 8? What is the intuition behind it?
- One important thing is missing in this paper: I didn't understand what the motivation behind using each of these components is, and how each of these components is selected.
- How long does it take to train this network? Since it needs to work with other models (GloVe + char features + POS tagging, ...), it requires a lot of effort to set up this network.
Even though the paper outperforms others, it would be useful to the community to provide the motivation and intuition for why each of these components was chosen. This is important especially for this paper because each layer of their architecture uses multiple components, i.e. the embedding layer [GloVe + character features + syntactical features]. In my view, having just good results is not enough and will not guarantee a publication in ICLR; the paper should be well-written and well-motivated in order to be useful for future research and other researchers. In summary, I don't think the paper is ready yet and it needs significant revision.
---------------------------------------------------------------------------------------------------------------------------------------------------------------
Comments after the rebuttal and revision: I'd like to thank the authors for the revision and their answers. Here are my comments after reading the revised version and considering the rebuttal:
- It is fair to say that the paper's presentation is much better now. That said, I still have issues with the 11 pages.
- The authors imply on page 2, end of paragraph 5, that this is the first work that shows that the attention weights contain rich semantics and that previous works used attention merely as a medium for alignment. Referring to some of the related works (cited in this paper), I am not sure this is a correct statement.
- The authors claim to introduce a new class of architectures for NLI and its generalizability for this problem. In my view, this is a very strong statement and unsupported in the paper, especially considering the ablation studies (Table 5). In order for the model to show the best performance, all these components should come together. I am not sure why this method can be considered a class of architectures and not just a new model?
Some other comments:
- On page 4, the citation is missing for highway networks.
- On page 5, equation 1, the parenthesis should close after \hat{P}_j.
Since the new version has been improved, I have increased my review score. However, I'm still not convinced that this paper would be a good fit for ICLR given its novelty and contribution.
iclr_2018_H1UOm4gA-
INTERACTIVE GROUNDED LANGUAGE ACQUISITION AND GENERALIZATION IN A 2D WORLD We build a virtual agent for learning language in a 2D maze-like world. The agent sees images of the surrounding environment, listens to a virtual teacher, and takes actions to receive rewards. It interactively learns the teacher's language from scratch based on two language use cases: sentence-directed navigation and question answering. It learns simultaneously the visual representations of the world, the language, and the action control. By disentangling language grounding from other computational routines and sharing a concept detection function between language grounding and prediction, the agent reliably interpolates and extrapolates to interpret sentences that contain new word combinations or new words missing from training sentences. The new words are transferred from the answers of language prediction. Such a language ability is trained and evaluated on a population of over 1.6 million distinct sentences consisting of 119 object words, 8 color words, 9 spatial-relation words, and 50 grammatical words. The proposed model significantly outperforms five comparison methods for interpreting zero-shot sentences. In addition, we demonstrate human-interpretable intermediate outputs of the model in the appendix.
[Overview]
In this paper, the authors propose a unified model for combining vision, language, and action. It is aimed at controlling an agent in a virtual environment to move to a specified location in a 2D map, and to answer the user's questions as well. To address this problem, the authors propose an explicit grounding approach to connect the words in a sentence with spatial regions in the images. Specifically, in this way, the model can exploit the outputs of the concept detection module to perform actions and question answering jointly. In the experiments, the authors compared with several previous attention methods to show the effectiveness of the proposed concept detection module and demonstrated its superiority in several configurations, including in-domain and out-of-domain cases.
[Strengths]
1. I think this paper proposes interesting tasks that combine vision, language, and actions. As we know, in a realistic environment, all three components are necessary to complete complex tasks which require interaction with the physical environment. The authors should release the dataset to promote research in this area.
2. The authors propose a simple method to ground the language on visual input. Specifically, the authors ground each word in a sentence to all locations of the visual map, and then perform a simple concept detection upon it. Then, the model uses this intermediate representation to guide the navigation of the agent in the 2D map and visual question answering as well.
3. From the experiments, it is shown that the proposed model outperforms several baseline methods on both normal tasks and out-of-domain ones. According to the visualizations, the interpreter can generate meaningful attention maps given a textual query.
[Weakness]
1. The definition of explicit grounding is a bit misleading. Though the grounding or attention is performed for each word at each location of the visual map, it is still a kind of soft attention, except that it is performed for each word in a sentence. As far as I know, this has been done in several previous works, such as: (a). Hierarchical question-image co-attention for visual question answering (https://scholar.google.com/scholar?oi=bibs&cluster=15146345852176060026&btnI=1&hl=en). Lu et al. NIPS 2016. (b). Graph-Structured Representations for Visual Question Answering. Teney et al. arXiv 2016. More recently, we have seen more explicit ways of visual grounding, such as: (c). Bottom-up and top-down attention for image captioning and VQA (https://arxiv.org/abs/1707.07998). Anderson et al. arXiv 2017.
2. Since the model is aimed at grounding the language on the vision based on interactions, it is worth showing how well the final model can ground the text words to each of the visual objects. Say, show the affinity matrix between the words and the objects to indicate the correlations.
[Summary]
I think this is a good paper which integrates vision, language, and actions in a virtual environment. I foresee that more and more work will be devoted to this area, considering its close connection to our daily life. To address this problem, the authors propose a simple model to ground words on visual signals, which proves to outperform previous methods, such as CA, SAN, etc. According to the visualization, the model can attend to the right region of the image to complete the navigation and QA tasks. As I said, the authors should rephrase the definition of explicit grounding, to make it clearly distinguished from the previous work I listed above.
Also, the authors should definitely show the grounding attention results of words and visual signal jointly, i.e., showing them together in one figure instead of separately in Figure 9 and Figure 10.
iclr_2018_ByOnmlWC-
Published as a conference paper at ICLR 2018 POLICY OPTIMIZATION BY GENETIC DISTILLATION Genetic algorithms have been widely used in many practical optimization problems. Inspired by natural selection, operators, including mutation, crossover and selection, provide effective heuristics for search and black-box optimization. However, they have not been shown useful for deep reinforcement learning, possibly due to the catastrophic consequence of parameter crossovers of neural networks. Here, we present Genetic Policy Optimization (GPO), a new genetic algorithm for sample-efficient deep policy optimization. GPO uses imitation learning for policy crossover in the state space and applies policy gradient methods for mutation. Our experiments on MuJoCo tasks show that GPO as a genetic algorithm is able to provide superior performance over the state-of-the-art policy gradient methods and achieves comparable or higher sample efficiency.
This is a highly interesting paper that proposes a set of methods that combine ideas from imitation learning, evolutionary computation and reinforcement learning in a novel way. It combines the following ingredients:
a) a population-based setup for RL
b) a pair-selection and crossover operator
c) a policy-gradient based “mutation” operator
d) filtering data by high-reward trajectories
e) two-stage policy distillation
In its current shape it has a couple of major flaws (but those can be fixed during the revision/rebuttal period):
(1) Related work. It is presented in a somewhat ahistoric fashion. In fact, ideas for evolutionary methods applied to RL tasks have been widely studied, and there is an entire research field called “neuroevolution” that specifically looks into which mutation and crossover operators work well for neural networks. I'm listing a small selection of relevant papers below, but I'd encourage the authors to read a bit more broadly, and relate their work to the myriad of related older methods. Ideally, a more reasonable form of parameter-crossover (see references) could be compared to -- the naive one is too much of a straw man in my opinion. To clarify: I think the proposed method is genuinely novel, but a bit of context would help the reader understand which aspects are and which aspects aren't.
(2) Ablations. The proposed method has multiple ingredients, and some of these could be beneficial in isolation: for example a population of size 1 with an interleaved distillation phase where only the high-reward trajectories are preserved could be a good algorithm on its own. Or conversely, GPO without high-reward filtering during crossover. Or a simpler genetic algorithm that just kills off the worst members of the population and replaces them by (mutated) clones of better ones, etc.
(3) Reproducibility. There are a lot of details missing; the setup is quite complex, but only partially described. Examples of missing details are: how are the high-reward trajectories filtered? What is the total computation time of the different variants and baselines? Does the x-axis on the plots include the data required for crossover/DAgger? What do the shaded regions on the plots indicate? The loss on \pi_S should be made explicit. An open-source release would be ideal.
Minor points:
- naively, the selection algorithm might not scale well with the population size (exhaustively comparing all pairs), maybe discuss that?
- the filtering of high-reward trajectories is what estimation of distribution algorithms [2] do as well, and they have a known failure mode of premature convergence because diversity/variance shrinks too fast. Did you investigate this?
- the language at the end of section 3 is very vague and noncommittal -- maybe just state what you did, and separately give future work suggestions?
- there are multiple distinct metrics that could be used on the x-axis of plots, namely: wallclock time, sample complexity, number of updates. I suspect that the results will look different when plotted in different ways, and would enjoy some extra plots in the appendix. For example the ordering in Figure 6 would be inverted if plotting as a function of sample complexity?
- the A2C results are much worse, presumably because batch sizes are different? So I'm not sure how to interpret them: should they have been run for longer?
Maybe they could be relegated to the appendix? References: [1] Gomez, F. J., & Miikkulainen, R. (1999). Solving non-Markovian control tasks with neuroevolution. [2] Larranaga, P. (2002). A review on estimation of distribution algorithms. [3] Stanley, K. O., & Miikkulainen, R. (2002). Evolving neural networks through augmenting topologies. [4] Igel, C. (2003). Neuroevolution for reinforcement learning using evolution strategies. [5] Hausknecht, M., Lehman, J., Miikkulainen, R., & Stone, P. (2014). A neuroevolution approach to general atari game playing. [6] Gomez, F., Schmidhuber, J., & Miikkulainen, R. (2006). Efficient nonlinear control through neuroevolution. Pros: - results - novelty of idea - crossover visualization, analysis - scalability Cons: - missing background - missing ablations - missing details [after rebuttal: revised the score from 7 to 8]
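To make the point about high-reward filtering concrete, here is a minimal Python sketch of what a trajectory filter for the crossover/distillation data could look like. The function name, the list-of-tuples data layout and the top-quantile criterion are illustrative assumptions, not details taken from the paper.

import numpy as np

def filter_high_reward(trajectories, returns, keep_fraction=0.2):
    # Keep the top `keep_fraction` of trajectories by episodic return.
    # trajectories: list of (states, actions) tuples collected from both parents
    # returns: one scalar episodic return per trajectory
    returns = np.asarray(returns, dtype=float)
    cutoff = np.quantile(returns, 1.0 - keep_fraction)
    return [traj for traj, r in zip(trajectories, returns) if r >= cutoff]

# The kept (state, action) pairs would then form the supervision set for the
# imitation-learning-based crossover; as with estimation-of-distribution
# algorithms, an aggressive keep_fraction can shrink diversity too fast.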
iclr_2018_B1EA-M-0Z
Published as a conference paper at ICLR 2018 DEEP NEURAL NETWORKS AS GAUSSIAN PROCESSES It has long been known that a single-layer fully-connected neural network with an i.i.d. prior over its parameters is equivalent to a Gaussian process (GP), in the limit of infinite network width. This correspondence enables exact Bayesian inference for infinite width neural networks on regression tasks by means of evaluating the corresponding GP. Recently, kernel functions which mimic multi-layer random neural networks have been developed, but only outside of a Bayesian framework. As such, previous work has not identified that these kernels can be used as covariance functions for GPs and allow fully Bayesian prediction with a deep neural network. In this work, we derive the exact equivalence between infinitely wide deep networks and GPs. We further develop a computationally efficient pipeline to compute the covariance function for these GPs. We then use the resulting GPs to perform Bayesian inference for wide deep neural networks on MNIST and CIFAR-10. We observe that trained neural network accuracy approaches that of the corresponding GP with increasing layer width, and that the GP uncertainty is strongly correlated with trained network prediction error. We further find that test performance increases as finite-width trained networks are made wider and more similar to a GP, and thus that GP predictions typically outperform those of finite-width networks. Finally we connect the performance of these GPs to the recent theory of signal propagation in random neural networks.
Neal (1994) showed that a one hidden layer Bayesian neural network, under certain conditions, converges to a Gaussian process as the number of hidden units approaches infinity. Neal (1994) and Williams (1997) derive the resulting kernel functions for such Gaussian processes when the neural networks have certain transfer functions. Similarly, the authors show an analogous result for deep neural networks with multiple hidden layers and an infinite number of hidden units per layer, and show the form of the resulting kernel functions. For certain transfer functions, the authors perform a numerical integration to compute the resulting kernels. They perform experiments on MNIST and CIFAR-10, doing classification by scaled regression. Overall, the work is an interesting read, and a nice follow-up to Neal’s earlier observations about 1 hidden layer neural networks. It combines several insights into a nice narrative about infinite Bayesian deep networks. However, the practical utility, significance, and novelty of this work -- in its current form -- are questionable, and the related work section, analysis, and experiments should be significantly extended. In detail: (1) This paper misses some obvious connections and references, such as * Krauth et al. (2017): “Exploring the capabilities and limitations of Gaussian process models” for recursive kernels with GPs. * Hazan & Jaakkola (2015): “Steps Toward Deep Kernel Methods from Infinite Neural Networks” for GPs corresponding to NNs with more than one hidden layer. * The growing body of work on deep kernel learning, which “combines the inductive biases and representation learning abilities of deep neural networks with the non-parametric flexibility of Gaussian processes”. E.g.: (i) “Deep Kernel Learning” (AISTATS 2016); (ii) “Stochastic Variational Deep Kernel Learning” (NIPS 2016); (iii) “Learning Scalable Deep Kernels with Recurrent Structure” (JMLR 2017). These works should be discussed in the text. (2) Moreover, as the authors rightly point out, covariance functions of the form used in (4) have already been proposed. It seems the novelty here is mainly the empirical exploration (will return to this later), and numerical integration for various activation functions. That is perfectly fine -- and this work is still valuable. However, the statement “recently, kernel functions for multi-layer random neural networks have been developed, but only outside of a Bayesian framework” is incorrect. For example, Hazan & Jaakkola (2015) in “Steps Toward Deep Kernel Methods from Infinite Neural Networks” consider GP constructions with more than one hidden layer. Thus the novelty of this aspect of the paper is overstated. See also comment [*] later on the presentation. In any case, the derivation for computing the covariance function (4) of a multi-layer network is a very simple reapplication of the procedure in Neal (1994). What is less trivial is estimating (4) for various activations, and that seems to be the major methodological contribution (a small sketch of this kernel recursion, for the ReLU case, follows the review). Also note that the multidimensional CLT here is glossed over. It’s actually really unclear whether the final limit will converge to a multidimensional Gaussian with that kernel without stronger conditions. This derivation should be treated more thoroughly and carefully. (3) Most importantly, in this derivation, we see that the kernels lose the interesting representations that come from depth in deep neural networks. Indeed, Neal himself says that in the multi-output setting, all the outputs become uncorrelated.
Multi-layer representations are mostly interesting because each layer shares hidden basis functions. Here, the sharing is essentially meaningless, because the variance of the weights in this derivation shrinks to zero. In Neal’s case, the method was explored for single output regression, where the fact that we lose this sharing of basis functions may not be so restrictive. However, these assumptions are very constraining for multi-output classification and also interesting multi-output regressions. [*]: Generally, in reading the abstract and introduction, we get the impression that this work somehow allows us to use really deep and infinitely wide neural networks as Gaussian processes, and even without the pain of training these networks. “Deep neural networks without training deep networks”. This is not an accurate portrayal. The very title “Deep neural networks as Gaussian processes” is misleading, since it’s not really the deep neural networks that we know and love. In fact, you lose valuable structure when you take these limits, and what you get is very different than a standard deep neural network. In this sense, the presentation should be re-worked. (4) Moreover, neural networks are mostly interesting because they learn the representation. To do something similar with GPs, we would need to learn the kernel. But here, essentially no kernel learning is happening. The kernel is fixed. (5) Given the above considerations, there is great importance in understanding the practical utility of the proposed approach through a detailed empirical evaluation. In other words, how structured is this prior and does it really give us some of the interesting properties of deep neural networks, or is it mostly a cute mathematical trick? Unfortunately, the empirical evaluation is very preliminary, and provides no reassurance that this approach will have any practical relevance: (i) Directly performing regression on classification problems is very heuristic and unnecessary. (ii) Given the loss of dependence between neurons in this approach, it makes sense to first explore this method on single output regression, where we will likely get the best idea of its useful properties and advantages. (iii) The results on CIFAR10 are very poor. We don’t need to see SOTA performance to get some useful insights in comparing for example parametric vs non-parametric, but 40% more error than SOTA makes it very hard to say whether any of the observed patterns hold weight for more competitive architectural choices. A few more minor comments: (i) How are you training a GP exactly on 50k training points? Even storing a 50k x 50k matrix requires about 20GB of RAM. Even with the best hardware, computing the marginal likelihood dozens of times to learn hyperparameters would be near impossible. What are the runtimes? (ii) "One benefit in using the GP is due to its Bayesian nature, so that predictions have uncertainty estimates (Equation (9)).” The main benefit of the GP is not the uncertainty in the predictions, but the marginal likelihood which is useful for kernel learning.
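For reference, here is a minimal Python sketch of the layer-wise kernel recursion discussed above, specialized to ReLU activations, for which the expectation has a closed form (the arc-cosine kernel of Cho & Saul, 2009). The variable names and the particular weight/bias variances are illustrative assumptions, not the paper's code or settings.

import numpy as np

def relu_expectation(kxx, kyy, kxy):
    # E[relu(u) relu(v)] for (u, v) ~ N(0, [[kxx, kxy], [kxy, kyy]])
    corr = np.clip(kxy / np.sqrt(kxx * kyy), -1.0, 1.0)
    theta = np.arccos(corr)
    return np.sqrt(kxx * kyy) / (2 * np.pi) * (np.sin(theta) + (np.pi - theta) * np.cos(theta))

def deep_relu_kernel(x, y, depth, sigma_w2=1.6, sigma_b2=0.1):
    # Covariance of an infinitely wide ReLU network with `depth` hidden layers.
    d = x.shape[0]
    kxx = sigma_b2 + sigma_w2 * np.dot(x, x) / d
    kyy = sigma_b2 + sigma_w2 * np.dot(y, y) / d
    kxy = sigma_b2 + sigma_w2 * np.dot(x, y) / d
    for _ in range(depth):
        exy = relu_expectation(kxx, kyy, kxy)
        exx = kxx / 2.0   # E[relu(u)^2] = kxx / 2 for zero-mean Gaussian u
        eyy = kyy / 2.0
        kxy = sigma_b2 + sigma_w2 * exy
        kxx = sigma_b2 + sigma_w2 * exx
        kyy = sigma_b2 + sigma_w2 * eyy
    return kxy

For activations without such a closed form, the expectation inside the loop is what would have to be estimated by numerical integration, which is where the non-trivial work lies.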
iclr_2018_SkHkeixAW
Regularization is one of the crucial ingredients of deep learning, yet the term regularization has various definitions, and regularization methods are often studied separately from each other. In our work we present a novel, systematic, unifying taxonomy to categorize existing methods. We distinguish methods that affect data, network architectures, error terms, regularization terms, and optimization procedures. We identify the atomic building blocks of existing methods, and decouple the assumptions they enforce from the mathematical tools they rely on. We do not provide all details about the listed methods; instead, we present an overview of how the methods can be sorted into meaningful categories and sub-categories. This helps reveal links and fundamental similarities between them. Finally, we include practical recommendations both for users and for developers of new regularization methods.
This paper is unusual in that it is more of a review than contributing novel knowledge. It considers a taxonomy of all the ways that machine learning (mostly deep learning) methods can achieve a form of regularization. Unfortunately, it starts with a definition of regularization ('making the model generalize better') which I believe misses the point which was made in Goodfellow et al 2016 ('intend to improve test error but not necessarily training error'), i.e., that we would like to separate as much as possible the regularization effects from the optimization effect. Indeed, under the definition proposed here, any improvement in the optimizer could be considered like a regularizer, so long as we are not in the overfitting regime. That does not sound right to me. There are several places where the authors make TOO STRONG STATEMENTS, taking for truth what are simply beliefs with no strong supporting evidence (at least published). This is not good for a review and when making recommendations. The other weakness I see in this paper is that I did not get a sense that the taxonomy really helped us (me at least) to get insight into the different methods being cited. Besides the obvious proposal to combine ideas to write new papers (but we did not need that paper to figure that out) I did not find much meat in the 'future directions' section. However, I think that except in a few places the understanding of the field displayed by the authors is pretty good and, with corrections, the paper could serve as a useful reference for students of deep learning. The recommendations were reasonable although lacking empirical support (or pointers to the literature), so I would take them somewhat carefully, more as the current 'group think' than ground truth. Finally, here are a few minor points which could be fixed. Eq. 1: in typical DL, minimization is approximate, not exact, so the proposed formalism does not reflect reality. Eq. 4: in many cases, the noise is not added (e.g. dropout), so that should be clarified there (a small sketch contrasting the two follows below). page 3, first bullet of 'Effect on the data representation': not clear, may want to give translations as an example of such transformations. page 8, activation functions: the ReLU is actually older than the cited papers, it was used by computational neuroscientists a long time ago. Jarrett 2009 did not use the ReLU but an absolute-value rectifier and it was Glorot 2011 who showed that the ReLU really kicked ass for deeper networks. Nair 2010 used the ReLU in a very different context (RBMs), not really feedforward multi-layer networks where it shines now. In that same section (and probably elsewhere) there are TOO STRONG STATEMENTS, e.g., the "facts" mentioned are not facts but merely folk belief, as far as I know, and I would like to see well-done supporting evidence before treating those as facts. Note that approximating the sigmoid precisely would require many ReLUs! page 8: it is not clear how multi-task learning fits under the 'architecture' formalism provided at the beginning of section 4. section 7 (page 10): there is earlier work on the connection between early stopping and L2 regularization, at least dating back to Ronan Collobert's PhD thesis (with neural nets), probably earlier for linear systems.
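Regarding the Eq. 4 comment that noise is not always added, here is a minimal numpy sketch contrasting additive input noise with dropout's multiplicative mask; the noise scale and drop probability are arbitrary illustrative values.

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))            # a toy mini-batch of activations

# Additive noise, as suggested by a literal reading of Eq. 4: x_tilde = x + eps
additive = x + rng.normal(scale=0.1, size=x.shape)

# Dropout is multiplicative: x_tilde = x * m / (1 - p), with m ~ Bernoulli(1 - p)
p = 0.5
mask = rng.binomial(1, 1 - p, size=x.shape)
dropped = x * mask / (1 - p)           # inverted-dropout scaling keeps E[x_tilde] = x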
iclr_2018_B1uvH_gC-
We propose a metric-learning framework for computing distance-preserving maps that generate low-dimensional embeddings for a certain class of manifolds. We employ Siamese networks to solve the problem of least squares multidimensional scaling for generating mappings that preserve geodesic distances on the manifold. In contrast to previous parametric manifold learning methods we show a substantial reduction in training effort enabled by the computation of geodesic distances in a farthest point sampling strategy. Additionally, the use of a network to model the distance-preserving map reduces the complexity of the multidimensional scaling problem and leads to an improved non-local generalization of the manifold compared to analogous non-parametric counterparts. We demonstrate our claims on point-cloud data and on image manifolds and show a numerical analysis of our technique to facilitate a greater understanding of the representational power of neural networks in modeling manifold data.
The authors argue that the spectral dimensionality reduction techniques are too slow, due to the complexity of computing the eigenvalue decomposition, and that they are not suitable for out-of-sample extension. They also note the limitation of neural networks, which require huge amounts of data to properly learn the data structure. The authors therefore propose to first sub-sample the data and afterwards to learn an MDS-like cost function directly with a neural network, resulting in a parametric framework. The paper should be checked for grammatical errors, such as the consistent use of (no) hyphen in low-dimensional (or low dimensional). The abbreviations should be written out on the first use, e.g. MLP, MDS, LLE, etc. In the introduction the authors claim that the complexity of parametric techniques does not depend on the number of data points, or that moving to parametric techniques would reduce memory and computational complexities. This is in general not true. Even if the number of parameters is small, learning them might require complex computations on the whole data set. On the other hand, even if the number of parameters is equal to the number of data points, the computations could be trivial, thus resulting in a complexity of O(N). In section 2.1, the authors claim "Spectral techniques are non-parametric in nature"; this is wrong again. E.g. PCA can be formulated as MDS (thus spectral), but can be seen as a parametric mapping which can be used to project new points. In section 2.2, it says "observation that the double centering...". Can you provide a citation for this? In section 3, the authors propose their technique, which should be faster and require less data than the previous methods, but they do not perform an analysis of computational complexity to support this claim. It is not quite clear from the text what the resulting complexity would be. With N as the number of data points and M as the number of landmarks, from the description on page 4 it seems the complexity would be O(N + M^2), but the steps 1 and 2 on page 5 suggest it would be O(N^2 + M^2). Unfortunately, it is also not clear what the complexity of previous techniques, e.g. DrLim, is. Figure 3, contrary to the text, does not provide a visualisation of the sampling mechanism. In the experiments section, can you provide a citation for ADAM and explain how the parameters were selected? Also, it is not meaningful to measure the quality of a visualisation via the MDS fit. There are more useful approaches to this task, such as the quality framework [*]. In figure 4a, the x-axis should be "number of landmarks". It is not clear why equation 6 holds. Citation? It is also not clear how exactly equation 7 is evaluated. It says "By varying the number of layers and the number of nodes...", but the nodes and layers are not a part of the equation. The notation for equation 8 is not explained. Figure 6a shows visualisations by different techniques and is evaluated "by looking at it". Again, use [*]. [*] Lee, J. A., & Verleysen, M. (2010). Scale-independent quality criteria for dimensionality reduction. Pattern Recognition Letters, 31(14), 2248-2257. doi:10.1016/j.patrec.2010.04.013
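For reference, the least-squares MDS objective that such a parametric (Siamese) mapping would minimize over the landmark set can be written as in the Python sketch below. This is a generic formulation with assumed names; it only states the objective (which already exposes the O(M^2) pairwise cost per evaluation), while an actual implementation would compute it in an autodiff framework and optimize the network weights inside `embed`.

import numpy as np

def stress_loss(embed, landmarks, geodesic):
    # Least-squares MDS stress of a parametric map `embed`.
    # landmarks: array of shape (M, D) with the sub-sampled points
    # geodesic:  array of shape (M, M) with precomputed geodesic distances
    z = np.stack([embed(x) for x in landmarks])         # (M, d) low-dimensional codes
    diff = z[:, None, :] - z[None, :, :]
    euclid = np.sqrt((diff ** 2).sum(-1) + 1e-12)
    return ((euclid - geodesic) ** 2).sum() / (landmarks.shape[0] ** 2)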
iclr_2018_SkmM6M_pW
Inspired by neurophysiological discoveries of navigation cells in the mammalian brain, we introduce the first deep neural network architecture for modeling Egocentric Spatial Memory (ESM). It learns to estimate the pose of the agent and progressively construct top-down 2D global maps from egocentric views in a spatially extended environment. During the exploration, our proposed ESM network model updates belief of the global map based on local observations using a recurrent neural network. It also augments the local mapping with a novel external memory to encode and store latent representations of the visited places based on their corresponding locations in the egocentric coordinate. This enables the agents to perform loop closure and mapping correction. This work contributes in the following aspects: first, our proposed ESM network provides an accurate mapping ability which is vitally important for embodied agents to navigate to goal locations. In the experiments, we demonstrate the functionalities of the ESM network in random walks in complicated 3D mazes by comparing with several competitive baselines and state-of-the-art Simultaneous Localization and Mapping (SLAM) algorithms. Secondly, we faithfully hypothesize the functionality and the working mechanism of navigation cells in the brain. Comprehensive analysis of our model suggests the essential role of individual modules in our proposed architecture and demonstrates efficiency of communications among these modules. We hope this work would advance research in the collaboration and communications over both fields of computer science and computational neuroscience.
Significance of Contributions Unclear. The paper describes a neural network architecture for monocular SLAM that is argued to take inspiration from neuroscience. The architecture is comprised of four components: one that estimates egomotion (HDU) much like prediction in a filtering framework; one that fuses the current image into a local 2D metric map (BVU); one that detects loop closures (PCU); and one that integrates local maps (GU). These modules along with their associated representations are learned in an end-to-end fashion. The method is trained and evaluated on simulated grid environments and compared to two visual SLAM algorithms. The contributions and significance of the paper are unclear. SLAM is arguably a solved problem at the scales considered here, with existing solutions capable of performing localization and mapping in large (city-scale), real-world environments. That aside, one can appreciate the merits of representation learning in the context of SLAM, and a handful of neural network-based approaches to SLAM and the related problem of navigation have been proposed of late. However, the paper doesn't do a sufficient job making the advantages of the proposed approach over these methods clear. Further, the paper emphasizes parallels to neuroscience models for navigation as being a contribution, however these similarities are largely hand-wavy and one could argue that they also exist for the many other SLAM algorithms that perform prediction (as in HDU), local/global mapping (as in BVU and GU) and loop closure detection (as in PCU). More fundamentally, the proposed method does not appear to account for motion or measurement noise that are inherent in any SLAM problem and, related, does not attempt to model the uncertainty in the resulting map or pose estimates. The paper evaluates the individual components of the architecture. The results suggest that the different modules are doing something reasonable, though the evaluation is rather limited (in terms of spatial scale) and a bit arbitrary (e.g., comparing local maps to the ground truth at a seemingly arbitrary 32s). The evaluation of the loop closure is limited to a qualitative measure and is therefore not convincing. The authors should quantitatively evaluate the performance of loop closure in terms of precision and recall (this is particularly important given the effects of erroneous loop closure detections and the claims that the proposed method is robust); a small sketch of such an evaluation follows these comments. Meanwhile, it isn't clear that much can be concluded from the ablation studies as there is relatively little difference in MSE between the two ablated models. Additional comments/questions: * A stated advantage of this method over that of Gupta et al. is that the agent's motion is not assumed to be known. However, it isn't clear whether and how the model incorporates motion or measurement uncertainty, which is fundamental to any SLAM (or navigation) framework. * Related, an important aspect of any SLAM algorithm is an explicit estimate of the uncertainty in the agent's pose and the map, however it doesn't seem that the proposed model attempts to express this uncertainty. * The paper claims to estimate the agent's pose as it navigates, but it is not apparent how the pose is maintained beyond estimating egomotion by comparing the current image to a local map. * Related, it is not clear how the method balances egomotion estimates and exteroceptive measurements (e.g., as are fused with traditional filtering frameworks).
There are vague references to "eliminating discrepancies" when merging measurements, but it isn't clear what this actually means, whether the output is consistent, or how the advantages of egomotion estimation and measurements are balanced. * The BVU module is stated as generating a "local" map, but it is not clear from the discussion what limits the resulting map to the area in the vicinity of the robot vs. the entire environment. * It is not clear why previous data is transformed to the agent's reference frame as a result of motion vs. the more traditional approach of transforming the agent's pose to a global reference frame. * The description of loop closure detection and the associated heuristics is confusing. For example, Section 3.4 states that the agent only considers positions that are distant from the most recent visited position as a means of avoiding trivial loop closures, however Section 3.4 states that GU provides memory vectors near the current location for loop closure classification. * The description of the GU module is confusing. How does spatial indexing deal with changes to the map (e.g., as a result of loop closures/measurement updates) or transformations to the robot's frame-of-reference? What are h, H, w, and W and how are they chosen? * The architecture assumes a discrete (and coarse) action space, whereas actions are typically continuous. Have the authors tried regressing to continuous actions or experimenting with finer discretizations that are more suitable to real applications? * It is not clear what is meant by the statement that the PU "learns to encode the representation of visited places". * The means by which the architecture is trained is unclear. What is the loss that is optimized? How is the triplet loss (Eqn. 3) incorporated (e.g., is it weighted differently than other terms in the loss)? * Section 3.2 states that the "agent has to learn to take actions to explore its surroundings", however it isn't apparent that the method reasons over the agent's policy. Indeed, this is an open area of research. Instead, the results section suggests that the agent acts randomly. * Section 4.1 draws comparisons between HDU and Head Direction Cells, however the latter estimate location/orientation whereas the former (this method) predicts egomotion. While egomotion can be integrated to estimate pose (as is done in Fig 4), these are not the same thing. * The authors are encouraged to tone down claims regarding parallels to navigation models from neuroscience as they are largely unjustified. * The comparison to existing monocular SLAM baselines is surprising and the reviewer remains skeptical regarding the stated advantages of the proposed method. How much of this difference is a result of testing in simulation? It would be more convincing to compare performance in real-world environments, for which these baselines have proven effective. * Figure 1: "Border" --> "Boundary" * Figure 1: The camera image should also go to the BVU block * Many of the citations are incorrectly not parenthesized * The paper should be proof-read for grammatical errors
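On the request for a quantitative loop-closure evaluation, precision and recall can be computed from predicted and ground-truth closure pairs as in the Python sketch below; the set-of-frame-pairs data structure and the implied matching tolerance are assumptions for illustration.

def loop_closure_pr(predicted, ground_truth):
    # predicted, ground_truth: sets of (frame_i, frame_j) loop-closure pairs
    true_positives = len(predicted & ground_truth)
    precision = true_positives / max(len(predicted), 1)
    recall = true_positives / max(len(ground_truth), 1)
    return precision, recall

# Sweeping the PCU detection threshold and plotting precision vs. recall would
# substantiate the robustness claims better than qualitative examples alone.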
iclr_2018_HJtEm4p6Z
Published as a conference paper at ICLR 2018 DEEP VOICE 3: SCALING TEXT-TO-SPEECH WITH CONVOLUTIONAL SEQUENCE LEARNING We present Deep Voice 3, a fully-convolutional attention-based neural text-to-speech (TTS) system. Deep Voice 3 matches state-of-the-art neural speech synthesis systems in naturalness while training an order of magnitude faster. We scale Deep Voice 3 to dataset sizes unprecedented for TTS, training on more than eight hundred hours of audio from over two thousand speakers. In addition, we identify common error modes of attention-based speech synthesis networks, demonstrate how to mitigate them, and compare several different waveform synthesis methods. We also describe how to scale inference to ten million queries per day on a single GPU server.
This paper provides an overview of the Deep Voice 3 text-to-speech system. It describes the system in a fair amount of detail and discusses some trade-offs w.r.t. audio quality and computational constraints. Some experimental validation of certain architectural choices is also provided. My main concern with this work is that it reads more like a tech report: it describes the workings and design choices behind one particular system in great detail, but often these choices are simply stated as fact and not really motivated, or compared to alternatives. This makes it difficult to tell which of these aspects are crucial to get good performance, and which are just arbitrary choices that happen to work okay. As this system was clearly developed with actual deployment in mind (and not purely as an academic pursuit), all of these choices must have been well-deliberated. It is unfortunate that the paper doesn't demonstrate this. I think this makes the work less interesting overall to an ICLR audience. That said, it is perhaps useful to get some insight into what types of models are actually used in practice. An exception to this is the comparison of "converters", model components that convert the model's internal representation of speech into waveforms. This comparison is particularly interesting because some of the results are remarkable, i.e. Griffin-Lim spectrogram inversion and the WORLD vocoder achieving very similar MOS scores in some cases (Table 2). I wish there would be more of that kind of thing in the paper. The comparison of attention mechanisms is also useful. I'm on the fence as I think it is nice to get some insight into a practical pipeline which benefits from many current trends in deep learning research (autoregressive models, monotonic attention, ...), but I also feel that the paper is a bit meager when it comes to motivating all the architectural aspects. I think the paper is well written so I've tentatively recommended acceptance. Other comments: - The separation of the "decoder" and "converter" stage is not entirely clear to me. It seems that the decoder is trained to predict spectrograms autoregressively, but its final layer is then discarded and its hidden representation is then used as input to the converter stage instead? The motivation for doing this is unclear to me, surely it would be better to train everything end-to-end, including the converter? This seems like an unnecessary detour, what's the reasoning behind this? - At the bottom of page 2 it is said that "the whole model is trained end-to-end, excluding the vocoder", which I think is an unfortunate turn of phrase. It's either end-to-end, or it isn't. - In Section 3.3, the point of mixing of h_k and h_e is unclear to me. Why is this done? - The gated linear unit in Figure 2a shows that speaker embedding information is only injected in the linear part. Has this been experimentally validated to work better than simpler mechanisms such as adding conditioning-dependent biases/gains? - When the decoder is trained to do autoregressive prediction of spectrograms, is it autoregressive only in time, or also in frequency? I'm guessing it's the former, but this means there is an implicit independence assumption (the intensities in different frequency bins are conditionally independent, given all past timesteps). Has this been taken into consideration? Maybe it doesn't matter because the decoder is never used directly anyway, and this is only a "feature learning" stage of sorts? - Why use the L1 loss on spectrograms? 
- The recent work on Parallel WaveNet may allow for speeding up WaveNet when used as a vocoder, this could be worth looking into seeing as inference speed is used as an argument to choose different vocoder strategies (with poorer audio quality as a result). - The title heavily emphasizes that this model can do multi-speaker TTS with many (2000) speakers, but that seems to be only a minor aspect that is only discussed briefly in the paper. And it is also something that preceding systems were already capable of (although maybe it hasn't been tested with a dataset of this size before). It might make sense to rethink the title to emphasize some of the more relevant and novel aspects of this work. ---- Revision: the authors have adequately addressed quite a few instances where I feel motivations / explanations were lacking, so I'm happy to increase my rating from 6 to 7. I think the proposed title change would also be a good idea.
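Since the near-parity of Griffin-Lim spectrogram inversion with vocoder-based synthesis is singled out above as remarkable, here is a minimal Python sketch of the classic Griffin-Lim loop using librosa's STFT/ISTFT. The FFT size, hop length and iteration count are placeholder values, not the settings used in the paper.

import numpy as np
import librosa

def griffin_lim(magnitude, n_iter=60, n_fft=1024, hop_length=256):
    # Reconstruct a waveform from a magnitude spectrogram by iterative phase estimation.
    angles = np.exp(2j * np.pi * np.random.rand(*magnitude.shape))
    for _ in range(n_iter):
        waveform = librosa.istft(magnitude * angles, hop_length=hop_length)
        rebuilt = librosa.stft(waveform, n_fft=n_fft, hop_length=hop_length)
        angles = np.exp(1j * np.angle(rebuilt))   # keep the estimated phase, discard its magnitude
    return librosa.istft(magnitude * angles, hop_length=hop_length)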
iclr_2018_ByZmGjkA-
Neural network-based systems can now learn to locate the referents of words and phrases in images, answer questions about visual scenes, and even execute symbolic instructions as first-person actors in partially-observable worlds. To achieve this so-called grounded language learning, models must overcome certain well-studied learning challenges that are also fundamental to infants learning their first words. While it is notable that models with no meaningful prior knowledge overcome these learning obstacles, AI researchers and practitioners currently lack a clear understanding of exactly how they do so. Here we address this question as a way of achieving a clearer general understanding of grounded language learning, both to inform future research and to improve confidence in model predictions. For maximum control and generality, we focus on a simple neural network-based language learning agent trained via policy-gradient methods to interpret synthetic linguistic instructions in a simulated 3D world. We apply experimental paradigms from developmental psychology to this agent, exploring the conditions under which established human biases and learning effects emerge. We further propose a novel way to visualise and analyse semantic representation in grounded language learning agents that yields a plausible algorithmic account of the observed effects.
This paper presents an analysis of an agent trained to follow linguistic commands in a 3D environment. The behaviour of the agent is analyzed by means of a set of "psycholinguistic" experiments probing what it learned, and by inspection of its visual component through an attentional mechanism. On the positive side, it is nice to read a paper that focuses on understanding what an agent is learning. On the negative side, I did not get many new insights from the analyses presented in the study. 3 A situated language learning agent I can't make out the chair from the refrigerator in the figure. 4.1 Word learning biases This experiment shows that, when an agent is trained on shapes only, it will exhibit a shape bias when tested on new shapes and colors. Conversely, when it is exposed to colors only, it will have a color bias. When the training set is balanced, the agent shows a mild bias for the simpler color property. How is this interesting or surprising? The crucial question, here, would be whether, when an agent is trained in a naturalistic environment (i.e., where distributions of colors, shapes and other properties reflect those encountered by biological agents), it would show a human-like shape bias. This, however, is not addressed in the paper. Minor comments about this section: - Was there noise also in shape generation, or were all object instances identical? - propensity to select o_2: rather o_1? - I did not follow the paragraph starting with "This effect provides". 4.2 The problem of learning negation I found this experiment very interesting. Perhaps the authors could be more explicit about the usage of negation here. The meanings of commands containing negation are, I think, conjunctions of the form "pick something and do not pick X" (as opposed to the more natural "do not pick X"). modifiation: modification 4.3 Curriculum learning Perhaps the difference in curriculum effectiveness in language modeling vs grounded language learning simulations is due to the fact that the former operates on large amounts of natural data, where it's hard to define the curriculum, while the latter are typically grounded in toy worlds with a controlled language, where it's easier to construct the curriculum. 4.4 Processing and representation differences There is virtually no discussion of what makes the naturalistic setup naturalistic, and thus it's not clear which conclusions we should derive from the corresponding experiments. Also, I don't see what we should learn from Figure 5 (besides the fact that in the controlled condition shapes are easier than categories). For the naturalistic condition, the current figure is misleading, since different classes contain different numbers of instances. It would be better to report proportions. Concerning the attention analysis, it seems to me that all it's saying is that lower layers of a CNN detect lower-level properties such as colors, higher layers detect more complex properties, such as shapes characterizing objects. What is novel here? Also, since introducing attention changes the architecture, shouldn't the paper report the learning behaviour of the attention-augmented network? The explanation of the attention mechanism is dense, and perhaps could be aided by a diagram (in the supplementary materials?). I think the description uses "length" when "dimensional(ity)" is meant. 6.
Supplementary material It would be good to have an explicit description of the architecture, including number of layers of the various components, structure of the CNN, non-linearities, dimensionality of the layers, etc. (some of this information is inconsistently provided in the paper). It's interesting that the encoder is actually a BOW model. This should be discussed in the paper, as it raises concerns about the linguistic interest of the controlled language that was used. Table 3: indicates is: indicates if
iclr_2018_SyELrEeAb
Progress in probabilistic generative models has accelerated, developing richer models with neural architectures, implicit densities, and with scalable algorithms for their Bayesian inference. However, there has been limited progress in models that capture causal relationships, for example, how individual genetic factors cause major human diseases. In this work, we focus on two challenges in particular: How do we build richer causal models, which can capture highly nonlinear relationships and interactions between multiple causes? How do we adjust for latent confounders, which are variables influencing both cause and effect and which prevent learning of causal relationships? To address these challenges, we synthesize ideas from causality and modern probabilistic modeling. For the first, we describe implicit causal models, a class of causal models that leverages neural architectures with an implicit density. For the second, we describe an implicit causal model that adjusts for confounders by sharing strength across examples. In experiments, we scale Bayesian inference on up to a billion genetic measurements. We achieve state of the art accuracy for identifying causal factors: we significantly outperform existing genetics methods by an absolute difference of 15-45.3%.
In this paper, the authors propose to use the so-called implicit model to tackle the Genome-Wide Association problem. The model can be viewed as a variant of a Structural Equation Model. Overall the paper is interesting and relatively well-written, but some important details are missing and many more experiments need to be done to show the effectiveness of the approach. * How do the authors decide whether a variant is associated with the phenotype (y)? More specifically, what is the distribution under the null hypothesis? Section D.3 in the appendix does not explain the hypothesis testing part well. This method models $x$ (genetic), $y$ (phenotype), and $z$ (confounder) but does not have a latent variable for the association. For example, there is no latent indicator variable (e.g., Spike-Slab models [1]) for each variant. Did they do hypothesis testing separately after they fit the model? If so, this has a double-dipping problem because the data is used once to fit the model and again to perform statistical inference. * In GWAS, a method resulting in high power with control of FP is favored. In traditional univariate GWAS, the false positive rate is controlled by a genome-wide significance level (7e-8), Bonferroni correction or other FP control approaches. Why does Table 1 not report FP? I need Table 1 to report the following: What is the power of this method if the FPR is controlled (False Positive Rate < 0.05)? Also, the ROC curve for FPR < 0.05 should be reported for all methods. (A small sketch of such an evaluation follows this review.) * I believe that the authors did a good job in terms of surveying the available models for GWAS, from marginal regression to mixed-effects models, etc. The authors account for typical confounders such as cryptic relatedness, which I liked. However, I recommend the authors to be cautious about calling the association detected by their method "a Causal Association." There is a large body of research aimed at understanding the causal effect of genetic variants, and this paper (and this venue) is not addressing it. There are several ways for an associated variant to be non-causal and this paper does not even scratch the surface of that. For example, in many studies, discovering the causal SNPs means finding a genetic variant among SNPs that are in LD with each other (so-called fine mapping). The LD-pruning procedure proposed in this paper does not help for that purpose. * This approach jointly models the genetic variants and the phenotype (y). Let us assume that one can directly maximize the ML (the ELBO is a lower bound on the ML). The objective function is disproportionately influenced by the genetic variants (x) relative to y because M is very large ( $\prod_{m=1}^M p(w) p(x|z,w,\phi) \gg p(z) p(y|x,z,\theta)$ ). Effectively, the model focuses on the genetic variants, not on the disease. This is why multivariate GWAS focuses on the conditional p(y|x,z) and not p(y,x,z). Nothing in the paper shows that focusing on p(y,x,z) is advantageous over p(y|x,z). * In this paper, the authors use deep neural networks to model general functional causal models. Since estimation of the causal effects is generally unidentifiable (Spirtes 1993), I think using a general functional causal model with confounder modeling would have a larger chance of weakening the causal effects, because the confounder part can also explain part of the causal influences. Is there a theoretical guarantee for the proposed method? Practically, how did the authors control the model complexity to avoid trivial solutions?
Minor ------- * The idea of representing (conditional) densities by neural networks was proposed in generative adversarial networks (GANs). In this paper, the authors represent the functional causal models by neural networks, which is closely related to the representation used in GANs. The only difference is that a GAN does not specify a causal interpretation. I suggest the authors add a short discussion of the relation to GANs. * Previous methods for causal discovery rely on restricted functional causal models for identifiability results. They also use Gaussian processes or multi-layer perceptrons to model the functions implicitly, which can be considered as neural networks with one hidden layer. The sentence “These models typically focus on the task of causal discovery, and they assume fixed nonlinearities or smoothness which we relax using neural networks.” in the related work section is not appropriate. [1] Scalable Variational Inference for Bayesian Variable Selection in Regression, and Its Accuracy in Genetic Association Studies
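To make the request for FPR-controlled power concrete, here is a minimal Python sketch of how power at a fixed false positive rate could be reported; the p-value array, the causal-variant labels and the empirical-quantile thresholding are illustrative assumptions, not the paper's evaluation code.

import numpy as np

def power_at_fpr(pvalues, is_causal, fpr=0.05):
    # Power of a GWAS method when the per-variant p-value threshold is chosen so
    # that the empirical false positive rate on non-causal variants is about `fpr`.
    pvalues = np.asarray(pvalues, dtype=float)
    is_causal = np.asarray(is_causal, dtype=bool)
    null_p = np.sort(pvalues[~is_causal])
    threshold = null_p[min(int(fpr * len(null_p)), len(null_p) - 1)]
    return (pvalues[is_causal] < threshold).mean()

# Alternatively, a Bonferroni-style cutoff for M tested variants would simply be
# threshold = 0.05 / M.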
iclr_2018_B1hcZZ-AW
Published as a conference paper at ICLR 2018 N2N LEARNING: NETWORK TO NETWORK COMPRESSION VIA POLICY GRADIENT REINFORCEMENT LEARNING While wider and deeper neural network architectures continue to advance the state-of-the-art for many computer vision tasks, real-world adoption of these networks is impeded by hardware and speed constraints. Conventional model compression methods attempt to address this problem by modifying the architecture manually or using pre-defined heuristics. Since the space of all reduced architectures is very large, modifying the architecture of a deep neural network in this way is a difficult task. In this paper, we tackle this issue by introducing a principled method for learning reduced network architectures in a data-driven way using reinforcement learning. Our approach takes a larger 'teacher' network as input and outputs a compressed 'student' network derived from the 'teacher' network. In the first stage of our method, a recurrent policy network aggressively removes layers from the large 'teacher' model. In the second stage, another recurrent policy network carefully reduces the size of each remaining layer. The resulting network is then evaluated to obtain a reward -a score based on the accuracy and compression of the network. Our approach uses this reward signal with policy gradients to train the policies to find a locally optimal student network. Our experiments show that we can achieve compression rates of more than 10× for models such as ResNet-34 while maintaining similar performance to the input 'teacher' network. We also present a valuable transfer learning result which shows that policies which are pre-trained on smaller 'teacher' networks can be used to rapidly speed up training on larger 'teacher' networks.
Summary: The manuscript introduces a principled way of network-to-network compression, which uses policy gradients for optimizing two policies which compress a strong teacher into a strong but smaller student model. The first policy, specialized in architecture selection, iteratively removes layers, starting with the architecture of the teacher model. After the first policy is finished, the second policy reduces the size of each layer by iteratively outputting shrinkage ratios for hyperparameters such as kernel size or padding. This organization of the action space, together with a smart reward design, achieves impressive compression results, given that this approach automates tedious architecture selection. The reward design favors low compression/high accuracy over high compression/low performance while the reward still monotonically increases with both compression and accuracy. As a bonus, the authors also demonstrate how to include hard constraints such as parameter count limitations into the reward model and show that policies trained on small teachers generalize to larger teacher models. Review: The manuscript describes the proposed algorithm in great detail and the description is easy to follow. The experimental analysis of the approach is very convincing and confirms the authors’ claims. Using the teacher network as starting point for the architecture search is a good choice, as initialization strategies are a critical component in knowledge distillation. I am looking forward to seeing work on the research goals outlined in the Future Directions section. A few questions/comments: 1) I understand that L_{1,2} in Algorithm 1 correspond to the number of layers in the network, but what do N_{1,2} correspond to? Are these multiple rollouts of the policies? If so, shouldn’t the parameter update theta_{{shrink,remove},i} be outside the loop over N and apply the average over rollouts according to Equation (2)? I think I might have missed something here. (A generic sketch of such an averaged update follows this review.) 2) Minor: some of the citations are a bit awkward, e.g. on page 7: “algorithm from Williams Williams (1992)”. I would use the \citet command from natbib for such citations and \citep for parenthesized citations, e.g. “... incorporate dark knowledge (Hinton et al., 2015)” or “The MNIST (LeCun et al., 1998) dataset...” 3) In Section 4.6 (the transfer learning experiment), it would be interesting to compare the performance measures for different numbers of policy update iterations. 4) Appendix: Section 8 states “Below are the results”, but the figure landed on the next page. I would either try to force the figures to be output at that position (not in or after Section 9) or write "Figures X-Y show the results". Also in Section 11, Figure 13 should be referenced with the \ref command 5) Just to get a rough idea of training time: Could you share how long some of the experiments took with the setup you described (using 4 TitanX GPUs)? 6) Did you use data augmentation for both teacher and student models in the CIFAR10/100 and Caltech256 experiments? 7) What is the threshold you used to decide if the size of the FC layer input yields a degenerate solution? Overall, this manuscript is a submission of exceptional quality and if minor details of the experimental setup are added to the manuscript, I would consider giving it the full score.
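To make question 1 concrete, the update the reviewer has in mind would average the REINFORCE gradient over the N sampled rollouts before applying a single parameter update, roughly as in the Python sketch below. This is a generic policy-gradient form with assumed names and a scalar baseline, not the authors' Algorithm 1 or their reward definition.

import torch

def reinforce_update(optimizer, rollouts, baseline=0.0):
    # rollouts: list of (log_probs, reward) pairs, one per sampled student architecture
    # log_probs: tensor of per-action log-probabilities for that rollout
    # reward:    scalar reward (accuracy/compression score) of the resulting student
    losses = []
    for log_probs, reward in rollouts:
        losses.append(-(reward - baseline) * log_probs.sum())
    loss = torch.stack(losses).mean()      # average over the N rollouts, then one update
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()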
iclr_2018_SJw03ceRW
Conventional deep learning classifiers are static in the sense that they are trained on a predefined set of classes and learning to classify a novel class typically requires re-training. In this work, we address the problem of Low-Shot network-expansion learning. We introduce a learning framework which enables expanding a pre-trained (base) deep network to classify novel classes when the number of examples for the novel classes is particularly small. We present a simple yet powerful distillation method where the base network is augmented with additional weights to classify the novel classes, while keeping the weights of the base network unchanged. We term this learning hard distillation, since we preserve the response of the network on the old classes to be equal in both the base and the expanded network. We show that since only a small number of weights needs to be trained, the hard distillation excels for low-shot training scenarios. Furthermore, hard distillation avoids detriment to classification performance on the base classes. Finally, we show that low-shot network expansion can be done with a very small memory footprint by using a compact generative model of the base classes training data with only a negligible degradation relative to learning with the full training set.
The paper proposes a method for adapting a pre-trained network, trained on a fixed number of classes, to incorporate novel classes for doing classification, especially when the novel classes only have a few training examples available. They propose to do a `hard' distillation, i.e. they introduce new nodes and parameters to the network to add the new classes, but only fine-tune the new weights without modifying the original parameters. This ensures that, in the new expanded and fine-tuned network, the class confusions will only be between the old and new classes and not between the old classes, thus avoiding catastrophic forgetting. In addition they use GMMs trained on the old classes during the fine-tuning process, thus avoiding saving all the original training data. They show experiments on public benchmarks with three different scenarios, i.e. base and novel classes from different domains, base and novel classes from the same domain where the novel classes have similarities among themselves, and base and novel classes from the same domain where each novel class has similarities with at least one of the base classes. - The paper is generally well written and it is clear what is being done - The idea is simple and novel; to the best of my knowledge it has not been tested before - The method is compared with Nearest Class Means (NCM) and Prototype-kNN with soft distillation (iCARL; where all weights are fine-tuned). The proposed method performs better in low-shot settings and comparably when a large number of training examples of the novel classes is available - My main criticism is the limited dataset size on which the method is validated. The ILSVRC12 subset contains 5 base and 5 novel classes and the UT-Zappos50K subset also has 10 classes. The idea is simple and novel, which is good, but the validation is limited and far from any realistic use. Having only O(10) classes is not convincing, especially when the datasets used do have a large number of classes. I agree that this may not be straightforward, or may take some involved manual effort to curate subsets for the settings proposed, but it is necessary for the evaluation to be convincing.
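As an illustration of the hard-distillation idea described above (frozen base network and classifier, trainable weights only for the novel classes), here is a minimal PyTorch sketch. The module structure, the layer names, and the assumption that the base head is a single linear layer are illustrative choices, not the paper's architecture.

import torch
import torch.nn as nn

class ExpandedClassifier(nn.Module):
    def __init__(self, base_feature_extractor, base_head, num_novel):
        super().__init__()
        self.features = base_feature_extractor        # pre-trained, frozen
        self.base_head = base_head                    # original class logits, frozen (assumed nn.Linear)
        self.novel_head = nn.Linear(base_head.in_features, num_novel)
        for p in list(self.features.parameters()) + list(self.base_head.parameters()):
            p.requires_grad = False                   # "hard" distillation: base responses stay unchanged

    def forward(self, x):
        h = self.features(x)
        return torch.cat([self.base_head(h), self.novel_head(h)], dim=1)

# Only self.novel_head.parameters() would be passed to the optimizer, so the
# logits of the base classes are identical in the base and expanded networks.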